Beijing Union University Graduation Design (Thesis) Task Sheet
Title: Design and Simulation of OFDM Modulation and Demodulation Technology
Major: Communication Engineering    Supervisor: Zhang Xuefen    School: College of Information
Student ID: 2011080331132    Class: 1101B    Name: Xu Jiaming

I. Original Foreign Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective
Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance, and user coverage in hot-spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.

I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunication Union (ITU) in November 2010, and fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization of LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times more connected devices and higher user data rates, 10 times longer battery life, and 5 times lower latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks.

5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and radio access technologies (RATs), accessed by an unprecedented number of smart and heterogeneous wireless devices.
This architectural enhancement, along with advanced physical-layer technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or a higher level of spectral efficiency, when compared to 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks. The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look at the interference management problem will be required.

First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion of their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.

·Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps in the downlink and 60 Mbps in the uplink in 95% of locations and time [2]. The end-to-end latencies are expected to be on the order of 2 to 5 milliseconds. The detailed requirements for different scenarios are listed in [2].

·Machine-type communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices, which can be used in vehicles, home appliances, surveillance devices, and sensors.

·Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at much wider bandwidths than the conventional 20 MHz channels of 4G systems.

·Multiple RATs: 5G is not about replacing the existing technologies, but about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).

·Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than in today's macrocell networks.
Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links. The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interference is well managed.

·Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the roles of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, the macrocell users at the cell edge typically transmit with high power, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should be reversed. Another example is D2D transmission, where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs whereas the cellular users play the role of HPUEs.

·Network-assisted D2D communication: In LTE Rel-12 and beyond, the focus will be on network-controlled D2D communication, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration, and providing identity and security management [3]. This feature will extend in 5G networks to allow nodes other than the macrocell BS to have control. For example, consider a D2D link at the cell edge where the direct link between the D2D transmitter UE and the macrocell is in deep fade; then the relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).

·Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of battery-constrained wireless devices. To prolong battery lifetime as well as to improve energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available, since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4].
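To put these receiver sensitivities in perspective, the standard dBm-to-milliwatt identity (a textbook conversion, not a result from [4]) shows how far apart the two thresholds sit:

```latex
P_{\mathrm{mW}} = 10^{P_{\mathrm{dBm}}/10}
\;\Rightarrow\;
-10~\mathrm{dBm} = 0.1~\mathrm{mW},
\qquad
-60~\mathrm{dBm} = 10^{-6}~\mathrm{mW} = 1~\mathrm{nW}.
```

The energy receiver thus needs roughly 50 dB, i.e. a factor of 10^5, more incident power than the information receiver, which is why a receive chain designed purely for information decoding is a poor fit for SWIPT.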
Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.

III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise for the following reasons, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to the varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point (CoMP) transmission), as well as direct communication among users (e.g., D2D communication) may further complicate the interference dynamics. The above factors translate into the following key challenges.

Fig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.

·Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink, or the transmit powers of BSs in the downlink, are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize power (and hence the interference to other links) while maintaining the desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the load status of each BS and the channel state of each UE. The increase in the number of available BSs, along with multi-point transmission and carrier aggregation, provides multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priorities of the different tiers also need to be maintained by incorporating the quality constraints of HPUEs. Unlike in the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS with which the user is connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink.
Moreover, to deal with this issue of asymmetry, separate optimal solutions for uplink and downlink are also useful, as long as mobile users can connect to two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].

·Designing efficient methods to support simultaneous association with multiple BSs: In contrast to existing CAPC schemes, in which each user can associate with a single BS, simultaneous connectivity to several BSs could be possible in 5G multi-tier networks. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell-edge users. Thus, the existing CAPC schemes should be extended to efficiently support the simultaneous association of a user with multiple BSs and to determine under which conditions a given UE is associated with which BSs in the uplink and/or downlink.

·Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of the soft cell, where UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and a small cell for uplink and downlink communications, or vice versa [3]. As mentioned before in the context of the asymmetry of transmission power in the uplink and downlink, a UE may experience the highest downlink transmission power from the macrocell, whereas the highest uplink path gain may be toward a nearby small cell. In this case, the UE can associate with the macrocell in the downlink and with the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through reliable, fast, and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks.

In the remainder of this article, we focus on a review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on location, application requirements, and so on). Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antennas) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes currently under investigation for multi-tier cellular networks are reviewed, and their limitations explained, below.

·Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength.
A variant of RSRP, Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; it is similar to signal-to-interference ratio (SIR)-based cell selection, where a user selects the BS that gives the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.

·Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power (biased) BSs and thereby achieves better cell load balancing. Nevertheless, such off-loaded users may experience unfavorable channel conditions from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore depends strictly on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.

·Association based on Almost Blank Subframe (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization, in which specific subframes are left blank by the unbiased BS and off-loaded users are scheduled within these subframes to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time subframes and throughput of the unbiased BS. Larger bias values result in a higher degree of offloading and thus require more blank subframes to protect the offloaded users. Given a specific number of ABSs, or the ratio of blank subframes to the total number of subframes (i.e., the ABS ratio) that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio, and the user may even associate with the unbiased BS if the ABS ratio decreases significantly.

A qualitative comparison among these cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on knowledge of the instantaneous channel and transmit power at the receiver; interference-aware schemes depend on knowledge of the instantaneous interference at the receiver; load-aware schemes depend on traffic load information (e.g., the number of users); resource-aware schemes require resource allocation information (i.e., the chance of getting a channel, or the proportion of resources available in a cell); and priority-aware schemes require information regarding the priority of different tiers and allow protection of HPUEs. All of the above-mentioned schemes are independent, distributed, and can be combined with any type of power control scheme.
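As a concrete illustration of how biasing changes the association decision, here is a minimal Python sketch of the biased-RSRP rule described above. The BS names, RSRP values, and the 12 dB bias are made-up illustrative numbers, not values taken from [6]:

```python
# Minimal sketch of bias-based cell range expansion (CRE).
# A user picks the BS maximizing RSRP (dBm) plus a per-BS bias (dB);
# small cells get a positive bias to widen their downlink footprint.

def associate(rsrp_dbm: dict, bias_db: dict) -> str:
    """Return the id of the BS with the largest biased RSRP."""
    return max(rsrp_dbm, key=lambda bs: rsrp_dbm[bs] + bias_db.get(bs, 0.0))

# The macro is received 9 dB stronger, so plain RSRP keeps the user there;
# a 12 dB bias on the picocell offloads the user despite the weaker signal.
rsrp = {"macro": -80.0, "pico": -89.0}
print(associate(rsrp, {}))              # -> macro (plain RSRP rule)
print(associate(rsrp, {"pico": 12.0}))  # -> pico  (CRE offloading)
```

The offloaded user then sees a weaker serving signal plus strong macro interference, which is exactly why the bias value has to be optimized jointly with a protection mechanism such as ABS.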
Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as the bias values, the transmit powers of the users in the uplink and of the BSs in the downlink, resource partitioning, etc., are optimized.

TABLE I: QUALITATIVE COMPARISON OF EXISTING CELL ASSOCIATION SCHEMES FOR MULTI-TIER NETWORKS

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support the user with its minimum acceptable throughput, whereas from the system's point of view it is to maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels than to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels, and very low (even zero) power levels are allocated to the others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of the achievable rates of the UEs) are the most important measures for comparing the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs in that tier that cannot be supported at their minimum target-SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to their objective functions and assumptions, the schemes can be classified into the following four types.

·Target-SIR-tracking power control (TPC) [8]: In TPC, each UE tracks its own predefined fixed target-SIR. TPC enables the UEs to achieve their fixed target-SIRs at minimal aggregate transmit power, assuming that the target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those that cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users, and therefore increases the number of non-supported UEs.

·TPC with gradual removal (TPC-GR) [9], [10], [11]: To decrease the outage ratio of TPC in an infeasible system, a number of TPC-GR algorithms were proposed in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].

·Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.

·Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements of the users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service, for which reaching an SIR value higher than the given target does not affect the service quality significantly. In contrast, for data services, a higher SIR results in a better throughput, which is desirable.
The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound on the achieved SIRs of all users in cellular networks. In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to values higher than their minimum acceptable target-SIRs) in a distributed manner, as long as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs for the remaining users is guaranteed). This enhances the system throughput (at the cost of higher power consumption) as compared to TPC.

The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is due to the fact that they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage for some HPUEs. Thus, there is a need to modify the existing schemes such that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at the HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, and their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed CAPC jointly (e.g., [14]) with guaranteed convergence. For single-tier networks, a distributed framework for the uplink was developed in [14]; it performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme, which is a combination of the TPC and OPC power control algorithms.

Although the above frameworks are distributed and optimal/suboptimal with guaranteed convergence in conventional networks, they may not be directly compatible with 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), the QoS requirements, and the priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison among the existing CAPC schemes, along with the open research areas, is provided in Table II. A discussion of how these open problems can be addressed is given in the next section.
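The contrast between TPC and OPC is easiest to see in update-equation form. The toy two-user simulation below (all gains, targets, and constants are made-up illustrative numbers) implements the classic Foschini-Miljanic TPC iteration, p_i <- (target-SIR / current-SIR) * p_i, alongside an opportunistic rule in which power is set inversely proportional to the effective interference, so good channels get more power. This is a sketch of the behavior described above, not the exact algorithms of [8] or [12]:

```python
# Toy 2-user uplink: SIR_i = g[i][i]*p[i] / (sum_{j!=i} g[i][j]*p[j] + noise).
# TPC tracks a fixed target-SIR; OPC gives power to whoever has the better
# effective channel (low effective interference R_i = interference / g[i][i]).

g = [[1.0, 0.1],
     [0.2, 0.8]]           # g[i][j]: gain from transmitter j to receiver i
noise = 0.01
target = [2.0, 2.0]        # feasible target-SIRs for TPC
eta = [0.05, 0.05]         # OPC aggressiveness constants

def eff_interference(p, i):
    interf = sum(g[i][j] * p[j] for j in range(len(p)) if j != i) + noise
    return interf / g[i][i]

def run(rule, p=(1.0, 1.0), steps=100):
    p = list(p)
    for _ in range(steps):
        p = [rule(p, i) for i in range(len(p))]
    return p

tpc = run(lambda p, i: target[i] * eff_interference(p, i))  # p_i = gamma_i * R_i
opc = run(lambda p, i: eta[i] / eff_interference(p, i))     # p_i = eta_i / R_i
print("TPC powers:", tpc)  # converges so both users sit exactly at SIR = 2.0
print("OPC powers:", opc)  # the better link takes power; the weaker backs off
```

With feasible targets, TPC converges to the minimal-power solution; OPC instead trades fairness for aggregate throughput, which is the contrast the comparison in Table II builds on.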
V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (which can be different for uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions for modifying the existing schemes.

A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected from the interference caused by the LPUEs, the LPUEs can employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power to track their objectives, limit that power to keep their interference at the receivers of the HPUEs below a given threshold. This could be implemented by sending a command from an HPUE to its nearby LPUEs (like the closed-loop power control commands used to address the near-far problem) when the interference caused by the LPUEs to that HPUE exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in the second column of Table II.

To compare the performance of the existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macrocells, each of which covers an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per macrocell, each …
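One way to make a TPC-style update "prioritized" in the sense just described is to clip each LPUE's power whenever its contribution to the interference measured at a protected HPUE receiver would exceed the threshold. The sketch below is only a schematic of that idea, with a single protected receiver and made-up numbers; it is not the algorithm of any of the cited papers:

```python
# Schematic prioritized power control step for one LPUE (illustrative values).
# The LPUE runs a plain target-SIR-tracking update, but clips its power so the
# interference it causes at a protected HPUE receiver stays below i_max.

def prioritized_step(p, sir, target_sir, g_to_hpue, i_max, p_max=1.0):
    p_track = target_sir / sir * p     # TPC step toward the LPUE's own target
    p_cap = i_max / g_to_hpue          # largest power keeping HPUE interference <= i_max
    return min(p_track, p_cap, p_max)  # obey the priority cap and the device limit

# The tracking step asks for 0.8, but the HPUE constraint caps power at 0.4:
print(prioritized_step(p=0.2, sir=1.0, target_sir=4.0, g_to_hpue=0.05, i_max=0.02))
# -> 0.4: the LPUE sacrifices its own target to protect the high-priority user.
```

In a network-wide iteration, the cap would be driven by the closed-loop command from the HPUE mentioned above rather than by a gain g_to_hpue known in advance.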
Programmable Logic Controller

A programmable logic controller (PLC) or programmable controller is a digital computer used for the automation of electromechanical processes, such as control of machinery on factory assembly lines, amusement rides, or lighting fixtures. PLCs are used in many industries and machines. Unlike general-purpose computers, the PLC is designed for multiple input and output arrangements, extended temperature ranges, immunity to electrical noise, and resistance to vibration and impact. Programs to control machine operation are typically stored in battery-backed or non-volatile memory. A PLC is an example of a real-time system, since output results must be produced in response to input conditions within a bounded time; otherwise unintended operation will result.

1. History

The PLC was invented in response to the needs of the American automotive manufacturing industry. Programmable logic controllers were initially adopted by the automotive industry, where software revision replaced the re-wiring of hard-wired control panels when production models changed.

Before the PLC, the control, sequencing, and safety interlock logic for manufacturing automobiles was implemented using hundreds or thousands of relays, cam timers, drum sequencers, and dedicated closed-loop controllers. The process of updating such facilities for the yearly model change-over was very time-consuming and expensive, as electricians needed to individually rewire each and every relay.

In 1968 GM Hydramatic (the automatic transmission division of General Motors) issued a request for proposals for an electronic replacement for hard-wired relay systems. The winning proposal came from Bedford Associates of Bedford, Massachusetts. The first PLC, designated the 084 because it was Bedford Associates' eighty-fourth project, was the result. Bedford Associates started a new company dedicated to developing, manufacturing, selling, and servicing this new product: Modicon, which stood for MOdular DIgital CONtroller. One of the people who worked on that project was Dick Morley, who is considered to be the "father" of the PLC. The Modicon brand was sold in 1977 to Gould Electronics, later acquired by the German company AEG, and then by Schneider Electric of France, the current owner.

One of the very first 084 models built is now on display at Modicon's headquarters in North Andover, Massachusetts. It was presented to Modicon by GM when the unit was retired after nearly twenty years of uninterrupted service. Modicon used the 84 moniker at the end of its product range until the 984 made its appearance.

The automotive industry is still one of the largest users of PLCs.

2. Development

Early PLCs were designed to replace relay logic systems. These PLCs were programmed in "ladder logic", which strongly resembles a schematic diagram of relay logic. This program notation was chosen to reduce training demands for existing technicians. Other early PLCs used a form of instruction list programming based on a stack-based logic solver.

Modern PLCs can be programmed in a variety of ways, from ladder logic to more traditional programming languages such as BASIC and C. Another method is State Logic, a very high-level programming language designed to program PLCs based on state transition diagrams.

Many early PLCs did not have accompanying programming terminals capable of graphical representation of the logic, so the logic was instead represented as a series of logic expressions in some version of Boolean format, similar to Boolean algebra.
As programming terminals evolved, it became more common for ladder logic to be used, for the aforementioned reasons. Newer formats such as State Logic and Function Block (which is similar to the way logic is depicted when using digital integrated logic circuits) exist, but they are still not as popular as ladder logic. A primary reason for this is that PLCs solve the logic in a predictable and repeating sequence, and ladder logic allows the programmer (the person writing the logic) to see any issues with the timing of the logic sequence more easily than would be possible in other formats.

2.1 Programming

Early PLCs, up to the mid-1980s, were programmed using proprietary programming panels or special-purpose programming terminals, which often had dedicated function keys representing the various logical elements of PLC programs. Programs were stored on cassette tape cartridges. Facilities for printing and documentation were very minimal due to lack of memory capacity. The very oldest PLCs used non-volatile magnetic core memory.

More recently, PLCs are programmed using application software on personal computers. The computer is connected to the PLC through Ethernet, RS-232, RS-485, or RS-422 cabling. The programming software allows entry and editing of the ladder-style logic. Generally the software provides functions for debugging and troubleshooting the PLC software, for example by highlighting portions of the logic to show current status during operation or via simulation. The software will upload and download the PLC program, for backup and restoration purposes. In some models of programmable controller, the program is transferred from a personal computer to the PLC through a programming board, which writes the program into a removable chip such as an EEPROM or EPROM.

3. Functionality

The functionality of the PLC has evolved over the years to include sequential relay control, motion control, process control, distributed control systems, and networking. The data handling, storage, processing power, and communication capabilities of some modern PLCs are approximately equivalent to those of desktop computers. PLC-like programming combined with remote I/O hardware allows a general-purpose desktop computer to overlap some PLCs in certain applications. Regarding the practicality of these desktop-computer-based logic controllers, it is important to note that they have not been generally accepted in heavy industry, because desktop computers run on less stable operating systems than PLCs do, and because desktop computer hardware is typically not designed to the same levels of tolerance to temperature, humidity, vibration, and longevity as the processors used in PLCs. In addition to the hardware limitations of desktop-based logic, operating systems such as Windows do not lend themselves to deterministic logic execution, with the result that the logic may not always respond to changes in logic state or input status with the extreme timing consistency expected of PLCs. Still, such desktop logic applications find use in less critical situations, such as laboratory automation and small facilities where the application is less demanding and critical, because they are generally much less expensive than PLCs.

In more recent years, small products called PLRs (programmable logic relays), also sold under similar names, have become more common and accepted. These are very much like PLCs and are used in light industry where only a few points of I/O (i.e.
a few signals coming in from the real world and a few going out) are involved, and low cost is desired. These small devices are typically made in a common physical size and shape by several manufacturers, and branded by the makers of larger PLCs to fill out their low-end product range. Popular names include PICO Controller, NANO PLC, and other names implying very small controllers. Most of these have between 8 and 12 digital inputs, 4 to 8 digital outputs, and up to 2 analog inputs. Size is usually about 4" wide, 3" high, and 3" deep. Most such devices include a postage-stamp-sized LCD screen for viewing simplified ladder logic (only a very small portion of the program being visible at a given time) and the status of I/O points; typically these screens are accompanied by a 4-way rocker push-button plus four more separate push-buttons, similar to the key buttons on a VCR remote control, used to navigate and edit the logic. Most have a small plug for connecting via RS-232 or RS-485 to a personal computer, so that programmers can use simple Windows applications for programming instead of being forced to use the tiny LCD and push-button set for this purpose. Unlike regular PLCs, which are usually modular and greatly expandable, PLRs are usually not modular or expandable, but their price can be two orders of magnitude less than that of a PLC, and they still offer robust design and deterministic execution of the logic.

4. PLC Topics

Features

The main difference from other computers is that PLCs are armored for severe conditions (such as dust, moisture, heat, and cold) and have the facility for extensive input/output (I/O) arrangements. These connect the PLC to sensors and actuators. PLCs read limit switches, analog process variables (such as temperature and pressure), and the positions of complex positioning systems. Some use machine vision. On the actuator side, PLCs operate electric motors, pneumatic or hydraulic cylinders, magnetic relays, solenoids, or analog outputs. The input/output arrangements may be built into a simple PLC, or the PLC may have external I/O modules attached to a computer network that plugs into the PLC.

System scale

A small PLC will have a fixed number of connections built in for inputs and outputs. Typically, expansions are available if the base model has insufficient I/O. Modular PLCs have a chassis (also called a rack) into which are placed modules with different functions. The processor and the selection of I/O modules are customized for the particular application. Several racks can be administered by a single processor and may have thousands of inputs and outputs. A special high-speed serial I/O link is used so that racks can be distributed away from the processor, reducing the wiring costs for large plants.

User interface

PLCs may need to interact with people for the purposes of configuration, alarm reporting, or everyday control. A simple system may use buttons and lights to interact with the user. Text displays are available, as well as graphical touch screens. More complex systems use programming and monitoring software installed on a computer, with the PLC connected via a communication interface.

Communications

PLCs have built-in communications ports, usually 9-pin RS-232, but optionally EIA-485 or Ethernet. Modbus, BACnet, or DF1 is usually included as one of the communications protocols. Other options include various fieldbuses such as DeviceNet or Profibus.
Other communications protocols that may be used are listed in the List of automation protocols. Most modern PLCs can communicate over a network to some other system, such as a computer running a SCADA (Supervisory Control And Data Acquisition) system or a web browser.

PLCs used in larger I/O systems may have peer-to-peer (P2P) communication between processors. This allows separate parts of a complex process to have individual control while allowing the subsystems to coordinate over the communication link. These communication links are also often used for HMI devices such as keypads or PC-type workstations.

Programming

PLC programs are typically written in a special application on a personal computer, then downloaded over a direct-connection cable or a network to the PLC. The program is stored in the PLC either in battery-backed RAM or in some other non-volatile flash memory. Often, a single PLC can be programmed to replace thousands of relays.

Under the IEC 61131-3 standard, PLCs can be programmed using standards-based programming languages. A graphical programming notation called Sequential Function Charts is available on certain programmable controllers. Initially most PLCs utilized ladder logic diagram programming, a model that emulated the electromechanical control panel devices (such as the contacts and coils of relays) which PLCs replaced. This model remains common today.

IEC 61131-3 currently defines five programming languages for programmable control systems: FBD (Function block diagram), LD (Ladder diagram), ST (Structured text, similar to the Pascal programming language), IL (Instruction list, similar to assembly language), and SFC (Sequential function chart). These techniques emphasize logical organization of operations.

While the fundamental concepts of PLC programming are common to all manufacturers, differences in I/O addressing, memory organization, and instruction sets mean that PLC programs are never perfectly interchangeable between different makers. Even within the same product line of a single manufacturer, different models may not be directly compatible.

5. LADDER LOGIC FUNCTIONS

Topics:
• Functions for data handling, mathematics, conversions, array operations, statistics, comparison and Boolean operations.
• Design examples

Objectives:
• To understand basic functions that allow calculations and comparisons
• To understand array functions using memory files

5.1 INTRODUCTION

Ladder logic input contacts and output coils allow simple logical decisions. Functions extend basic ladder logic to allow other types of control. For example, the addition of timers and counters allowed event-based control. A longer list of functions is shown in Figure 5.1. Combinatorial logic and event functions have already been covered. This chapter will discuss data handling and numerical logic. The next chapter will cover lists, program control, and some of the input and output functions. The remaining functions will be discussed in later chapters.

Combinatorial Logic
- relay contacts and coils
Events
- timer instructions
- counter instructions
Data Handling
- moves
- mathematics
- conversions
Numerical Logic
- boolean operations
- comparisons
Lists
- shift registers/stacks
- sequencers
Program Control
- branching/looping
- immediate inputs/outputs
- fault/interrupt detection
Input and Output
- PID
- communications
- high speed counters
- ASCII string functions

Figure 5.1 Basic PLC Function Categories

Most of the functions will use PLC memory locations to get values, store values, and track function status.
Most functions normally become active when their input is true. But some functions, such as TOF timers, can remain active when the input is off. Other functions will only operate when the input goes from false to true; this is known as positive edge triggering. Consider a counter that only counts when the input goes from false to true: the length of time the input stays true does not change the function's behavior. A negative-edge-triggered function would be triggered when the input goes from true to false. Most functions are not edge triggered; unless stated otherwise, assume functions are not edge triggered.

NOTE: I do not draw functions exactly as they appear in manuals and programming software. This helps save space and makes the instructions somewhat easier to read. All of the necessary information is given.

5.2 DATA HANDLING

5.2.1 Move Functions

There are two basic types of move functions:

MOV(value,destination) - moves a value to a memory location
MVM(value,mask,destination) - moves a value to a memory location, but with a mask to select specific bits.

The simple MOV will take a value from one location in memory and place it in another memory location. Examples of the basic MOV are given in Figure 5.2. When A is true, the MOV function moves a floating point number from the source to the destination address. The data in the source address is left unchanged. When B is true, the floating point number in the source will be converted to an integer and stored in the destination address in integer memory. The floating point number will be rounded up or down to the nearest integer. When C is true, the integer value of 123 will be placed in the integer file N7:23.

NOTE: when a function changes a value, except for inputs and outputs, the value is changed immediately. Consider Figure 5.2: if A, B and C are all true, then the value in F8:23 will change before the next instruction starts. This is different from the input and output scans, which only happen before and after the logic scan.

Figure 5.2 Examples of the MOV Function

A more complex example of move functions is given in Figure 5.3. When A becomes true, the first move statement will move the value of 130 into N7:0, and the second move statement will move the value of -9385 from N7:1 to N7:2. (Note: the number is shown as negative because we are using 2's complement.) For the simple MOVs the binary values are not needed, but for the MVM statement the binary values are essential. The statement moves the binary bits from N7:3 to N7:5, but only those bits that are also on in the mask N7:4; other bits in the destination are left untouched. Notice that the first bit, N7:5/0, is true in the destination address before and after, but it is not true in the mask. The MVM function is very useful for applications where individual binary bits are to be manipulated, but it is less useful when dealing with actual number values.
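The masked move is just bit-level surgery, and its semantics are easy to pin down in a few lines of Python. This is a sketch of the behavior described above; the word values are illustrative, with Python integers standing in for 16-bit words:

```python
# Semantics of MVM on 16-bit words (illustrative values).

def mvm(src: int, mask: int, dest: int) -> int:
    """Copy only the bits of src that are set in mask; destination bits
    outside the mask keep their old values."""
    return ((dest & ~mask) | (src & mask)) & 0xFFFF

dest = 0b0000_0000_0000_0001   # bit 0 is already on in the destination
src  = 0b1010_1100_0011_0000
mask = 0b1111_1111_0000_0000   # only the high byte is transferred

print(f"{mvm(src, mask, dest):016b}")   # -> 1010110000000001
# The high byte came from src; the low byte (including bit 0) was left
# untouched -- the same effect as the N7:3 / N7:4 / N7:5 example above.
```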
5.2.2 Mathematical Functions

Mathematical functions will retrieve one or more values, perform an operation, and store the result in memory. Figure 5.4 shows an ADD function that will retrieve values from N7:4 and F8:35, convert them both to the type of the destination address, add the floating point numbers, and store the result in F8:36. The function has two sources, labelled source A and source B. In the case of ADD functions the order of the sources can change, but this is not true for other operations, such as subtraction and division. A list of other simple arithmetic functions follows. Some of the functions, such as the negative function, are unary, so there is only one source.

Figure 5.4 Arithmetic Functions

An application of the arithmetic functions is shown in Figure 5.5. Most of the operations provide the results we would expect. The second ADD function retrieves a value from N7:3, adds 1, and overwrites the source - this is normally known as an increment operation. The first DIV statement divides the integer 25 by 10; the result is rounded to the nearest integer, in this case 3, and the result is stored in N7:6. The NEG instruction takes the new value of -10 (not the original value of 0) from N7:4, inverts the sign, and stores it in N7:7.

Figure 5.5 Arithmetic Function Example

A list of more advanced functions is given in Figure 5.6. This list includes basic trigonometry functions, exponents, logarithms, and a square root function. The last function, CPT, will accept an expression and perform a complex calculation.

Figure 5.6 Advanced Mathematical Functions

Figure 5.7 shows an example where an equation has been converted to ladder logic. The first step in the conversion is to assign the variables in the equation to unused memory locations in the PLC. The equation can then be converted starting with the most deeply nested calculations in the equation, such as the LN function. In this case the result of the LN function is stored in another memory location, to be recalled later. The other operations are implemented in a similar manner. (Note: this equation could have been implemented in other forms, using fewer memory locations.)

Figure 5.7 An Equation in Ladder Logic

The same equation in Figure 5.7 could have been implemented with a CPT function, as shown in Figure 5.8. The equation uses the same memory locations chosen in Figure 5.7. The expression is typed directly into the PLC programming software.

Figure 5.8 Calculations with a Compute Function

Math functions can set status flags such as overflow, carry, etc., so care must be taken to avoid problems such as overflows. These problems are less common when using floating point numbers. Integers are more prone to these problems because they are limited to the range from -32768 to 32767.
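Two of the integer behaviors just described often surprise programmers coming from other languages: DIV rounds to the nearest integer (the 25/10 -> 3 example above implies rounding half away from zero, unlike Python's floor division), and 16-bit registers overflow beyond 32767. A small sketch of both, under those assumptions (exact rounding and overflow handling vary by controller):

```python
# PLC-style 16-bit integer behavior (illustrative assumptions; details
# vary by controller).

def plc_div(a: int, b: int) -> int:
    """Integer DIV that rounds to the nearest integer, half away from zero."""
    q = a / b
    return int(q + 0.5) if q >= 0 else int(q - 0.5)

def wrap16(n: int) -> int:
    """Wrap a result into the signed 16-bit range -32768..32767."""
    return (n + 32768) % 65536 - 32768

print(plc_div(25, 10))    # -> 3, matching the Figure 5.5 example
                          #    (Python's 25 // 10 would give 2)
print(wrap16(32767 + 1))  # -> -32768: the integer overflow the text warns about
```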
5.2.3 Conversions

Ladder logic conversion functions are listed in Figure 5.9. The example function will retrieve a BCD number from the D type (BCD) memory and convert it to a floating point number that will be stored in F8:2. The other functions will convert from 2's complement binary to BCD, and between radians and degrees.

Figure 5.9 Conversion Functions

Examples of the conversion functions are given in Figure 5.10. The functions load in a source value, do the conversion, and store the results. The TOD conversion to BCD could result in an overflow error.

Figure 5.10 Conversion Example

5.2.4 Array Data Functions

Arrays allow us to store multiple data values. In a PLC this will be a sequential series of numbers in integer, floating point, or other memory. For example, assume we are measuring and storing the weight of a bag of chips in floating point memory starting at #F8:20 (note the '#' for a data file). We could read a weight value every 10 minutes, and once every hour find the average of the six weights. This section will focus on techniques that manipulate groups of data organized in arrays, also called blocks in the manuals.

5.2.4.1 Statistics

Functions are available that allow statistical calculations. These functions are listed in Figure 5.11. When A becomes true, the average (AVE) function will start at memory location F8:0 and average a total of 4 values. The control word R6:1 is used to keep track of the progress of the operation and to determine when the operation is complete. This operation, and the others, are edge triggered. The operation may require multiple scans to be completed. When the operation is done, the average will be stored in F8:4 and the R6:1/DN bit will be turned on.

Figure 5.11 Statistic Functions

Examples of the statistical functions are given in Figure 5.12 for an array of data that starts at F8:0 and is 4 values long. When done, the average will be stored in F8:4, and the standard deviation will be stored in F8:5. The set of values will also be sorted in ascending order from F8:0 to F8:3. Each of the functions should have its own control memory to prevent overlap. It is not a good idea to activate the sort and the other calculations at the same time, as the sort may move values during the calculation, resulting in incorrect results.

5.2.4.2 Block Operations

A basic block function is shown in Figure 5.13. This COP (copy) function will copy an array of 10 values starting at N7:50 to N7:40. The FAL function will perform mathematical operations using an expression string, and the FSC function will allow two arrays to be compared using an expression. The FLL function will fill a block of memory with a single value.

Figure 5.13 Block Operation Functions

Figure 5.14 shows an example of the FAL function with different addressing modes. The first FAL function will do the following calculations: N7:5=N7:0+5, N7:6=N7:1+5, N7:7=N7:2+5, N7:8=N7:3+5, N7:9=N7:4+5. The second FAL statement does not have a file '#' sign in front of the expression value, so the calculations will be N7:5=N7:0+5, N7:6=N7:0+5, N7:7=N7:0+5, N7:8=N7:0+5, N7:9=N7:0+5. With a mode of 2, the instruction will do two of the calculations for every scan where B is true. The result of the last FAL statement will be N7:5=N7:0+5, N7:5=N7:1+5, N7:5=N7:2+5, N7:5=N7:3+5, N7:5=N7:4+5. The last operation would seem to be useless, but notice that the mode is incremental: this mode will do one calculation for each positive transition of C. The All mode will perform all five calculations in a single scan. It is also possible to put in a number that will indicate the number of calculations per scan. The calculation time can be long for large arrays, and trying to do all of the calculations in one scan may lead to a watchdog time-out fault.
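The three FAL addressing patterns are easier to see as loops. Here is a rough Python rendering of the Figure 5.14 behavior (list indices stand in for the N7:x addresses; this mimics only the arithmetic, not the scan timing, modes, or control word):

```python
# The three FAL addressing patterns from Figure 5.14 as plain loops.
# n7 stands in for the integer file; slots 0..4 hold inputs, 5..9 results.
n7 = [10, 20, 30, 40, 50] + [0] * 5

# 1) #N7:5 = #N7:0 + 5 : array to array, element by element.
for i in range(5):
    n7[5 + i] = n7[0 + i] + 5     # N7:5..N7:9 = N7:0..N7:4 + 5

# 2) #N7:5 = N7:0 + 5 : no '#' on the source, so one scalar feeds every slot.
for i in range(5):
    n7[5 + i] = n7[0] + 5         # N7:5..N7:9 all become N7:0 + 5

# 3) N7:5 = #N7:0 + 5 : scalar destination; every step overwrites N7:5, so
#    only the last result survives -- which is why running it one step per
#    false-to-true transition (incremental mode) is the useful way to use it.
for i in range(5):
    n7[5] = n7[0 + i] + 5

print(n7[5])                      # -> 55, i.e. N7:4 + 5
```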
5.3 LOGICAL FUNCTIONS

5.3.1 Comparison of Values

Comparison functions are shown in Figure 5.15. Previous function blocks were outputs; these replace input contacts. The example shows an EQU (equal) function that compares two floating point numbers. If the numbers are equal, the output bit B3:5/1 is true; otherwise it is false. Other types of equality functions are also listed.

Figure 5.15 Comparison Functions

The example in Figure 5.16 shows the six basic comparison functions. To the right of the figure are examples of the comparison operations.

Figure 5.16 Comparison Function Examples

The ladder logic in Figure 5.16 is recreated in Figure 5.17 with the CMP function, which allows text expressions.

Figure 5.17 Equivalent Statements Using CMP Statements

Expressions can also be used to do more complex comparisons, as shown in Figure 5.18. The expression will determine whether F8:1 is between F8:0 and F8:2.

Figure 5.18 A More Complex Comparison Expression

The LIM and MEQ functions are shown in Figure 5.19. The first three functions will compare a test value to high and low limits. If the high limit is above the low limit and the test value is between the limits or equal to one of them, then the function will be true. If the low limit is above the high limit, then the function is only true for test values outside the range. The masked equal (MEQ) will compare the bits of two numbers, but only those bits that are true in the mask.

Figure 5.19 Complex Comparison Functions

Figure 5.20 shows a number line that helps determine when the LIM function will be true.

Figure 5.20 A Number Line for the LIM Function

File-to-file comparisons are also permitted using the FSC instruction shown in Figure 5.21. The instruction uses the control word R6:0. It will interpret the expression 10 times, doing two comparisons per logic scan (the mode is 2). The comparisons will be F8:10<F8:0, F8:11<F8:0; then F8:12<F8:0, F8:13<F8:0; then F8:14<F8:0, F8:15<F8:0; then F8:16<F8:0, F8:17<F8:0; then F8:18<F8:0, F8:19<F8:0. The function will continue until a false statement is found or the comparison completes. If the comparison completes with no false statements, the output A will then be true. The mode could also have been All, to execute all the comparisons in one scan, or Increment, to update when the input to the function is true - in this case the input is a plain wire, so it will always be true.

Figure 5.21 File Comparison Using Expressions

5.3.2 Boolean Functions

Figure 5.22 shows Boolean algebra functions. The function shown will obtain data words from bit memory, perform an AND operation, and store the result in a new location in bit memory. These functions are all oriented to word-level operations. The ability to perform Boolean operations allows logical operations on more than a single bit.

Figure 5.22 Boolean Functions

The use of the Boolean functions is shown in Figure 5.23. The first three functions require two arguments, while the last function requires only one. The AND function will only turn on bits in the result that are true in both of the source words. The OR function will turn on a bit in the result word if either of the source word bits is on. The XOR function will only turn on a bit in the result word if the bit is on in exactly one of the source words. The NOT function reverses all of the bits in the source word.

6. PLC Compared with Other Control Systems

PLCs are well adapted to a range of automation tasks. These are typically industrial processes in manufacturing where the cost of developing and maintaining the automation system is high relative to the total cost of the automation, and where changes to the system would be expected during its operational life. PLCs contain input and output devices compatible with industrial pilot devices and controls; little electrical design is required, and the design problem centers on expressing the desired sequence of operations. PLC applications are typically highly customized systems, so the cost of a packaged PLC is low compared to the cost of a specific custom-built controller design. On the other hand, in the case of mass-produced goods, customized control systems are economical due to the lower cost of the components, which can be optimally chosen instead of a "generic" solution, and because the non-recurring engineering charges are spread over thousands or millions of units.

For high-volume or very simple fixed automation tasks, different techniques are used. For example, a consumer dishwasher would be controlled by an electromechanical cam timer costing only a few dollars in production quantities.
本科生毕业设计 (论文)
外文翻译
原文标题
Worlds Collide:
Exploring the Use of Social Media Technologies for
Online Learning
译文标题
世界的碰撞:
探索社交媒体技术在在线学习的应用
作者所在系别：计算机科学与工程系
作者所在专业：计算机科学与技术
作者所在班级：
作者姓名
作者学号
指导教师姓名
指导教师职称：讲师
完成时间：2013年2月
北华航天工业学院教务处制
注:1. 指导教师对译文进行评阅时应注意以下几个方面:①翻译的外文文献与毕业设计(论文)的主题是否高度相关,并作为外文参考文献列入毕业设计(论文)的参考文献;②翻译的外文文献字数是否达到规定数量(3 000字以上);③译文语言是否准确、通顺、具有参考价值。
2. 外文原文应以附件的方式置于译文之后。
本科生毕业设计（论文）外文翻译
毕业设计（论文）题目：电力系统检测与计算
外文题目：The development of the single chip microcomputer
译文题目：单片机技术的发展与应用
学生姓名：XXX
专业：XXX
指导教师姓名：XXX
评阅日期：
单片机技术的发展与应用
从无线电世界到单片机世界
现代计算机技术的产业革命，将世界经济从资本经济带入到知识经济时代。
在电子世界领域，电子技术也从20世纪中叶的无线电时代进入到21世纪以计算机技术为中心的智能化现代电子系统时代。
现代电子系统的基本核心是嵌入式计算机系统(简称嵌入式系统),而单片机是最典型、最广泛、最普及的嵌入式系统。
一、无线电世界造就了几代英才。
在20世纪五六十年代，最具代表性的先进电子技术就是无线电技术，包括无线电广播、收音、无线通信（电报）、业余无线电台、无线电定位、导航等遥测、遥控、遥信技术。
早期就是这些电子技术带领着许多青少年步入了奇妙的电子世界,无线电技术展示了当时科技生活美妙的前景。
电子科学开始形成了一门新兴学科。
无线电电子学,无线通信开始了电子世界的历程。
无线电技术不仅成为了当时先进科学技术的代表,而且从普及到专业的科学领域,吸引了广大青少年,并使他们从中找到了无穷的乐趣。
从床头的矿石收音机到超外差收音机;从无线电发报到业余无线电台;从电话,电铃到无线电操纵模型。
无线电技术成为当时青少年科普、科技教育最普及,最广泛的内容。
至今,许多老一辈的工程师、专家、教授当年都是无线电爱好者。
无线电技术的无穷乐趣,无线电技术的全面训练,从电子学基本原理,电子元器件基础到无线电遥控、遥测、遥信电子系统制作,培养出了几代科技英才。
二、从无线电时代到电子技术普及时代。
早期的无线电技术推动了电子技术的发展,其中最主要的是真空管电子技术向半导体电子技术的发展。
半导体电子技术使有源器件实现了微小型化和低成本,使无线电技术有了更大普及和创新,并大大地开阔了许多非无线电的控制领域。
The first dam for which there are reliable records was built on the Nile River sometime before 4000 B.C. It was used to divert the Nile and provide a site for the ancient city of Memphis. The oldest dam still in use is the Almanza Dam in Spain, which was constructed in the sixteenth century. With the passage of time, materials and construction methods have improved, making possible the erection of such large dams as the Nurek Dam, which is being constructed in the U.S.S.R. on the Vakhsh River near the border of Afghanistan. This dam will be 1017 ft (333 m) high, of earth and rock fill. The failure of a dam may cause serious loss of life and property; consequently, the design and maintenance of dams are commonly under government surveillance. In the United States over 30,000 dams are under the control of state authorities. The 1972 Federal Dam Safety Act (PL 92-367) requires periodic inspections of dams by qualified experts. The failure of the Teton Dam in Idaho in June 1976 added to the concern for dam safety in the United States.

1. Types of Dams

Dams are classified on the basis of the type and materials of construction, as gravity, arch, buttress, and earth. The first three types are usually constructed of concrete. A gravity dam depends on its own weight for stability and is straight in plan, although sometimes slightly curved. Arch dams transmit most of the horizontal thrust of the water behind them to the abutments by arch action and have thinner cross sections than comparable gravity dams. Arch dams can be used only in narrow canyons where the walls are capable of withstanding the thrust produced by the arch action. Buttress dams are built in many forms. Earth dams are embankments of rock or earth with provision for controlling seepage by means of an impermeable core or upstream blanket. More than one type of dam may be included in a single structure. Curved dams may combine both gravity and arch action to achieve stability. Long dams often have a concrete river section containing spillway and sluice gates, and earth or rock-fill wing dams for the remainder of their length.

The selection of the best type of dam for a given site is a problem in both engineering feasibility and cost. Feasibility is governed by topography, geology and climate. For example, because concrete spalls when subjected to alternate freezing and thawing, arch and buttress dams with thin concrete sections are sometimes avoided in areas subject to extreme cold. The relative cost of the various types of dams depends mainly on the availability of construction materials near the site and the accessibility of transportation facilities. Dams are sometimes built in stages, with the second or later stages constructed a decade or longer after the first stage.

The height of a dam is defined as the difference in elevation between the roadway, or spillway crest, and the lowest part of the excavated foundation. However, figures quoted for heights of dams are often determined in other ways. Frequently the height is taken as the net height above the old riverbed.
桂林航天工业学院
英文翻译
专业：
姓名：
学号：
指导教师：宋美杰
2013年6月10日

The Application of Modern Information Technology

1 Introduction

Guided by modern educational thought and theory, modern information technology is now widely applied in education and teaching, bringing a new atmosphere and new patterns to both. In the twenty-first century, comprehensive national strength and international competitiveness will depend more and more on the development of education, science and technology, and on the level of knowledge innovation. In realizing socialist modernization, science and technology are the key and education is the foundation. This orientation shows the strategic position of education in China's future development, as well as the direction in which modern information technology should innovate and develop. Modern information technology is an important part of this strategic position. It affects not only the implementation of the innovation-in-education strategy but also the modernization of education itself, and it will have a far-reaching effect on our existing modes of education, management and thinking, including our concepts, methods and models of education. This is an opportunity, but also a challenge.

2 The possibility of applying modern information technology in school teaching management

1) The inevitability and necessity of applying modern information technology in school teaching and management. Promoting educational modernization through the informatization of education, and using information technology to change the traditional mode of education, is the inevitable trend of educational development. In recent years, the informatization of education in our country has developed very rapidly: the China Education and Research Network, the modern distance education project, information technology education in schools and the "School-to-School" project, and the informatization of educational administration have all been launched, while the construction of campus networks, education metropolitan area networks and provincial networks has advanced the informatization of education across the country. China is the most populous country in the world and runs the world's largest education system, so its financial and material resources are under great pressure. At the same time, the development of education in China is still at a low level, and its structure, system, concepts and methods cannot meet the needs of modernization. It is therefore necessary to innovate in school management and to establish an effective and economical mode in which modern information technology participates in school management. In the construction of informatized education management, the phenomenon of emphasizing hardware while neglecting software and application is widespread, turning many systems into mere "furnishings" that play no real role. Hardware depreciates very rapidly and requires large investment, so money is often wasted when a newly established management information system is not put to timely and effective use. A practical approach to using the campus network system must therefore be established, so that school management keeps pace with the times.

2) The characteristics of modern information technology.

① Comprehensiveness. Modern information technology stores and transmits information in many forms, including text, graphics, sound, images, programs, video and animation, covering the information of every element of the teaching system as well as social information; it is comprehensive and accords with society's requirements for education.

② Multidirectionality. Modern information technology provides information channels connecting every discipline, task, link, type of personnel and element, and these connections are multidirectional, which benefits the development, design, management and comprehensive utilization of educational information resources, as well as the development, design and management of the teaching process.

③ High efficiency. Based on high-bandwidth, high-speed networks (the typical information superhighway), modern information technology ensures rapid, comprehensive and accurate communication, and facilitates the retrieval, processing and transmission of educational information resources.

④ Integrity. Modern information technology, represented by multimedia computer networks, has formed a complete system with a full range of elements and high efficiency, which favors the management of educational information resources and promotes the technical transformation and process optimization of school management through media technology.

⑤ Flexibility. Modern information technology has characteristics that traditional teaching techniques cannot match: management can take place at school or at home over the Internet, and teaching content can be accessed online as needed.

⑥ Expandability. Information technology in modern teaching is highly extensible; it is not restricted by geography or by the number of classes, teachers and students, and can be configured according to actual needs.

⑦ Sharing. Modern information technology achieves the widest sharing of resources: the departments and teachers who participate in management are no longer confined to one school, one region or one country, for the Internet brings together the best management approaches of the nation and even the world.

⑧ Timeliness. The Internet is mankind's best means of overcoming the limits of time and space and transmitting information at the fastest speed; it can deliver the latest management requirements, teaching content and methods, and the newest information resources.

⑨ Interactivity. Modern information technology makes communication convenient between student and student, student and teacher, student and parent, and parent and teacher; students can get answers to their questions quickly and explore different solutions to the same problem, which is more challenging for them and more conducive to cultivating innovative ability.

3 Applied research on modern information technology in school teaching management

1) Understanding information technology in school teaching and management.

① Integrating information technology with subject teaching requires considering the practical demands the information age places on students' development. Subject courses, and the comprehensive practice activity in particular, are neither simple study nor manual labor, and differ from mere hobby pursuits; information technology should be integrated into the content and implementation of the comprehensive practice activity course with those demands in mind.

② Cultivate students' information literacy throughout the comprehensive practice activity and throughout subject teaching. Information technology is an important research content of the comprehensive practice activity, and it should be organically integrated with the activity's other content. Information technology should be treated not merely as a means of carrying out comprehensive practical activities; the cultivation of students' information literacy should permeate the whole process, with a focus on developing their ability to collect and process information. In our school teacher He Yan's comprehensive practice class "Man", for example, the lesson presented "sources of information and their processing modes" (see Figure 1); students used modern educational technology to search for data, conduct questionnaires and carry out practical activities, and in the process were thoroughly trained in collecting, processing, analyzing and communicating information, achieving the expected goal with very good effect.

③ In implementing the integration of information technology with subject curricula, we should actively use network technology and other means to expand the scope of research and raise the level of implementation. A modern school should seek to establish an open campus network system, and use the campus network and wide area network to open up space for students to cooperate across countries, regions, schools and classes, and to provide conditions for teachers to give guidance across the same boundaries. Students should be guided to actively use modern information resources and to adopt a variety of learning methods. In our school teacher Zhou Yang's composition class "Composition Revision: Botanical Description" (see Figure 2), the teacher guided students, after drafting their compositions, to use our networked collaborative learning system to read excellent essays, exchange outstanding writing with classmates, or visit the special subject-learning website to enrich their knowledge and deepen their understanding of plants; the students then revised their first drafts, even re-creating favorite classmates' work into more polished, more satisfying pieces, and finally submitted the work again for online discussion and exchange. This process fully reflects the advantages and effect of networked collaborative learning.

Figure 2

④ The design and application of information technology should be devoted to creating reflective, independent and cooperative learning situations and problem situations for students, and should avoid sinking into pure skill training. There are many examples of using information technology to create independent, cooperative and exploratory learning scenarios. In teacher He Yan's comprehensive practical course "Man", presenting the reality of disability through modern media created the problem situation "how to take care of the disabled", which triggered independent and collaborative learning behaviors such as studying knowledge about disabilities, contacting disabled people, and putting forward suggestions for caring for them.

⑤ We should strive to create a rich, healthy, multi-source "green" school information environment as an important guarantee of implementation. In creating a green campus information environment, our school first provides information that is as rich and healthy as possible, and second attends to students' network ethics and the cultivation of their ability to analyze information, so that students grow up healthily in a healthy information environment.

2) Some forms of information technology in school management. In studying this subject, our school came to believe that teachers' classroom application and a green information environment are the main factors in integrating information technology with the comprehensive practice activity and subject courses, but the key problem is to create a healthy, lively "green sea" of information for students, and especially to build, for the school's modern media and network-based education and teaching, a teaching resources database that supplies the content (see Figure 3).

Figure 3: map of the student information environment

We hold that teaching resources are the collection of all available teaching materials, and this characteristic means the construction of the teaching resources database must follow the principles of being networked, systematic and open. Networked: the resource library must be built on the network platform and use the advantages of the Internet to realize the exchange and sharing of its resources. Systematic: planning, classification and management should be scientific and systematic so as to maximize the library's effectiveness. Open: the library must be able to grow as "all rivers run into the sea", using various technologies to let its information grow without limit.

Throughout the study, our school made the construction of the teaching resource base the focal point of its software system construction. The initial campus-network-based teaching resources database integrates three modules: local resources, a teaching resources retrieval and link system, and a self-service system for registering resources and resource information (a sketch of this three-module structure is given at the end of this section). Their roles are:
① Local resources: the campus network server provides teachers with high-speed local resources, including finished courseware, courseware and webpage materials, recorded lessons, teaching materials and so on.
② Teaching resources retrieval and link system: the resources of any one school are limited, while the wide area network offers vast space and resources for our use; providing teachers with search and link functions makes these outside resources easy to use.
③ Self-service registration system for resources and resource information: the resources a school owns, or the online resource information it can find, are after all few, so all teachers are asked to register the resources and information they find, in self-service mode, continually enriching the school's local repository and retrieval library, so that the information and resources available to teachers grow continuously and inexhaustibly.

Among the three modules, "local resources" is the main body and the other two systems complement it; as they are continuously improved and enriched, we believe they will give our teachers strong support in using all kinds of teaching resources and in implementing electronic lesson preparation. In operation, the system establishes a two-way channel between teachers and teaching resources: a teacher reaches local resources and wide-area-network resources or resource information through the "resource retrieval and link system", and at the same time can feed his or her own resources or information back into the local resources and link system through the "self-service registration system", ceaselessly enriching them and sharing the information with other teachers (see Figure 4).

4 Information technology in the school management system

1) The campus information management system and information distribution. The campus management information system applies computer-aided management to the school's basic information and the operations of the main offices, improving the quality and level of management and service, accelerating information feedback, and enabling safe information sharing. By functional division, the main modules include the following:

Information library management: publishes school information and social information on the Internet, including an introduction to the school, the basic situation of each department, the school's educational characteristics, classes, and teachers' and students' personal webpages.

School affairs management: manages the school's organization; manages teachers' basic information, administrative duties, political affiliation, professional titles, annual assessments and awards; manages teachers' salaries and bonuses; provides various statistical charts of teacher information; manages student information under a unified school scheme, including academic results and social practice; manages school property, covering production, storage, receipt, allocation and loss, so that departmental assets can be conveniently inventoried and classified; and, based on the network, provides a repair-request system through which staff submit applications and the maintenance department receives them.

Education management: manages subject test scores and science and technology archives; in the future the "school teachers management system", "school management system", "general logistics management system" and "archives management system" will be further improved.

Teaching management database: manages teachers' daily teaching and research work, teacher training, open classes, theses, mentoring pairs, student competitions and grading standards.

2) The school information system and its management. To integrate and share modern educational resources at a high level, the key is how to improve the school's construction, acquisition and use of modern educational resources, and to turn those resources into forces that promote quality-oriented education and raise the level of education and teaching. Campus network construction therefore needs to become professional: the networks of many schools are connected by network technology into a collaborative working platform, so as to achieve the greatest sharing and best configuration of resources, to manage and exchange the schools' teaching resources, and to optimize the information environment. To strengthen school management and the campus network system, the school established an information release system named "XXX":

现代信息技术的应用（译文）
1 引言
现代信息技术在现代教育思想、理论的指导下，在教育、教学领域中广泛应用，给教育、教学带来了新的气象和新的格局。
感知空气质量、病态楼宇综合症（SBS）症状与两种不同污染负荷下的办公室工作效率
作者：Pawel Wargocki, David P. Wyon, Yong K. Baik, Geo Clausen and P. Ole Fanger
摘要：本研究在一间真实办公室中考察了感知空气质量、病态楼宇综合症（SBS）症状和工作效率，该办公室的空气污染水平可以通过引入或移除一个污染源来改变。
这种可逆的干预使得该房间可以被归类为非低污染或低污染建筑，正如新的欧洲室内环境设计标准 CEN CR 1752（1998）所规定的那样。
污染源是一块使用了20年的旧地毯，放置在屏风后面的架子上，因此室内人员看不到它。
五组受试者（每组六名女性）先后两次暴露于该办公室环境中，一次污染源存在，一次污染源被移除，每次暴露持续265分钟，每次一组进行。
受试者在执行模拟办公室工作的同时，对感知空气质量进行评估并报告SBS症状。
当污染源存在时，办公室空气质量的可接受度评级相当于22%的不满意率；当污染源移除后，不满意率为15%。
在前一种情况下，头痛的发生率显著增加（P=0.04），受试者在文本输入和计算任务中报告的工作努力程度显著降低（P=0.02），这两项任务都需要持续集中注意力。
在文本输入任务中，污染源存在时受试者的打字速度比污染源移除时显著降低了6.5%。
事实证明，降低室内空气污染负荷是提高楼内人员舒适度、健康和工作效率的有效手段。
关键词:感知空气质量,病态楼宇综合症(SBS)的症状,生产力,源控制,低污染建筑,非低污染建筑。
导言：建筑材料和家具产生的污染使室内空气污染负荷增加，导致空气质量下降（Fanger等人，1988；Thorstensen等人，1990；Pejtersen等人，1991；Bluyssen等人，1996）。
它还可能对健康产生负面影响，使与建筑相关的黏膜、皮肤或全身症状增多（Molhave等人，1986。
A Comparison of AASHTO Bridge Load Rating Methods

Authors: Cristopher D. Moen, Ph.D., P.E., Virginia Tech, Blacksburg, VA, cmoen@
Leo Fernandez, P.E., TranSystems, New York, NY, lafernandez@

INTRODUCTION

The capacity of an existing highway bridge is traditionally quantified with a load rating factor. This factor, when multiplied by the design live load magnitude, describes the total live load a bridge can safely carry. The load rating factor, RF, is related to the capacity of the controlling structural component in the bridge, C, and the dead load D and live load L applied to that component with the equation:

RF = (C - D) / L    (1)

Visual bridge inspections provide engineers with information to quantify the degradation in structural integrity of a bridge (i.e., the reduction in C). The trends in RF over time can be employed by bridge owners to make decisions regarding bridge maintenance and replacement. For example, when a bridge is first constructed, RF = 1.3 means that a bridge can safely carry 1.3 times the weight of its design live load (i.e., that C - D, the existing capacity after accounting for dead load, is 1.3 times the design live load L). If the RF decreases to 0.8 after 20 years of service, deterioration of the primary structural components has most likely occurred and rehabilitation or replacement should be considered.

Equation (1) is a simple idea, but C, D, and L can be highly variable and difficult to characterize depending upon the bridge location, bridge type, daily traffic flow, structural system (e.g., simple or continuous span) and choice of construction materials (e.g., steel, reinforced or prestressed concrete, composite construction). The American Association of State Highway and Transportation Officials (AASHTO) Manual for Condition Evaluation of Bridges (MCEB) provides a formal load rating procedure to assist engineers in the evaluation of existing bridges [AASHTO 1994 with interims through 2003]. The MCEB provides two load rating methods, one based on an allowable stress approach (ASR) and another based on a load factor approach (LFR). Both the ASR and LFR methods are consistent with the design loading and capacity calculations outlined in the AASHTO Standard Specification for the Design of Highway Bridges [AASHTO 2002]. Recently momentum has shifted towards a probability-based bridge design approach with the publication of the AASHTO LRFD Bridge Design Specifications [AASHTO 2007]. Bridges designed with this code have a uniform probability of failure (i.e., a uniform reliability). The AASHTO Manual for Condition Evaluation and Load and Resistance Factor Rating (LRFR) of Highway Bridges [AASHTO 2003] extends this idea of uniform reliability from LRFD to the load rating of existing bridges and is currently the recommended load rating method (over the ASR and LFR methods) by the Federal Highway Administration (FHWA).

The transition from ASR and LFR to LRFR bridge load rating methodology represents a positive shift towards a more accurate and rational bridge evaluation strategy. Bridge owners are optimistic that the LRFR load rating methodology will improve bridge safety and economy, but they are also currently dealing with the tough questions related to its implementation. Why do ASR, LFR, and LRFR methods produce different load rating factors for the same bridge? Should we change the posting limit on a bridge if the LRFR rating is lower than the MCEB ratings? What are the major philosophical differences between the three methods?
It is the goal of this paper to answer some of these questions (and at the same time dispel common myths) with a succinct summary of the history of the three methods. A comparison of the LFR and LRFR methods for a typical highway bridge will also be presented, with special focus on the benefits inherent in the rational, probabilistic approach of the LRFR load rating method. This paper is also written to serve as an introduction to load rating methodologies for students and engineers new to the bridge evaluation field.

SUMMARY OF EXISTING LITERATURE

Several reports have been published which summarize the development of AASHTO design and load rating methodologies. FHWA NHI Report 07-019 is an excellent historical reference describing the evolution of AASHTO live loadings (including the HS20-44 truck) and load factor design [Kulicki 2007b]. NCHRP Report 368 describes the development of the AASHTO LRFD design approach [Nowak 1999], and is supplemented by the NCHRP Project No. 20-7/186 report [Kulicki 2007a] with additional load factor calibration research. NCHRP Report 454 documents the calibration of the AASHTO LRFR load factors [Moses 2000], and NCHRP Web Document 28 describes the implementation of the LRFR load rating method [NCHRP 2001]. The NCHRP Project 20-7/Task 122 report supplements Web Document 28 with a detailed comparison of the LRFR and LFD load rating approaches [Mertz 2005].

AASHTO ALLOWABLE STRESS RATING METHOD

The Allowable Stress Rating (ASR) method is the most traditional of the three load rating methods, primarily because the performance of a bridge is evaluated under service conditions in the load rating equation [AASHTO 1994]:

RF = (C - A1·D) / (A2·L·(1 + I))    (2)

C is calculated with a "working stress" approach where the capacity of the primary structural members is limited to a proportion of the assumed failure stress (e.g., 0.55Fy for structural steel in tension and 0.3f'c for concrete in compression). Consistent with the service level approach, the demand dead load D and live load L are unfactored, i.e., A1 = 1.0 and A2 = 1.0.

The uncertainty in the strength of the bridge is accounted for in the ASR approach by limiting the applied stresses, but the variability in the demand loads is neglected. For example, dead load on a bridge has a relatively low variability because the dimensional tolerances of the primary structural members (e.g., a hot-rolled steel girder) are small [Nowak 2000]. Vehicular traffic loads on a bridge have a higher uncertainty because of varying traffic volume (annual average daily truck traffic or ADTT) and varying types of vehicular traffic (e.g., primarily trucks on an interstate or primarily cars on a parkway). The ASR method also does not consider redundancy of a bridge (e.g., continuous or simple spans, hammerhead piers or multiple column bents) or the amplified uncertainty in the capacity of aging structural members versus newly constructed members. The ASR method's treatment of capacity and demand results in load rating factors lacking a uniform level of reliability (i.e., a uniform probability of failure) across all types of highway bridges.
For example, with the ASR method, two bridges can have RF = 2 even though one bridge carries a high ADTT with a non-redundant superstructure (higher probability of failure) while the other bridge carries a low ADTT with a redundant superstructure (lower probability of failure).

AASHTO LOAD FACTOR RATING METHOD

In contrast to the ASR method's service load approach to load rating, the AASHTO Load Factor Rating (LFR) method evaluates the capacity of a bridge at its ultimate limit state. The LFR load rating factor equation is:

RF = (φ·Rn - A1·D) / (A2·L·(1 + I))    (3)

where the capacity C of the bridge in (2) has been replaced with φ·Rn, the predicted strength of the controlling structural component in the bridge. Rn is the nominal capacity of the structural component and φ is a strength reduction factor which accounts for the uncertainty associated with the material properties, workmanship, and failure mechanisms (e.g., shear, flexure, or compression). For example, φ is 0.90 for the flexural strength of a concrete beam and 0.70 for a concrete column with transverse ties [AASHTO 2002]. The lower φ for the concrete column means that there is more uncertainty inherent in the structural behavior and strength prediction for a concrete column than for a concrete beam. The dead load factor A1 is 1.3 to account for unanticipated permanent load, and A2 is either 1.3 or 2.17, defining a live load envelope ranging from an expected design level (Inventory) to an extreme short term loading (Operating) [AASHTO 1994].

The LFR method is different from the ASR method because it calculates the load rating factor RF by quantifying the potential for failure of a bridge (and associated loss of life and property) instead of quantifying the behavior of a bridge in service. The LFR method is similar to the ASR method in that it does not account for the influence of redundancy on the reliability of a bridge. Also, the load factors A1 and A2 are defined without a formal reliability analysis (i.e., they are not derived by considering probability distributions of capacity and demand) and therefore do not produce rating factors consistent with a uniform probability of failure.

AASHTO LOAD AND RESISTANCE FACTOR RATING METHOD

The AASHTO Load and Resistance Factor Rating (LRFR) method evaluates the existing capacity of a bridge using structural reliability theory [Melchers 1999; Nowak 2000]. The LRFR rating factor equation is similar in form to (2) and (3):

RF = (φc·φs·φ·Rn - γDC·DC - γDW·DW) / (γL·LL·(1 + IM))    (4)

where φc is a strength reduction factor that accounts for the increased variability in the member strength of existing bridges when compared to new bridges [Moses 1987]. The factor φs addresses the failure of structural systems and penalizes older non-redundant structures with lower load ratings [Ghosn 1998]. The dead load factors γDC and γDW have been separated in LRFR to account for a lower variability in dead load for primary structural components DC (e.g., columns and beams) and a higher variability for bridge deck wearing surfaces DW.

Another important difference between the LRFR method and the ASR and LFR methods is the use of the HL93 notional design live load, which is a modern update to the HS20-44 notional load first implemented in 1944 [Kulicki 2007b] (notional in this case means that the design live load is not meant to represent actual truck traffic but instead is a simplified approximation intended to conservatively simulate the influence of live load across many types of bridge spans).
The HL93 loading produces live load demands which are more consistent with modern truck traffic than the HS20-44 live load. The HL93 design loading combines the HS20-44 truck with a uniform load and also considers the load case of a tandem trailer with closely spaced axles and relatively high axle loads (in combination with a uniform load) [AASHTO 2007]. The design tandem load increases the shear demand on shorter bridges and produces, in combination with the design lane load, a live load effect greater than or equal to the AASHTO legal live load Type 3, Type 3S2, and Type 3-3 vehicles [AASHTO 1994].

AASHTO LFR VS. LRFR LOAD RATING COMPARISON

A parameter study is conducted in this section to explore the differences between the AASHTO LFR and LRFR load rating methods. The ASR method is not included in the study because it evaluates the live load capacity of a bridge at service levels, which makes it difficult to compare against the ultimate limit state LFR and LRFR methods (also note that the ASR method employs less modern "working stress" methods for calculating member capacities than LFR and LRFR). A simple span multi-girder bridge with steel girders and a composite concrete bridge deck is considered. The flexural capacity of an interior girder is assumed to control the load rating. AASHTO legal loads are employed in the study to provide a consistent live loading between the rating methods (although the impact factor and live load distribution factor for the controlling girder will be different for the LFR and LRFR methods).

The LFR load rating equation in (3) is rewritten as:

RF_LFR = (Mu - A1·MD) / (A2·B_LFD·I_LFD·L)    (5)

where Mu is the LFD flexural capacity of the composite girder (φ is implicit in the calculation of Mu) and B_LFD is the live load distribution factor for an interior girder [AASHTO 1994]:

B_LFD = S / 5.5    (6)

and the live load impact factor I_LFD is [AASHTO 1994]:

I_LFD = 1 + 50 / (ℓ + 125)    (7)

The span length of the bridge is denoted as ℓ. A1 and A2 are chosen as 1.3 in this study to compare the LFR Operating rating with the LRFR rating method (the intent of the LRFR legal load rating is to provide a single rating level consistent with the LFD Operating level [AASHTO 2003]).

The LRFR equation in (4) is rewritten to be consistent with (5):

RF_LRFR = (φc·φs·Mu - γDC·MD) / (γL·B_LRFD·I_LRFD·2L)    (8)

where B_LRFD is the live load distribution factor for moment in an interior girder [AASHTO 2007]:

B_LRFD = 0.075 + (S/9.5)^0.6 · (S/ℓ)^0.2 · (Kg/(12·ℓ·ts^3))^0.1    (9)

and I_LRFD, the live load impact factor, is 1.33 [AASHTO 2007]. MD is the dead load moment, assuming that the dead load effects from a wearing surface and utilities are zero (i.e., DW is zero), and γDC is 1.25. Mu is assumed equivalent in (5) and (8) because the LFD and LRFD prediction methods for the flexural capacity of composite girders are founded on a common structural basis [Tonias 2007]. The term Kg/(12·ℓ·ts^3) in (9) is assumed equal to 1 as suggested by the LRFD specification for preliminary design [AASHTO 2007] (this approximation reduces the number of variables in the parameter study). The term LL in (4), i.e., the LRFD lane loading, is approximated by 2L in (8). This conversion from lane loading to wheel line loading allows for the cancellation of L (i.e., the live load variable) when (8) and (5) are formulated as a ratio:

RF_LRFR / RF_LFR = [(φc·φs·Mu - γDC·MD) · A2·B_LFD·I_LFD] / [(Mu - A1·MD) · 2·γL·B_LRFD·I_LRFD]    (10)

Rearranging the term Mu in (10) leads to:

RF_LRFR / RF_LFR = [A2·B_LFD·I_LFD / (2·γL·B_LRFD·I_LRFD)] · [(φc·φs - γDC·(MD/Mu)) / (1 - A1·(MD/Mu))]    (11)

The relationship between the LRFR and LFR load rating equations, as described in (11), is explored in Figure 1 to Figure 4.
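Before turning to the figures, equations (5) through (11) are small enough to evaluate directly. The following Python sketch reproduces the ratio in (11) for a grid of spans and girder spacings; it is a sketch under stated assumptions, not a rating tool. The live load factors keyed by ADTT and the condition/system factors are representative values taken from the LRFR manual and NCHRP Report 454 as summarized in this paper, and should be checked against the current specification before any real use.

# Sketch: ratio of the LRFR legal load rating to the LFR Operating rating,
# Eq. (11), for flexure in an interior girder of a simple-span composite
# multi-girder bridge. Factor values are representative assumptions only.

def b_lfd(s):
    """LFD wheel-line distribution factor, Eq. (6)."""
    return s / 5.5

def i_lfd(span):
    """LFD impact multiplier, Eq. (7): 1 + 50/(span + 125)."""
    return 1.0 + 50.0 / (span + 125.0)

def b_lrfd(s, span):
    """LRFD interior-girder moment distribution factor, Eq. (9), with the
    stiffness term Kg/(12*span*ts**3) taken as 1.0 as in the paper."""
    return 0.075 + (s / 9.5) ** 0.6 * (s / span) ** 0.2

I_LRFD = 1.33                                   # LRFD dynamic load allowance

# Assumed LRFR legal-load live load factors keyed by ADTT (representative):
GAMMA_L = {100: 1.40, 1000: 1.65, 5000: 1.80}

def rating_ratio(s, span, md_over_mu=0.30, phi_c=1.0, phi_s=1.0,
                 gamma_dc=1.25, gamma_l=1.80, a1=1.3, a2=1.3):
    """RF_LRFR / RF_LFR from Eq. (11)."""
    lead = (a2 * b_lfd(s) * i_lfd(span)) / (2.0 * gamma_l * b_lrfd(s, span) * I_LRFD)
    return lead * (phi_c * phi_s - gamma_dc * md_over_mu) / (1.0 - a1 * md_over_mu)

# e.g. rating_ratio(5.0, 60.0, gamma_l=GAMMA_L[100]) for a lightly traveled bridge
for span in (20.0, 60.0, 120.0, 200.0):          # span length, ft
    for s in (3.0, 5.0, 7.0):                    # girder spacing, ft
        print(f"span={span:5.0f} ft, S={s:.0f} ft: "
              f"RF_LRFR/RF_LFR = {rating_ratio(s, span):.2f}")

For example, at a 60 ft span and 5 ft girder spacing with ADTT ≥ 5000 this gives a ratio of roughly 0.66, consistent with the paper's observation below that the LRFR legal rating can fall well below the LFR Operating rating.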
MD/Mu is assumed as 0.30 for the bridge span lengths considered in this study. Equation (11) varies only slightly (a maximum of 5%) when MD/Mu ranges between 0.10 and 0.50, because the LFR and LRFR dead load factors are similar, i.e., γDC = 1.25 and A1 = 1.3. Figure 1 demonstrates that the LRFR legal load rating is less than the LFD Operating rating for both short and long single span bridges (the span range is 20 ft. to 200 ft. in this study). This is consistent with the findings of NCHRP Web Document 28, which demonstrates that the LRFR legal load rating is lower than the LFD Operating rating but higher than the LFD Inventory rating [NCHRP 2001]. RF_LRFR increases for longer span lengths because the live load distribution factor B_LRFD in (9) decreases with increasing ℓ. RF_LRFR also increases as the girder spacing, S, increases (S ranges from 3 ft. to 7 ft. in Figure 1) because the LRFD live load distribution factor B_LRFD decreases relative to the LFD live load distribution factor B_LFD for larger girder spacings.

FIGURE 1 - COMPARISON OF LRFR AND LFR (OPERATING) LEGAL LOAD RATING FACTORS FOR FLEXURE IN AN INTERIOR GIRDER OF A SIMPLE SPAN MULTI-GIRDER COMPOSITE BRIDGE

The volume of traffic is directly accounted for in the LRFR load rating method by considering the Average Daily Truck Traffic (ADTT) (this is an improvement over the LFR method, which does not account for frequency of bridge usage when calculating RF). Figure 2 highlights the variability of the LRFR legal load rating with ADTT. RF_LRFR is approximately 30% greater for a lightly traveled bridge (ADTT ≤ 100) when compared to a heavily traveled bridge (ADTT ≥ 5000), and the LRFR load rating trends toward the LFD Operating load rating for lightly traveled bridges.

FIGURE 2 - INFLUENCE OF AVERAGE DAILY TRUCK TRAFFIC ON THE LRFR LEGAL LOAD RATING FACTOR (S=4 FT.)

The factors φc and φs account for the increased uncertainty from bridge deterioration and for system redundancy in the LRFR load rating method, respectively (this is an important update to the LFR rating method, which assumes one level of uncertainty for all bridge types and bridge conditions). Figure 3 demonstrates that RF_LRFR decreases by approximately 30% as the bridge condition deteriorates from good to poor. Bridges with a small number of girders (e.g., 3 or 4 girders) are considered to be more susceptible to catastrophic collapse, which is reflected in the lower RF_LRFR load rating factors in Figure 4.

FIGURE 3 - INFLUENCE OF CONDITION FACTOR φc ON THE LRFR LOAD RATING FACTOR (S=4 FT.)

FIGURE 4 - INFLUENCE OF SYSTEM FACTOR φs ON THE LRFR LOAD RATING FACTOR (S=4 FT.)

DISCUSSION

The LRFR load rating method represents an important step in the evolution of bridge evaluation strategies. The method is calibrated to produce a uniform level of reliability across all existing highway bridges (i.e., a uniform probability of failure) and is an improvement over the ASR and LFR methods because it allows bridge owners to account for traffic volume, system redundancy, and the increased uncertainty in the predicted strength of deteriorating bridge components.
The LRFR load rating method can be used as a foundation for the development of more accurate performance-based bridge evaluation strategies in the future, where bridge owners directly calculate the existing capacity (or reliability) with in-service data from a structural health monitoring network and make maintenance decisions based on relationships between corrosion, structural capacity, and repair or replacement costs. Reliability-based cost models have been proposed, for example [Nowak 2000]:

CT = CI + CF·PF    (12)

where CT is the total cost of the bridge over its lifetime, CI is the initial cost, CF is the failure cost of the bridge (which could include rehabilitation costs), and PF is the failure probability of the bridge. As PF increases (i.e., as the bridge deteriorates over time), the total cost CT increases, which ties the reliability of the bridge to economy and provides a metric from which to optimize maintenance decisions and minimize rehabilitation costs in a highway system. The continued evolution of bridge evaluation strategies depends on improved methods for evaluating the structural capacity of bridges and defining the correlation between corrosion in bridges, strength loss, and failure rates [ASCE 2009].

The AASHTO LRFR load rating method is a step forward in bridge evaluation strategy when compared to the ASR and LFR methods because it is calibrated to produce a uniform reliability across all existing highway bridges. The LRFR method provides factors which account for the volume of traffic on the bridge, the redundancy of the superstructure, and the increased uncertainty in structural capacity associated with a deteriorating structure. The flexural LRFR load rating factor for an interior steel composite girder in a multi-girder bridge is up to 40% lower than the LFR Operating load rating over a span range of 20 ft. to 200 ft. and for girder spacings between 3 ft. and 7 ft. The LRFR flexural load rating factor increases for longer span lengths and larger girder spacings, influenced primarily by the LRFD live load distribution factor.

ACKNOWLEDGEMENTS

The authors are grateful for the guidance provided by Bala Sivakumar in the organization of this paper. The authors also wish to thank Kelley Rehm and Bob Cullen at AASHTO for their help identifying historical references pertaining to AASHTO live load vehicles and design procedures.

REFERENCES

AASHTO, Manual for Condition Evaluation of Bridges, Second Edition, with 1995, 1996, 1998, 2000, 2001, and 2003 Revisions, AASHTO, Washington, D.C., 1994.
AASHTO, Standard Specifications for Highway Bridges, 17th Edition, AASHTO, Washington, D.C., 2002.
AASHTO, Manual for Condition Evaluation and Load and Resistance Factor Rating (LRFR) of Highway Bridges, AASHTO, Washington, D.C., 2003.
AASHTO, LRFD Bridge Design Specifications, 4th Edition, AASHTO, Washington, D.C., 2007.
ASCE, "ASCE/SEI-AASHTO Ad-Hoc Group on Bridge Inspection, Rehabilitation, and Replacement White Paper on Bridge Inspection and Rating", ASCE Journal of Bridge Engineering, 14(1), 2009, 1-5.
Ghosn, M., Moses, F., NCHRP Report 406: Redundancy in Highway Bridge Superstructures, TRB, National Research Council, Washington, D.C., 1998.
Kulicki, J.M., Prucz, Z., Clancy, C.M., Mertz, D.R., Updating the Calibration Report for AASHTO LRFD Code (Project No. NCHRP 20-7/186), AASHTO, Washington, D.C., 2007a.
Kulicki, J.M., Stuffle, T.J., Development of AASHTO Vehicular Loads (FHWA NHI 07-019), Federal Highway Administration, National Highway Institute, 2007b.
Melchers, R.E., Structural Reliability Analysis and Prediction, John Wiley and Sons, New York, 1999.
Mertz, D.R., Load Rating by Load and Resistance Factor Evaluation Method (NCHRP Project 20-07/Task 122), TRB, National Research Council, Washington, D.C., 2005.
Moses, F., NCHRP Report 454: Calibration of Load Factors for LRFR Bridge Evaluation, TRB, National Research Council, Washington, D.C., 2000.
Moses, F., Verma, D., NCHRP Report 301: Load Capacity Evaluation of Existing Bridges, TRB, National Research Council, Washington, D.C., 1987.
NCHRP, Manual for Condition Evaluation and Load Rating of Highway Bridges Using Load and Resistance Factor Philosophy (NCHRP Web Document 28), TRB, National Research Council, Washington, D.C., 2001.
Nowak, A.S., NCHRP Report 368: Calibration of LRFD Bridge Design Code, TRB, National Research Council, Washington, D.C., 1999.
Nowak, A.S., Collins, K.R., Reliability of Structures, McGraw-Hill, New York, 2000.
Tonias, D.E., Zhao, J.J., Bridge Engineering: Design, Rehabilitation, and Maintenance of Modern Highway Bridges, McGraw-Hill, New York, 2007.

AASHTO桥梁荷载等级评定方法的比较
作者：Cristopher D. Moen, Ph.D., P.E., Virginia Tech, Blacksburg, VA, cmoen@
Leo Fernandez, P.E., TranSystems, New York, NY, lafernandez@
绪论：现有的高速公路桥梁的承载能力传统上是用荷载等级评定因数来定量描述的。
Single-chip

1. The definition of a single-chip microcomputer

A single-chip microcomputer integrates a complete computer system onto one chip. Even though most of its features sit on a single small chip, it contains the major components a complete computer needs: a CPU, memory, internal and external bus systems, and in most cases a core set of peripherals such as communication interfaces, timers and a real-time clock. The most capable single-chip systems today can even integrate sound, image, networking and complex input/output subsystems on one chip.

It is also known as an MCU (microcontroller), because it was first used in the field of industrial control. The single-chip microcomputer grew out of dedicated-processor CPU chips. The founding design concept was to place a large number of peripherals together with the CPU on a single chip, making the computer system smaller and easier to integrate into complex devices with demanding volume constraints. Intel's Z80 was one of the first processors designed along these lines; from then on, the MCU and the dedicated processor parted ways in their development.

Early single-chip microcomputers were 4-bit or 8-bit. One of the most successful was Intel's 8031, which earned wide praise for its simple and reliable performance. The MCS-51 series of single-chip systems was then developed from the 8031, and systems based on it remain in wide use today. As industrial control requirements grew, 16-bit single-chip microcomputers appeared, but they were never used very widely because their price was not ideal. After the 1990s, with the great development of consumer electronics, single-chip technology improved enormously. With the Intel i960 series and especially the later ARM architecture, 32-bit single-chip microcomputers quickly replaced 16-bit parts across a broad range of applications, and processing power rose rapidly, a several-hundredfold increase over the 1980s. At present, high-end 32-bit single-chip microcomputers run above 300 MHz, close on the heels of mid-1990s dedicated processors, while ordinary models have dropped to about one US dollar and even the highest-end models cost only about ten US dollars.

Contemporary single-chip systems are no longer developed and used only in bare-metal environments; a large number of dedicated embedded operating systems are widely used across the full range of single-chip microcomputers. The high-end single-chip devices at the core of PDAs and cell phones can even run the Windows and Linux operating systems directly.

The single-chip microcomputer suits embedded systems better than a dedicated processor does, so its applications have multiplied. In fact, the single-chip microcomputer is the most numerous computer in the world: almost every electronic and mechanical product in modern life integrates one. Cell phones, telephones, calculators, home appliances, electronic toys, handheld computers and computer accessories such as mice each contain one or two single-chip devices, and personal computers also contain a good number of them. An ordinary car carries more than forty, and a complex industrial control system may even have hundreds working at the same time. Single-chip microcomputers far exceed PCs and other computers in number; indeed, they outnumber human beings.

2. An introduction to the single-chip microcomputer

A single-chip microcomputer, also called a microcontroller, is not a chip that completes a single logic function but a complete computer system integrated onto one chip. Speaking in general terms: a single chip has become a computer. Its small size, light weight and low price provide convenient conditions for learning, application and development; at the same time, learning to use the single-chip microcomputer is the best choice for understanding the principles and structure of computers.

The single-chip microcomputer uses modules similar to a computer's, such as a CPU, memory, a parallel bus, and storage that plays the same role as a hard disk. The difference is that the performance of all these components is much weaker than in our home computers, but the price is low, generally no more than ten yuan, which is quite sufficient to build a control system for a class of electrical tasks that are not very complex. We can see its shadow inside home appliances such as automatic drum washing machines, range hoods and VCD players, where it serves as the core control component.

It is an on-line, real-time control computer. On-line means control at the scene, which requires strong anti-interference ability and low cost; this is the main difference from an off-line computer such as a home PC. The single-chip microcomputer works by being programmed, and the program can be modified. Different programs realize different functions, in particular special functions unique to it; other devices would need great effort to achieve the same, and some functions could not be achieved however great the effort. A function that is not very complex, if implemented purely in hardware with the 74 series or CD4000 series of logic chips of the 1960s and 1970s, would require a large PCB. If instead it is implemented with a single-chip microcomputer, the result is entirely different, because a program written for the single-chip microcomputer can achieve high intelligence, high efficiency and high reliability.

Because single-chip design is cost-sensitive, the dominant software is still the lowest-level assembly language, the lowest-level language above binary machine code. Why use such a low-level language when high-level visual programming languages exist? The reason is simple: the single-chip microcomputer has neither a CPU like a home computer's nor mass storage like a hard disk. Even a small visual high-level program containing only a single button reaches tens of kilobytes in size, which is nothing for a home PC's hard drive but is unacceptable for a single-chip microcomputer. The utilization of hardware resources in a single-chip microcomputer must be very high, so assembly is still in heavy use despite its age; for the same reason, a home PC could not sustain the operating systems and applications of the computing giants if they had to run under such constraints.

It can be said that the twentieth century spanned three "electric" eras: the electrical age, the electronic age, and now the computer age. However, such a computer usually refers to the personal computer, or PC, which consists of a host, a keyboard, a display and other components. There is another type of computer that most people are not so familiar with: the single-chip microcomputer (also known as a microcontroller) that gives intelligence to all kinds of machines. As the name suggests, this kind of computer system uses only a minimal integrated circuit to perform simple calculation and control. Because of its small size, it is usually hidden inside the "belly" of the controlled machine. In a device it plays a role like the human brain: if it goes wrong, the entire device is paralyzed. This kind of single-chip microcomputer now has a very wide field of use, including smart meters, real-time industrial control, communication equipment, navigation systems and household appliances. Once a product adopts a single-chip microcomputer, the effect is to upgrade the product, and the adjective "intelligent" is often added before the product name, as with intelligent washing machines. At present, some technical personnel in factories and amateur electronics developers find that certain products they build have circuits that are too complex, or functions that are too simple and easy to imitate; the reason may be that the product does not use a single-chip microcomputer or another programmable logic device.

3. The history of the single-chip microcomputer

The single-chip microcomputer was born in the late 1970s and has passed through three stages: SCM, MCU and SoC.

1. The SCM (Single-Chip Microcomputer) stage was mainly a search for the best architecture for single-chip embedded systems. The success of the "innovation model" set the SCM on a path of development completely different from that of general-purpose computers; the credit for creating this independent development path for embedded systems belongs to Intel.

2. In the MCU (Micro Controller Unit) stage, the main direction of technical development was to keep expanding the peripheral circuits and interface circuits that the target system requires for embedded applications, highlighting intelligent control of the object. Because it covers all areas related to the object system, the development of MCUs inevitably fell to manufacturers with strength in electrical and electronics products; from this point of view, Intel's gradual fade from MCU development had its objective factors. Among MCU developers, the most famous manufacturer is Philips, which, with its enormous advantage in embedded applications, rapidly developed the MCS-51 from a single-chip microcomputer into a microcontroller. When we look back on the development path of embedded systems, therefore, we should not forget the historical contributions of Intel and Philips.

3. The SoC stage. The single-chip microcomputer is embedded system development in an independent form, and an important factor in the MCU stage was the natural trend of seeking to maximize applications on chip. With the development of microelectronics, IC design and EDA tools, application systems designed as single-chip SoCs will develop further. Accordingly, the understanding of the single-chip microcomputer should extend from the single chip itself to the single-chip application system.

4. Applications of the single-chip microcomputer

At present, the single-chip microcomputer has penetrated every area of our lives, and it is very difficult to find a field with almost no trace of it: missile navigation equipment, instruments that control aircraft, computer network communication and data transmission, industrial automation, real-time process control and data processing, the widely used smart IC cards, civilian car security systems, video recorders, cameras, automatic washing machine control, program-controlled toys and electronic pets, not to mention robotics, intelligent instrumentation and medical equipment. The learning, development and application of the single-chip microcomputer will therefore produce large numbers of scientists and engineers in computer applications and intelligent control. Single-chip microcomputers are widely used in instruments and meters, household appliances, medical equipment, aerospace, specialized equipment, intelligent management and process control; their applications can roughly be divided into the following fields:

1. Intelligent instruments. The single-chip microcomputer, with its small size, low power consumption, strong control capability, flexible expansion, miniaturization and ease of interfacing with sensors, can realize the measurement of physical quantities such as voltage, power, frequency, humidity, temperature, flow, speed, thickness, angle, length, hardness, element composition and pressure. It makes instruments digital, intelligent and miniaturized, with functionality stronger than designs built from electronic or pure digital circuits, for example precision measurement equipment such as power meters, oscilloscopes and analyzers.

2. Industrial control. Single-chip microcomputers can constitute various control systems and data acquisition systems, such as intelligent management of factory assembly lines, intelligent elevator control, all kinds of alarm systems, and secondary control systems networked with computers.

3. Household appliances. It can be said that almost all household appliances use single-chip control, from electric rice cookers, washing machines, refrigerators, air conditioners, color televisions and other audio-video equipment to electronic weighing devices: they are everywhere.

4. Computer networks and communications. Modern single-chip microcomputers generally have communication interfaces and can easily exchange data with computers, which provides excellent material conditions for applications in computer network and communications equipment: mobile phones, telephones, mini program-controlled switchboards, building automation and intercom call systems, train wireless communication, and the mobile communication devices and radios we see in day-to-day work.

5. Medical equipment. Single-chip microcomputers have a wide range of uses in medical devices, such as ventilators, analyzers of various kinds, patient monitors, ultrasonic diagnostic equipment and hospital call systems.

6. Modular applications in various large electrical systems. Some single-chip devices are specially designed to realize a specific function and are applied as modules in various circuits, without requiring users to understand their internal structure, for example the integrated music chip. Its function seems simple, a miniature electronic chip producing pure music (as distinct from the principle of a tape machine), yet it requires complex principles similar to a computer's: the music signal is stored in digital form in memory (similar to a ROM), read out by the microcontroller, and converted into an analog music signal (similar to a sound card). In large circuits, such modular applications greatly reduce size, simplify the circuit, lower the failure and error rate, and make replacement easy.

In addition, single-chip microcomputers have wide uses in industry, commerce, finance, scientific research, education, national defense, aerospace and other fields.

单片机

1. 单片机定义

单片机是一种集成在电路芯片上的完整计算机系统。
Graduation Design Foreign Literature Translation (newly compiled on November 23, 2020)

Title: ADDRESSING PROCESS PLANNING AND VERIFICATION ISSUES WITH MTCONNECT
Author: Vijayaraghavan, Athulan, UC Berkeley; Dornfeld, David, UC Berkeley
Publication Date: 06-01-2009
Series: Precision Manufacturing Group
Permalink:
Keywords: Process planning verification, machine tool interoperability, MTConnect
Abstract: Robust interoperability methods are needed in manufacturing systems to implement computer-aided process planning algorithms and to verify their effectiveness. In this paper we discuss applying MTConnect, an open-source standard for data exchange in manufacturing systems, in addressing two specific issues in process planning and verification. We use data from an MTConnect-compliant machine tool to estimate the cycle time required for machining complex parts in that machine. MTConnect data is also used in verifying the conformance of toolpaths to the required part features by comparing the features created by the actual tool positions to the required part features using CAD tools. We demonstrate the capabilities of MTConnect in easily enabling process planning and verification in an industrial environment.
Copyright Information: All rights reserved unless otherwise indicated. Contact the author or original publisher for any necessary permissions. eScholarship is not the copyright owner for deposited works.

ADDRESSING PROCESS PLANNING AND VERIFICATION ISSUES WITH MTCONNECT
Athulan Vijayaraghavan, Lucie Huet, and David Dornfeld, Department of Mechanical Engineering, University of California, Berkeley, CA 94720-1740
William Sobel, Artisanal Software, Oakland, CA 94611
Bill Blomquist and Mark Conley, Remmele Engineering Inc., Big Lake, MN

KEYWORDS
Process planning verification, machine tool interoperability, MTConnect.

ABSTRACT
Robust interoperability methods are needed in manufacturing systems to implement computer-aided process planning algorithms and to verify their effectiveness. In this paper we discuss applying MTConnect, an open-source standard for data exchange in manufacturing systems, in addressing two specific issues in process planning and verification. We use data from an MTConnect-compliant machine tool to estimate the cycle time required for machining complex parts in that machine. MTConnect data is also used in verifying the conformance of toolpaths to the required part features by comparing the features created by the actual tool positions to the required part features using CAD tools. We demonstrate the capabilities of MTConnect in easily enabling process planning and verification in an industrial environment.

INTRODUCTION
Automated process planning methods are a critical component in the design and planning of manufacturing processes for complex parts. This is especially the case in high speed machining, where the complex interactions between the tool and the workpiece necessitate careful selection of the process parameters and the toolpath design. However, to improve the effectiveness of these methods, they need to be integrated tightly with the machines and systems in industrial environments.
To enable this, we need robust interoperability standards for data exchange between the different entities in manufacturing systems. In this paper, we discuss using MTConnect – an open source standard for data exchange in manufacturing systems – to address issues in process planning and verification in machining. We discuss two examples of using MTConnect for better process planning: estimating the cycle time for high speed machining, and verifying the effectiveness of toolpath planning for machining complex features. As MTConnect standardizes the exchange of manufacturing process data, process planning applications can be developed independent of the specific equipment used (Vijayaraghavan, 2008). This allowed us to develop the process planning applications and implement them in an industrial setting with minimal overhead. The experiments discussed in this paper were developed at UC Berkeley and implemented at Remmele Engineering Inc. The next section presents a brief introduction to MTConnect, highlighting its applicability in manufacturing process monitoring. We then discuss two applications of MTConnect: computing cycle time estimates and verifying toolpath planning effectiveness.

MTCONNECT
MTConnect is an open software standard for data exchange and communication between manufacturing equipment (MTConnect, 2008a). The MTConnect protocol defines a common language and structure for communication in manufacturing equipment, and enables interoperability by allowing access to manufacturing data through standardized interfaces. MTConnect does not define methods for data transmission or use, and is not intended to replace the functionality of existing products and/or data standards. It enhances the data acquisition capabilities of devices and applications, moving towards a plug-and-play environment that can reduce the cost of integration. MTConnect is built upon prevalent standards in the manufacturing and software industries, which maximizes the number of tools available for its implementation and provides a high level of interoperability with other standards and tools in these industries.

MTConnect is an XML-based standard: messages are encoded using XML (eXtensible Markup Language), which has been used extensively as a portable way of specifying data interchange formats (W3C, 2008). A machine-readable XML schema defines the format of MTConnect messages and how the data items within those messages are represented. At the time of publication, the latest version of the MTConnect standard defining the schema is that of (MTConnect, 2008b).

The MTConnect protocol includes the following information about a device:
·Identity of the device
·Identity of all the independent components of the device
·Design characteristics of the device
·Data occurring in real or near real-time in the device that can be utilized by other devices or applications. The types of data that can be addressed include: physical and actual device design data; measurement or calibration data; near-real-time data from the device.

Figure 1 shows an example of a data gathering setup using MTConnect. Data is gathered in near real-time from a machine tool and from thermal sensors attached to it. The data stored by the MTConnect protocol for this setup is shown in Table 1. Specialized adapters parse the data from the machine tool and from the sensor devices into a format that can be understood by the MTConnect agent, which in turn organizes the data into the MTConnect XML schema. Software tools can then be developed that operate on the XML data from the agent; a minimal example of such a tool is sketched below.
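The following is a minimal sketch of such a software tool, not part of the original paper. It assumes an MTConnect agent is reachable at a hypothetical address and serves the standard's /current XML snapshot over HTTP; the tag names matched here are illustrative.

```python
# Minimal sketch of an MTConnect client tool (illustrative, not from the paper).
# AGENT_URL is a hypothetical address; /current returns the agent's latest
# XML snapshot of all data items, per the MTConnect standard.
from urllib.request import urlopen
import xml.etree.ElementTree as ET

AGENT_URL = "http://example-agent:5000/current"  # hypothetical agent address

def read_current_samples(url=AGENT_URL):
    """Fetch the agent's current snapshot and collect selected data items."""
    with urlopen(url) as response:
        root = ET.fromstring(response.read())
    samples = []
    for elem in root.iter():
        tag = elem.tag.split("}")[-1]  # strip the XML namespace prefix
        if tag in ("Position", "SpindleSpeed", "Temperature"):  # illustrative tags
            samples.append((tag, elem.get("timestamp"), elem.text))
    return samples

if __name__ == "__main__":
    for tag, timestamp, value in read_current_samples():
        print(tag, timestamp, value)
```

Because the client matches schema element names rather than device-specific identifiers, the same sketch would run unchanged against any compliant agent, which is the point made in the next paragraph.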
Since the XML schema is standardized, such software tools can be blind to the specific configuration of the equipment from which the data is gathered.

FIGURE 1: MTCONNECT SETUP.

TABLE 1: MTCONNECT PROTOCOL INFORMATION FOR MACHINE TOOL IN FIGURE 1.
Device identity: "3-Axis Milling Machine"
Device components: 1 X Axis; 1 Y Axis; 1 Z Axis; 2 Thermal Sensors
Device design characteristics: X Axis Travel: 6"; Y Axis Travel: 6"; Z Axis Travel: 12"; Max Spindle RPM: 24000
Data occurring in device: Tool position: (0,0,0); Spindle RPM: 1000; Alarm Status: OFF; Temp Sensor 1: 90°F; Temp Sensor 2: 120°F

An added benefit of XML is that it is a hierarchical representation, and this is exploited by designing the hierarchy of the MTConnect schema to resemble that of a conventional machine tool. The schema itself functions as a metaphor for the machine tool and makes the parsing and encoding of messages intuitive. Data items are grouped based on their logical organization, not on their physical organization. For example, Figure 2 shows the XML schema associated with the setup shown in Figure 1. Although the temperature sensors operate independently of the machine tool (with their own adapter), the data from the sensors are associated with specific components of the machine tool, and hence the temperature data is a member of the machine tool's hierarchy. The next section discusses applying MTConnect to estimating cycle time in high-speed machining.

FIGURE 2: MTCONNECT HIERARCHY.

ACCURATE CYCLE TIME ESTIMATES
In high speed machining processes there can be discrepancies between the actual feedrates during cutting and the required (or commanded) feedrates. These discrepancies depend on the design of the controller used in the machine tool and on the toolpath geometry. While there have been innovative controller designs that minimize the feedrate discrepancy (Sencer, 2008), most machine tools in conventional industrial facilities have commercial off-the-shelf controllers that show some feedrate discrepancy, especially when machining complex geometries at high speeds. Simple tools are needed to estimate the discrepancy under these machining conditions. Apart from influencing the surface quality of the machined parts, feedrate variation can lead to inaccurate estimates of the cycle time during machining. Accurate cycle time estimates are a critical requirement in planning complex machining operations in manufacturing facilities. The cycle time is needed both for scheduling the part in a job shop and for costing the part. Inaccurate cycle time estimates (especially when the feed is overestimated) can lead to uncompetitive estimates for the cost of the part and unrealistic estimates for the cycle time.

Related Work
de Souza and Coelho (2007) presented a comprehensive set of experiments to demonstrate feedrate limitations during the machining of freeform surfaces. They identified the causes of feedrate variation as the dynamic limitations of the machine, the block processing time of the CNC, and the feature size in the toolpaths. Significant discrepancies were observed between the actual and commanded feeds when machining with linear interpolation (G01). The authors used a custom monitoring and data logging system to capture the feedrate variation in the CNC controller during machining. Sencer et al. (2008) presented feed scheduling algorithms to minimize the machining time for 5-axis contour machining of sculptured surfaces.
Their algorithm optimized the feedrate profile for minimum machining time while observing constraints on the smoothness of the feedrate, acceleration, and jerk of the machine tool drives. This follows earlier work on minimizing the machining time in 3-axis milling using similar feed scheduling techniques (Altintas, 2003). While these methods are very effective in improving the cycle time of complex machining operations, they can be difficult to apply in conventional factory environments as they require specialized control systems. The methods we discuss in this paper do not address the optimization of cycle time during machining. Instead, we provide simple tools to estimate the discrepancy in feedrates during machining and use this in estimating the cycle time for arbitrary parts.

Methodology
During G01 linear interpolation, the chief determinant of the maximum achievable feedrate is the spacing between adjacent points (the G01 step size). We focus on G01 interpolation as it is used extensively when machining simultaneously in 3 or more axes. The cycle time for a machine tool to machine an arbitrary part (using linear interpolation) is estimated based on the maximum feed achievable by the machine tool at a given path spacing. MTConnect is a key enabler in this process as it standardizes both the data collection and the analysis.

The maximum achievable feedrate is estimated using a standardized test G-code program, which machines a simple shape with progressively varying G01 path spacings. The program is executed on an MTConnect-compliant machine tool, and the position and feed data from the machine tool are logged in near real-time. The feedrate during cutting at the different spacings is then analyzed, and a machine tool "calibration" curve is developed, which identifies the maximum feedrate possible at a given path spacing.

FIGURE 3: METHODOLOGY FOR ESTIMATING CYCLE TIME.

Conventionally, the cycle time for a given toolpath is estimated by summing the time taken for the machine tool to process each block of G-code, calculated as the distance travelled in that block divided by the feedrate of the block. For a given arbitrary part G-code to be executed on a machine tool, the cycle time is estimated using the calibration curve as follows. For each G01 block in the program, the size of the step is calculated (the distance between the points the machine tool is interpolating), and the maximum feedrate possible at this step size is looked up from the calibration curve. If the maximum feedrate is smaller than the commanded feedrate, the line of G-code is modified to machine at the (lower) achievable feedrate; if the maximum feedrate is greater, the line is left unmodified. This is performed for all G01 lines in the program, and finally the cycle time of the modified G-code program is estimated in the conventional way. This methodology is shown in Figure 3, and a sketch of the computation is given below.
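The following is a minimal sketch of the estimation procedure just described, not code from the paper. It assumes the calibration curve has been reduced to a single linear coefficient `slope` (in/min per inch of spacing, as fitted in the Results below), and the G-code handling is simplified to G01 blocks with explicit X/Y/Z words and a modal F word.

```python
# Minimal sketch of calibration-curve cycle-time estimation (illustrative).
# Assumes a linear calibration curve: max_feed = slope * spacing.
import math
import re

WORD = re.compile(r"([XYZF])(-?\d+\.?\d*)")  # crude G-code word parser

def estimate_cycle_time(gcode_lines, slope, commanded_feed):
    """Return the estimated cycle time in minutes for a list of G01 blocks."""
    pos = (0.0, 0.0, 0.0)      # starting position assumed at the origin
    feed = commanded_feed      # modal feedrate, in/min
    total_time = 0.0
    for line in gcode_lines:
        if "G01" not in line and "G1 " not in line:
            continue           # only linear-interpolation blocks are handled
        words = dict(WORD.findall(line))
        new_pos = tuple(float(words.get(axis, old))
                        for axis, old in zip("XYZ", pos))
        if "F" in words:
            feed = float(words["F"])
        step = math.dist(new_pos, pos)      # G01 step size, inches
        if step > 0.0:
            achievable = slope * step       # calibration-curve lookup
            total_time += step / min(feed, achievable)
        pos = new_pos
    return total_time
```

Capping each block at `min(feed, achievable)` is exactly the "modify the line to the lower achievable feedrate" rule above; summing `step / feed` over the modified blocks is the conventional estimate.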
The next section discusses an example applying this methodology on a machine tool.

Results
We implemented the cycle time estimation method on a 3-axis machine tool with a conventional controller. The calibration curve of this machine tool was computed by machining a simple circular feature at the following linear spacings: ”, ”, ”, ”, ”, ”, ”, ”, ”. We confirmed that the radius of the circle (that is, the curvature of the toolpath) had no effect on the achieved feedrate by testing with circular features of radius ”, ”, and ”, and observing the same maximum feedrate in all cases. Table 2 shows the maximum achievable feedrate at each path spacing when using a circle of radius 1". We can see from the table that the maximum achievable feedrate is a linear function of the path spacing. Using a linear fit, the calibration curve for this machine tool can be estimated; Figure 4 plots the calibration curve. The relationship between the feedrate and the path spacing is linear because the block processing time of the machine tool controller is constant at all feedrates. The block processing time determines the maximum feedrate achievable for a given spacing, as it is the time the machine tool takes to interpolate one block of G-code. As the path spacing (or interpolation distance) increases linearly, the speed at which it can be interpolated also increases linearly. The relationship for the data in Figure 4 is:

MAX FEED (in/min) = 14847 × SPACING (in)

For example, at a path spacing of 0.005 in, this fit predicts a maximum feedrate of about 74 in/min.

TABLE 2: MAXIMUM ACHIEVABLE FEEDRATE AT VARYING PATH SPACINGS. (Columns: Spacing | Maximum Feedrate.)

We also noticed that the maximum feedrate for a given spacing was unaffected by the commanded feedrate, as long as it was less than the commanded feedrate. This means it was adequate to compute the calibration curve by commanding the maximum possible feedrate on the machine tool.

FIGURE 4: CALIBRATION CURVE FOR MACHINE TOOL.

Using this calibration curve, we estimated the cycle time for machining an arbitrary feature on this machine tool. The feature we used was a 3D spiral with a smoothly varying path spacing, shown in Figure 5. The spiral path is described exclusively using G01 steps and involves simultaneous 3-axis interpolation. The path spacing of the G-code blocks for the feature is shown in Figure 6.

FIGURE 5: 3D SPIRAL FEATURE.
FIGURE 6: PATH SPACING VARIATION WITH G-CODE LINE FOR SPIRAL FEATURE.

Figure 7 shows the predicted feedrate based on the calibration curve for machining the spiral shape at 100 inches/min, compared to the actual feedrate during machining. The feedrate predicted by the calibration curve matches the actual feedrate very closely. We can also observe the linear relationship between path spacing and maximum feedrate by comparing Figures 6 and 7.

FIGURE 7: PREDICTED FEEDRATE COMPARED TO MEASURED FEEDRATE FOR SPIRAL FEATURE AT 100 IN/MIN.
FIGURE 8: ACTUAL CYCLE TIME TO MACHINE SPIRAL FEATURE AT DIFFERENT FEEDRATES.

The cycle time for machining the spiral at different commanded feedrates was also estimated using the calibration curve. Figure 8 shows the actual cycle time taken to machine the spiral feature at different feedrates. Notice that the trend is non-linear – an increase in feed does not yield a proportional decrease in cycle time – implying that there is some feedrate discrepancy at high feeds. Figure 9 compares the theoretical cycle time at different feedrates to the actual cycle time and the model-predicted cycle time. The model predictions match the actual cycle times very closely (within 1%). Significant discrepancies are seen between the theoretical cycle time and the actual cycle time when machining at high feed rates. These discrepancies can be explained by the difference between the block processing time of the controller and the time spent on each block of G-code during machining.
At high feedrates, the time spent on each block is shorter than the block processing time, so the controller slows down the interpolation, resulting in a discrepancy in the cycle time. These results demonstrate the effectiveness of using the calibration curve to estimate the feed, and ultimately to estimate the cycle time. This method can be extrapolated to multi-axis machining by measuring the feedrate variation for linear interpolation in specific axes. We can also specifically correlate the feed in one axis to the path spacing instead of to the overall feedrate.

FIGURE 9: ACTUAL OBSERVED CYCLE TIMES AND PREDICTED CYCLE TIMES COMPARED TO THE NORMALIZED THEORETICAL CYCLE TIMES FOR MACHINING SPIRAL FEATURE AT DIFFERENT FEEDRATES.

TOOL POSITION VERIFICATION
MTConnect data can also be used to verify toolpath planning for the machining of complex parts. Toolpaths for machining complex features are usually designed using specialized CAM algorithms, and traditionally the effectiveness of the toolpaths in creating the required part features is verified either with computer simulations of the toolpath or by surface metrology of the machined part. The former approach is not very accurate, as the toolpath commanded to the machine tool may not match the actual toolpath travelled during machining. The latter approach, while accurate, tends to be time consuming and expensive, and requires the analysis and processing of 3D metrology data (which can be complex). Moreover, errors in the features of a machined part are not solely due to toolpath errors, and using metrology data for toolpath verification may conflate toolpath errors with process dynamics errors. In previous work we discussed a simple way to verify toolpath planning by overlaying the actual tool positions against the CAM-generated tool positions (Vijayaraghavan, 2008). We now discuss a more intuitive method to verify the effectiveness of machining toolpaths, in which data from MTConnect-compliant machine tools is used to create a solid model of the machined features for comparison with the desired features.

Related Work
The manufacturing community has focused extensively on developing process planning algorithms for the machining of complex parts. Elber (1995), in one of the earliest works in the field, discussed algorithms for toolpath generation for 3- and 5-axis machining. Wright et al. (2004) discussed toolpath generation algorithms for the finish machining of freeform surfaces, based on the geometric properties of the surface features. Vijayaraghavan et al. (2009) discussed methods to vary the spacing of raster toolpaths and to optimize the orientation of workpieces in freeform surface machining. The efficiency of these methods was validated primarily by metrology and testing of the machined parts.

Methodology
To verify toolpath planning effectiveness, we log the actual cutting tool positions during machining from an MTConnect-compliant machine tool, and use the positions to generate a solid model of the machined part. The discrepancy between the features traced by the actual toolpath and the required part features can be computed by comparing these two solid models. The solid model of the machined part is obtained from the tool positions as follows:
·Create a 3D model of the tool
·Create a 3D model of the stock material
·Compute the swept volume of the tool as it traces the tool positions (using the logged data)
·Subtract the swept volume of the tool from the stock material
The remaining volume of material is a solid model of the actual machined part. The two models can then be compared using 3D boolean difference (or subtraction) operations, as sketched below.
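The paper's implementation of this comparison used Vericut (see the Results below). As an illustration only, here is a deliberately simplified 3-axis, height-map analogue of the same swept-volume subtraction, with a flat-end tool and a grid-based stock; all names, sizes, and tolerances are hypothetical.

```python
# Simplified height-map analogue of swept-volume verification (illustrative;
# the paper's actual implementation used Vericut solid models).
import numpy as np

def machined_heightmap(positions, stock_shape=(200, 200), stock_z=1.0,
                       tool_radius=5.0, cell=1.0):
    """Sweep a flat-end tool through logged (x, y, z) positions over a grid.

    The stock starts as a uniform height map; each logged position lowers
    every grid cell under the tool footprint to at most that z level.
    """
    z_map = np.full(stock_shape, stock_z)
    yy, xx = np.mgrid[0:stock_shape[0], 0:stock_shape[1]]
    for x, y, z in positions:
        footprint = (xx - x / cell) ** 2 + (yy - y / cell) ** 2 <= tool_radius ** 2
        z_map[footprint] = np.minimum(z_map[footprint], z)
    return z_map

def feature_discrepancy(actual, required, tol=0.01):
    """Boolean-difference analogue: cells deviating from the required
    surface by more than tol (same idea as the shaded regions of Fig. 10)."""
    return np.abs(actual - required) > tol

# Usage sketch: positions would come from the MTConnect position log, e.g.
# logged = [(50.0, 50.0, 0.5), (60.0, 50.0, 0.5), ...]
```

A full implementation would use true swept-volume booleans on solid models, as the listed steps describe; the height map merely makes the subtract-and-compare idea concrete.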
Results
We implemented this verification scheme by logging the cutter positions from an MTConnect-compliant 5-axis machine tool. The procedure to obtain the solid model from the tool positions was implemented in Vericut. The two models were compared using a boolean difference operation in Vericut, which identified the regions of the actual machined part that differed from the required solid model. An example applying this method to a feature is shown in Figure 10.

FIGURE 10: A – SOLID MODEL OF REQUIRED PART; B – SOLID MODEL OF PART FROM TOOL POSITIONS SHOWING DISCREPANCIES BETWEEN ACTUAL PART FEATURES AND REQUIRED PART FEATURES. SHADED REGIONS DENOTE ~” DIFFERENCE IN MATERIAL REMOVAL.

DISCUSSION AND CONCLUSIONS
MTConnect makes it very easy to standardize data capture from disparate sources and to develop common planning and verification applications. The importance of standardization cannot be overstated here – while it has always been possible to get process data from machine tools, doing so is generally cumbersome and time consuming because different machine tools require different methods of accessing data. Data analysis was also challenging to standardize, as the data came in different formats and custom subroutines were needed to process and analyze data from different machine tools. With MTConnect, the data gathering and analysis process is standardized, resulting in significant cost and time savings. This allowed us to develop the verification tools independent of the machine tools they were applied to. It also allowed us to rapidly deploy these tools in an industrial environment without overheads (especially from the machine tool sitting idle). The toolpath verification was performed with minimal user intervention on a machine that was being actively used in a factory. The only setup needed was to initially configure the machine tool to output MTConnect-compliant data; since this is a one-time activity, it has an almost negligible impact on the long-term utilization of the machine tool.

Successful implementation of data capture and analysis applications over MTConnect requires a robust characterization of the data capture rates and the latency of the streaming information. Current implementations of MTConnect run over Ethernet, and a data rate of about 10-100 Hz was observed in normal conditions (with no network congestion). While this is adequate for geometric analysis (such as the examples in this paper), it is not adequate for real-time process monitoring applications, such as sensor data logging. More work is needed on the MTConnect software libraries so that acceptable data rates and latencies can be achieved.

One of the benefits of MTConnect is that it can act as a bridge between academic research and industrial practice. Researchers can develop tools that operate on standardized data and are no longer encumbered by specific data formats and requirements. The tools can then be easily applied in industrial settings, as the framework required to implement the tools on a specific machine or system is already in place.
Greater use of interoperability standards by the academic community in manufacturing research will lead to faster dissemination of research results and closer collaboration with industry.

ACKNOWLEDGEMENTS
We thank the reviewers for their valuable comments. MTConnect is supported by AMT – The Association For Manufacturing Technology. We thank Armando Fox from the RAD Lab at UC Berkeley, and Paul Warndorf from AMT for their input. Research at UC Berkeley is supported by the Machine Tool Technology Research Foundation and the industrial affiliates of the Laboratory for Manufacturing and Sustainability.

REFERENCES
Altintas, Y., and Erkorkmaz, K., 2003, "Feedrate Optimization for Spline Interpolation in High Speed Machine Tools", CIRP Annals – Manufacturing Technology, 52(1), pp. 297-302.
de Souza, A. F., and Coelho, R. T., 2007, "Experimental Investigation of Feedrate Limitations on High Speed Milling Aimed at Industrial Applications", Int. J. of Adv. Manuf. Tech., 32(11), pp. 1104-1114.
Elber, G., 1995, "Freeform Surface Region Optimization for 3-Axis and 5-Axis Milling", Computer-Aided Design, 27(6), pp. 465-470.
MTConnect, 2008b, MTConnect Standard.
Sencer, B., Altintas, Y., and Croft, E., 2008, "Feed Optimization for Five-axis CNC Machine Tools with Drive Constraints", Int. J. of Mach. Tools and Manuf., 48(7), pp. 733-745.
Vijayaraghavan, A., Sobel, W., Fox, A., Warndorf, P., and Dornfeld, D. A., 2008, "Improving Machine Tool Interoperability with Standardized Interface Protocols", Proceedings of ISFA.
Vijayaraghavan, A., Hoover, A., Hartnett, J., and Dornfeld, D. A., 2009, "Improving Endmilling Surface Finish by Workpiece Rotation and Adaptive Toolpath Spacing", Int. J. of Mach. Tools and Manuf., 49(1), pp. 89-98.
World Wide Web Consortium (W3C), 2008, "Extensible Markup Language (XML)".
Wright, P. K., Dornfeld, D. A., Sundararajan, V., and Misra, D., 2004, "Tool Path Generation for Finish Machining of Freeform Surfaces in the Cybercut Process Planning Pipeline", Trans. of NAMRI/SME, 32, pp. 159-166.
On-line Full-Scanning Inspection of Particle Size and Shape Using Digital Image Processing
School of Mechanical Engineering, Taiwan University of Science and Technology, Taipei 106, Taiwan, China

Abstract: An on-line full-scanning inspection system was developed for particle size analysis. A particle image is first acquired with a line-scan technique and then analyzed using digital image processing. The system consists of a particle separation module, an image acquisition module, an image processing module, and an electrical control module. The particles used in the experiments were non-uniform, 0.1 mm particles. The main advantage of this system is that the particle population is analyzed completely, with no overlapped or missing particles, which overcomes the acquisition problems of area-scan charge-coupled devices (CCDs). The repeatability of the system for particle size distribution, roundness, and sphericity is within a deviation of about ±1%. The developed system proves convenient and flexible for academic and industrial users inspecting particles of any size and shape.
© 2010 Institute of Process Engineering (Particuology), Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

Keywords: Particle size distribution; Particle characterization; Image analysis; Line-scan CCD; Automated inspection

1 Introduction
Automated visual inspection (AVI) systems have recently been studied as machine-vision systems for inspection in a variety of industrial products, for example in semiconductor manufacturing (Chou, Rao, Sturzenbecker, Wu, & Brecher, 1997), web materials (Brzakovic & Vujovic, 1992), metallurgy (Caron, Duvieubourg, Orteu, & Revolte, 1997), and packaging (Caron, Duvieubourg, & Postaire, 1997). Machine vision can effectively improve, and ultimately replace, manual inspection. Traditional particle size analysis involves drying and mixing the raw material and then separating it by hand sieving for a specific purpose. Since the bonds between particles are very weak, the particles may break apart during the vibration and inspection, and there is also uncertainty in the particle size measurement. Therefore, non-contact image processing and analysis have become important methods for particle inspection (Banta, Cheng, & Zaniewski, 2003; Mora, Kwan, & Chan, 1998).
This study aims to develop a full-scanning inspection system for particle size analysis that acquires particle images with a line-scan technique and analyzes them with digital image processing. The advantage of the system is a complete analysis with no overlapped or missing particles, which solves the acquisition problems of area-scan charge-coupled devices (CCDs). It also differs from traditional image analysis methods such as the Hough transform (Duda & Hart, 1972) and iterative methods (Sakaue & Takagi, 1983). The full-scanning inspection system comprises a separation module, an image acquisition module, an image processing and analysis module, and an electrical control module, which together offer the following advantages: (1) fast measurement, processing and analyzing images at 0.05 s/frame, much shorter than previously reported times (15 s/frame, Banta et al., 2003; 4-10 s/frame, Song et al., 1998; 0.14 s/frame, Tom & John, 1998); (2) small space requirements and lower cost, with increased accuracy and precision (repeatability of about ±1%); (3) image acquisition in a vertical zone where the particles fall freely, which eliminates the overlap errors introduced when particles are transported on a conveyor belt.
2 Experimental setup
2.1 Modules
The experimental system consists of three sub-modules: a particle separation module, an image acquisition module, and an electrical control module, each designed, adjusted, assembled, and tested as an independent unit, as shown in Fig. 1. The feeding system is designed to separate individual particles without breaking their structure: driven by an ultra-high-frequency vibration motor, the particles fall freely and are uniformly distributed, which prevents particles from overlapping in the image acquisition zone and causing image analysis errors. As shown in Fig. 2, the particle images are captured by a line-scan CCD camera through the relative motion between the falling particles and the camera. The acquired images are stored frame by frame in computer memory, so that no image lag, loss, or overlap occurs during the image acquisition stage. A full scanned image of the particles is thereby obtained for analysis.
Fig. 1: Schematic of the experimental setup. Fig. 2: Structural model of image acquisition. Fig. 3: Calibration template.

2.2 Calibration
Calibration is the process of converting image pixels into real-world physical units. A standard sphere of diameter D (6 mm) is used to obtain the pixel counts in the horizontal and vertical directions, Px and Py, as shown in Fig. 3. The scale factors in the x and y directions are then:

Rx = D/Px = 6/94 = 0.064 (mm/pixel) (1)
Ry = D/Py = 6/94 = 0.064 (mm/pixel) (2)

3 Image processing and analysis
The image processing flow chart is shown in Fig. 4.
It includes operations such as thresholding, image inversion, image masking, and image merging. With these tools the particle images (see Fig. 5) are processed and analyzed, and information such as the particle size distribution, cumulative weight percentage, roundness, and sphericity can be obtained.

Fig. 4: Image processing flow chart. Fig. 5: A frame image of particles.

3.1 Thresholding
Thresholding first converts the grayscale image into a binary image: pixels of interest whose value is greater than or equal to the threshold (k) are set to 1, while the rest of the image is set to 0, yielding the object contours, as shown in Fig. 6(a) and (b). The binary image B(x, y) is expressed by Eq. (3). To distinguish the particles from the background, the particle contour image shown in Fig. 6 is selected. The binary image B(x, y) is inverted to B'(x, y) simply by changing binary values from 0 to 1 and vice versa:

B(x, y) = 1 if f(x, y) ≥ k, and 0 otherwise (3)

where f(x, y) is the gray level of a pixel, B(x, y) is the corresponding binary value of the pixel, and B'(x, y) is its inverted binary value. To obtain the threshold (k), we used the metric auto-thresholding method provided by IMAQ Vision. The threshold is the pixel value k that minimizes the following expression (National Instruments, 2000):

Σ_{i=0}^{k} h(i)·|i − μ1| + Σ_{i=k+1}^{N−1} h(i)·|i − μ2| (4)

where μ1 is the mean of all pixel values in the image between 0 and k, μ2 is the mean of all pixel values between k+1 and 255, N is the total number of gray levels in the image (256 for an 8-bit image), and h(i) is the number of pixels in the digital image at gray level i. A sketch of this threshold selection is given below.
1μ3.2 边界提取和区域填充通过边界轮廓提取粒子的轮廓图如图(6)所示。
当一个不规则的颗粒表面受到反射光的影响,孔经常出现在阈值处理后的图像颗粒表面。
在这种情况下,可以使颗粒利用区域填充算法使区域填充成为一个完整的学科。
(Gonzalez & Woods, 2002),如图(6)(e)所示。
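The following is a minimal sketch of the filling and boundary-extraction steps, not part of the original paper. It assumes a binary particle image in a NumPy array and uses scipy.ndimage as a stand-in for the IMAQ tools used in the paper.

```python
# Minimal sketch of region filling and boundary extraction (illustrative).
import numpy as np
from scipy import ndimage

def fill_particles(binary):
    """Fill interior holes caused by surface reflections (cf. Fig. 6(e))."""
    return ndimage.binary_fill_holes(binary)

def extract_boundary(binary):
    """Boundary = filled image minus its erosion (cf. Fig. 6(d))."""
    filled = ndimage.binary_fill_holes(binary)
    return filled & ~ndimage.binary_erosion(filled)
```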
Fig. 6: Image processing for Analysis 1: (a) grayscale image; (b) binary image; (c) image inversion; (d) boundary extraction; (e) region filling; (f) border removal.

3.3 Border removal and border keeping
Removing the border particles helps avoid analysis errors caused by the incomplete particles at the frame edges, as shown in Fig. 6(f). These incomplete edge particles are handled in a later merging step, in which the border-removed images are merged with their counterparts. The border-removed image A_R can be expressed as:

A_R = A_F − A_K (5)

where A_F is the image after region filling and A_K is the border-kept image. For image masking and merging, keeping the image borders is a preprocessing step: the border-keeping function clears the interior particles and keeps only those on the border, as shown in Fig. 7. After border keeping, the image A_K can be expressed as:

A_K = A_F − A_R (6)

3.4 Image masking and merging
To achieve a complete analysis, the incomplete particles at the frame borders must also be analyzed.
Given N frames of images, we mask the upper half of the first frame and the lower half of the second frame, and then merge the two remaining halves into a new, complete image for analysis, as shown in Fig. 7(a)-(c). This process continues until the lower half of frame (N−1) has been merged with the upper half of frame N, for example as sketched below.
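The following is a minimal sketch of the half-frame merging rule, not part of the original paper. It assumes equal-sized binary frames stacked in scan order (the bottom rows of frame i are physically adjacent to the top rows of frame i+1); the array names are illustrative.

```python
# Minimal sketch of masking-and-merging across consecutive frames
# (illustrative). Particles cut by a frame boundary are reassembled once.
import numpy as np

def merged_frames(frames):
    """Yield N-1 merged images from a list of N consecutive binary frames."""
    for a, b in zip(frames[:-1], frames[1:]):
        h = a.shape[0] // 2
        merged = np.zeros_like(a)
        merged[:h] = a[h:]   # lower half of frame i, moved to the top
        merged[h:] = b[:h]   # upper half of frame i+1, placed directly below
        yield merged
```

Each merged image restores the continuity across the original frame boundary, so a particle split between two frames appears whole in exactly one merged image.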
Fig. 7: Image processing for Analysis 2: (a) border keeping; (b) image masking; (c) image merging.

3.5 Particle analysis
An irregular particle with centroid o, minimum radius a, and maximum radius b is approximated by an equivalent ellipse with semi-axes a and b, as shown in Fig. 8.

Fig. 8: (a) particle; (b) ellipse. Contour of an irregular particle and its equivalent ellipse.

From the equivalent ellipse the particle area and volume are estimated (Biryukov, Faiman, & Goldfeld, 1999), and the particle weight can then be calculated from its density. Additional information is then obtained, such as the particle count, the weight percentage of the particles, and the roundness and sphericity (shape factor), as shown in Table 1. The area of a particle (from its equivalent ellipse) and its volume (equivalent to a spheroid) are given by:

Area: A = πab (7)
Volume: V = (4/3)πa²b (8)

where a and b denote the two semi-axes of the equivalent ellipse. As shown in Eq. (9), the roundness describes the deviation of the projected two-dimensional image of a particle: the roundness value approaches 1 when a particle is circular, as shown in Fig. 9 (Krumbein & Sloss, 1963).

Fig. 9: Roundness.

Roundness = 4πA / p² (9)

where A is the particle area and p is the particle perimeter. The sphericity (shape factor, SF) of a particle describes its three-dimensional deviation: the more spherical the particle, the closer the value of SF is to 0 (Yen et al., 1998). The sphericity is calculated in the following steps (Carter & Yan, 2005):

r = sqrt((x − x_o)² + (y − y_o)²) (10)
r_mean = (1/n) Σ_{i=1}^{n} r_i (11)
r_rms.d = sqrt( (1/n) Σ_{i=1}^{n} (r_i − r_mean)² ) (12)
S_F = r_rms.d / r_mean (13)

where r_rms.d is the root mean square (RMS) deviation of the radial lengths, r is the radial length from the centroid o to a boundary point (x, y), r_mean is the mean radial length, and a and b are the minimum and maximum radial lengths, respectively. A sketch of these shape measures is given below.
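The following is a minimal sketch of Eqs. (9)-(13) for a single particle, not part of the original paper. It assumes a filled binary mask in a NumPy array; the boundary-pixel count used for the perimeter is a rough estimate, so the values are approximate compared with what a vision package would report.

```python
# Minimal sketch of roundness, Eq. (9), and shape factor SF, Eqs. (10)-(13)
# (illustrative; perimeter is estimated by counting boundary pixels).
import numpy as np
from scipy import ndimage

def shape_measures(mask):
    """Return (roundness, SF) for one filled binary particle mask."""
    mask = mask.astype(bool)
    area = mask.sum()                                    # A, in pixels
    boundary = mask & ~ndimage.binary_erosion(mask)
    perimeter = boundary.sum()                           # rough estimate of p
    roundness = 4.0 * np.pi * area / perimeter ** 2      # Eq. (9)
    ys_m, xs_m = np.nonzero(mask)
    yc, xc = ys_m.mean(), xs_m.mean()                    # particle centroid o
    ys, xs = np.nonzero(boundary)
    r = np.hypot(xs - xc, ys - yc)                       # Eq. (10): radial lengths
    r_mean = r.mean()                                    # Eq. (11)
    r_rmsd = np.sqrt(((r - r_mean) ** 2).mean())         # Eq. (12)
    sf = r_rmsd / r_mean                                 # Eq. (13)
    return roundness, sf
```

Radial lengths in pixels can be converted to millimeters with the scale factors of Eqs. (1) and (2) before reporting sizes.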
Table 1: Image analysis results of a sampling experiment
Size (mm)    Count  Weight (g)  Weight (%)  Roundness  Sphericity
+2           638    35.18       32.93       0.75       0.14
-2+1.68      1421   51.64       48.33       0.72       0.17
-1.68+1.41   713    17.68       16.55       0.67       0.21
-1.41+1.19   128    2.12        1.98        0.59       0.28
-1.19+0.84   23     0.22        0.21        0.56       0.32
-0.84+0.59   0      0           0           0          0
-0.59        0      0           0           0          0

4 Results and discussion
4.1 Repeatability
Because the particle samples are brittle, two batches of high-quality fine sand were used for the repeatability tests of measurement precision, and the experiment was repeated five times for each batch.
The results are listed in Tables 2 and 3.
Table 2: Repeatability test, batch 1 (wt%)
Size (mm)  Test 1  Test 2  Test 3  Test 4  Test 5  Error (max-min)
+2.5       3.80    3.98    4.55    4.48    4.40    0.75
-2.5+2.0   12.48   12.62   13.01   12.96   12.82   0.53
-2.0+1.5   28.09   27.01   27.98   27.53   27.63   1.08
-1.5+1.0   23.46   23.21   23.44   22.96   22.32   1.14
-1.0+0.5   17.73   18.36   17.69   18.06   18.90   1.21
-0.5+0.25  10.07   9.54    9.03    9.41    9.56    1.04
-0.25      4.38    5.28    4.30    4.60    4.37    0.98
Total      100     100     100     100     100

Table 3: Repeatability test, batch 2 (wt%)
Size (mm)  Test 1  Test 2  Test 3  Test 4  Test 5  Error (max-min)
+2.5       14.40   13.67   14.95   13.58   13.35   1.60
-2.5+2.0   12.93   14.23   12.28   13.99   13.45   1.95
-2.0+1.5   16.72   16.11   16.88   17.01   15.72   1.29
-1.5+1.0   22.47   23.27   22.00   22.49   22.85   1.27
-1.0+0.5   22.53   22.19   22.70   22.40   22.76   0.57
-0.5+0.25  8.49    8.12    8.58    8.17    9.02    0.90
-0.25      2.47    2.42    2.62    2.26    2.85    0.59
Total      100     100     100     100     100

Tables 2 and 3 list the measured weight percentages for each test.