Foreign Literature Translation: Design and Implementation of a Dormitory Management System Based on C#
MVC Framework: Chinese-English Foreign Literature Translation (the document contains the English original and the Chinese translation)
Translated text: The Spring MVC Framework under Web 2.0
Abstract - When building web applications with a rich user experience, a large number of web application frameworks are available, yet there is little guidance on which one to choose.
Web 2.0 applications allow individuals to manage their own online pages and to share them with other online users and servers. Such sharing has to be realized through access control. However, existing access control solutions are unsatisfactory, because they cannot meet users' functional needs in an open, user-driven web environment.
Among all web development frameworks, the MVC framework is the most popular. Model-View-Controller (MVC) is a software architecture that is now regarded as an architectural pattern in software engineering. The pattern separates the "domain logic" (the application logic from the user's point of view) from the user interface (input and presentation), allowing each separated part to be developed, tested, and maintained independently. An application built on the MVC model is divided into distinct layers, with loose coupling established between each pair of them.
Keywords - Spring MVC, architecture, XStudio, SOA, controller

I. Introduction
How exactly do we define a website as "Web 2.0"? There are many different views on this, which makes it hard to pin down a precise definition. It becomes clearer, however, once we walk through all of the web development frameworks.
The various architectures for web development are as follows:
● N-tier architecture: In software engineering, a multi-tier architecture (often called an n-tier architecture) is a client-server architecture in which the presentation layer, the application processing layer, and the data management layer are handled as logically separate processes. For example, an application that uses middleware to service data requests between the user and the database employs a multi-tier architecture. The most widely used multi-tier architecture is the three-tier architecture. N-tier application architecture provides developers with a model for creating flexible and reusable applications. By breaking the application into tiers, developers only have to modify or add a specific tier rather than rewrite the entire application. It requires a presentation tier, a business or data access tier, and a data tier. The terms layer and tier are often used interchangeably. A minimal code sketch of this layered MVC separation is given below.
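To make the layering described above concrete, here is a minimal sketch of the model/view/controller split. It is written in C# to match the rest of this compilation rather than the Java that Spring MVC itself uses, and every name in it (Student, StudentRepository, and so on) is invented for illustration only.

using System;
using System.Collections.Generic;

// Model: the domain data plus the logic that owns it (a trivial in-memory store here).
public class Student
{
    public int Id;
    public string Name;
    public Student(int id, string name) { Id = id; Name = name; }
}

public class StudentRepository
{
    private readonly Dictionary<int, Student> _store = new Dictionary<int, Student>();
    public void Add(Student s) { _store[s.Id] = s; }
    public Student Find(int id) { return _store.TryGetValue(id, out var s) ? s : null; }
}

// View: only knows how to present whatever it is handed.
public class StudentView
{
    public void Render(Student s)
    {
        Console.WriteLine(s == null ? "Not found" : s.Id + ": " + s.Name);
    }
}

// Controller: receives the "request", consults the model, selects the view.
public class StudentController
{
    private readonly StudentRepository _repo;
    private readonly StudentView _view;

    public StudentController(StudentRepository repo, StudentView view)
    {
        _repo = repo;
        _view = view;
    }

    public void Show(int id) { _view.Render(_repo.Find(id)); }
}

public static class Demo
{
    public static void Main()
    {
        var repo = new StudentRepository();
        repo.Add(new Student(1, "Alice"));
        new StudentController(repo, new StudentView()).Show(1);   // prints "1: Alice"
    }
}

Because the controller talks to the model and the view only through their public members, any one of the three layers can be replaced or tested on its own, which is exactly the loose coupling the text describes.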
Chinese-English Translation: The Design of Central Processing Units

Abstract: The CPU (central processing unit) is a key component of a digital computer. Its purpose is to decode the instructions it receives from memory and to perform transfer, arithmetic, logic, and control operations on data stored in internal registers, in memory, or in input/output interface units. Externally, the CPU provides one or more buses for transferring instructions, data, and control information to and from the components connected to it.
In the generic computer introduced at the beginning of Chapter 1, the CPU appears only as part of the processor and is therefore hidden from view. But CPUs turn up in many other places: small, relatively simple computers called microcontrollers are used inside computers and other digital systems to perform limited or specialized tasks. For example, there is a microcontroller in the keyboard and in the monitor of an ordinary PC, although these components, too, are hidden. The CPU in such a microcontroller can be very different from the ones discussed in this chapter. Its word length may be shorter (say, 4 or 8 bits), it may have fewer registers, and its instruction set may be limited. Its performance is comparatively poor, but adequate for the task at hand. Most important, the cost of such a microcontroller is very low, making its use cost-effective.
In the pages that follow, we consider the CPUs of two computers: one a complex instruction set computer (CISC) and the other a reduced instruction set computer (RISC). After examining the designs in detail, we compare the performance of the two CPUs and briefly survey some methods used to improve it. Finally, we discuss design ideas that apply to digital system design in general.
1. The Design of the Two CPUs
As mentioned in the previous chapter, a typical CPU is usually divided into two parts: the datapath and the control unit. The datapath consists of functional units, registers, and internal buses that provide paths for transferring information among the functional units, memory, and other computer components. The datapath may or may not be pipelined. The control unit consists of a program counter, an instruction register, control logic, and possibly other hardwired or microprogrammed components. If the datapath is pipelined, the control unit is likely to be pipelined as well. The CPU of a computer, whether a complex instruction set computer (CISC) or a reduced instruction set computer (RISC), embodies its own instruction set architecture. The purpose of this chapter is to present two CPU designs that illustrate how an instruction set, a datapath, and a control unit are combined. The designs proceed top-down, reusing previously designed components, to show the influence of the instruction set architecture on the datapath and the control unit, and the influence of the datapath on the control unit. A toy software model of this datapath/control-unit split is sketched below.
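The sketch that follows is a hypothetical four-instruction machine invented purely for illustration, written in C# to match the rest of this compilation; it is not either of the CISC or RISC CPUs that the chapter goes on to design.

using System;

// Toy single-cycle CPU: the control-unit part fetches and decodes,
// while the register file and the ALU operations it steers play the role of the datapath.
public class ToyCpu
{
    private readonly int[] _reg = new int[4];   // register file (datapath)
    private readonly int[] _mem;                // word-addressed program/data memory
    private int _pc;                            // program counter (control unit)

    public ToyCpu(int[] program) { _mem = program; }

    public void Run()
    {
        while (true)
        {
            int ir = _mem[_pc++];                       // fetch into the "instruction register"
            int op = (ir >> 12) & 0xF;                  // decode: opcode, register fields, immediate
            int rd = (ir >> 8) & 0x3;
            int rs = (ir >> 6) & 0x3;
            int imm = ir & 0x3F;

            switch (op)                                  // "control signals" select the datapath action
            {
                case 0x0: _reg[rd] = imm; break;                // LDI rd, imm
                case 0x1: _reg[rd] += _reg[rs]; break;          // ADD rd, rs
                case 0x2: Console.WriteLine(_reg[rd]); break;   // OUT rd
                case 0xF: return;                               // HALT
                default: throw new InvalidOperationException("bad opcode " + op);
            }
        }
    }
}

// Example program: load 5 into r0, load 7 into r1, add r1 into r0, print 12, halt.
// new ToyCpu(new[] { 0x0005, 0x0107, 0x1040, 0x2000, 0xF000 }).Run();

Even at this scale, the influence the text mentions is visible: the instruction encoding dictates which fields the decoder extracts, and the decoder in turn dictates which register and ALU operations the datapath must provide.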
C# 编程语言概述外文文献翻译(含:英文原文及中文译文)文献出处:Barnett M. C# Programming Language Overview [J]Lecture Notes in Computer Science, 2016, 3(4):49-59.英文原文C# Programming Language OverviewBarnett M1. History of C, C++, C#The C# programming language is based on the spirit of the C and C++ programming languages. This account has powerful features and an easy-to-learn curve. It cannot be said that C# is the same as C and C++, but because C# is built on both, Microsoft has removed some features that have become more burdensome, such as pointers. This section looks at C and C++ and tracks their development in C#.The C programming language was originally defined in the UNIX operating system. In the past, we often wrote some UNIX applications, including a C compiler, and finally used to write UNIX itself. It is generally accepted that this academic competition extends to the world that contains this business. The original Windows API was defined to work with C using Windows code, and until now at least the core Windows operating system APIS maintains the C compiler.From a defined point of view, C lacks a single detail, like thelanguage Smalltalk does, and the concept of an object. Y ou will learn more about the contents of the object. In Chapter 8, "Write Object-Oriented Code," an object is collected as a data set and some operations are set. The code can be completed by C, but the concept of the object cannot be Forced to appear in this language. If you want to construct your code to make it like an object, that's fine. If you don't want to do this, C will really not mind. The object is not an intrinsic part. Many people in this language did not spend a lot of time in this program example. When the development of object-oriented perspectives began to gain acceptance, think about the code approach. C++ was developed to include this improvement. It is defined to be compatible with C (just as all C programs are also C++ programs and can be compiled by a C++ compiler) The main addition to the C++ language is to provide this new concept. C++ additionally provides a derivative of the class (object template) behavior.The C++ language is a modified version of the C language. Unfamiliar, infrequent languages such as VB, C, and C++ are very low-level and require a lot of coding to make your application run well. Reason and error checking. And C++ can be handled in some very powerful applications, the code works very smoothly. The goal is set to maintain compatibility with C. C++ cannot break the low-level features of C.Microsoft defined C# retains a lot of C and C++ statements. The code can also want to identify the code quickly. A big advantage for C# is that its designers did not make it compatible with C and C++. When this may seem like a wrong treatment, it is actually good news. C# eliminates something that makes C and C++ difficult to work with. Beginning with quirks and defects found in C. C# is starting a clean slate and does not have any compatibility requirements. So it can maintain the strengths of its predecessors and discard weaknesses that make C and C++ programs difficult to survive.2. Introduce C#C#, the new language introduced in the .NET system, is derived from C++. However, C# is a popular, object-oriented (from beginning to end) type-safe language.Language featuresThe following section provides a quick perspective on some of the features of the C# language. If some of them are unfamiliar to you, don't worry, everything will be explained in detail in the following sections. 
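The Class1 listing embedded in the paragraph above (the surrounding text calls it the Family class) appears to have lost its casing, line breaks, and return statement during extraction. Under that assumption, a compilable rendering of what it seems intended to show is:

// Reconstructed from the garbled snippet above: two public string fields
// and a method that concatenates them into a full name.
class Class1
{
    public string FirstName;
    public string LastName;

    public string FullName()
    {
        return FirstName + LastName;   // concatenation exactly as in the snippet; no separator added
    }
}

Note that the prose speaks of "two static fields", while the snippet itself declares ordinary instance fields; the reconstruction keeps the snippet's version.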
In C#, all code and data must be attached to a class. Y ou cannot define a variable outside the class, nor can you write any code that is not in the class. When a class object is created and run, the class is constructed. When the object of the class is released, the class is destroyed. The class provides single inheritance, and all the classes eventually get from thebase class is the object. Over time, C# provides versioned techniques to help with the formation of your classes to maintain code compatibility when you use code from your earlier classes.Let's look at an example of a class called Family. This class contains two static fields to hold the first and last names of family members. In the same way, there is a way to return the full name of a family member.Class Class1{Public string FirstName;Public string LastName;Public string FullName(){}Return FirstName + LastName;}Note: Single inheritance means that a C# class can only inherit from a base class.C# is a collection that you can package your class into a namespace called the namespace class. And you can help arrange collection of classes on logical aggregations. When you started learning C#, it was clear that all namespaces were related to .NET type systems. Microsoft also chose to include channels that assist in the compatibility of previouscode and APIs. These classes are also included in Microsoft's namespace.Type of dataC# lets you work with two types of data: value types and reference types. The value type holds the actual value. The reference type saves the actual value stored elsewhere in the memory. Raw data types, such as character, integer, float, enumeration, and structure types, are all value types. Objects and array types are treated as reference types. C# predefines reference types (objects and strings) New, Byte, Unsigned Short, Unsigned Integer, Unsigned Long, Float, Double-Float, Boolean, Character, and The value type and reference type of the decimal type will eventually be executed by a primitive type object. C# also allows you to convert a value or a type to another value or a type. Y ou can use an implicit conversion strategy or an explicit conversion strategy. Implicit conversion strategies are always successful and do not lose any information (for example, you can convert an integer to a long integer without losing any information because long integers are longer than integers) Some data is lost because long integers can hold more value than integers. Conversion occurs.Before and after referenceRefer to Chapter 3 "Working with V ariables" to find out more about explicit and implicit conversion strategies.Y ou can use single-dimensional and multidimensional arrays in C#at the same time. Multidimensional arrays can become a matrix. When this matrix has the same area size as a multidimensional array. Or jagged, when some arrays have different sizes.Classes and structures can have data members called attributes and fields. Y ou can define a structure called Employee. For example, there is a field called Name. If you define an Employee type variable called CurrenrEmployee, you can retrieve the employee's name by writing . What should happen after the code assignment? If the employee's name must be read by a database, for example, you can write a code "When some people ask for the value of the name attribute, read the name from the database and return the name with the string type".FunctionA function is a code that can be used at any time, code. 
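The behaviour described above (read the name from the database when someone asks for the Name value) is exactly what a C# property getter expresses. A minimal sketch follows; LoadNameFromDatabase() is a hypothetical stand-in for real data-access code, not an API from the text.

public class Employee
{
    private string _name;                        // cached backing field

    public string Name
    {
        get
        {
            if (_name == null)
                _name = LoadNameFromDatabase();  // fetch lazily on first read
            return _name;
        }
        set { _name = value; }
    }

    // Placeholder for the database query the text describes.
    private static string LoadNameFromDatabase() { return "name read from database"; }
}

// Usage: reading employee.Name triggers the query on the first read only.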
An example of a function will appear earlier than the FullName function, in this chapter, in the Family class. A function is usually combined with some code that returns information, and a method usually does not return information. However, for us, we generally attribute them to functions.The function can have four parameters:•The input parameters have values passed into the function, but the function cannot change their values.•The output parameters have no value when they are passed to thefunction, but the function can give them a value and pass the value back to its caller. ,•The reference parameter passes another value by reference. They have a value into the function, and this value can be changed in the function.•The parameter parameter defines an array variable in the list.C# and CLR work together to provide automatic storage management. Or "Leave enough space for this object to use" code like this. The CLR monitors your memory usage and automatically retrieves it when you need it.C# provides a large number of operators that allow you to write a large number of mathematical and bitwise expressions. Many (but not all) of them can be redefined, and you can change the job of these operators.C# provides a long list of reports that you can define through a variety of processing paths through your code. Through the report's operations, using keywords like switch, while, for, break, and continue enables your code to be split into different paths depending on the value of the variable.Classes can contain code and data. Visibility of each member to other objects. C# provides such accessible ranges as public, protected, internal, protected internal, and private.V ariableV ariables can be defined as constants. The constant has a fixed value and cannot be changed during the execution of your code. The value of PI, for example, is a good example of a constant because her value will not be changed while your code is running. The enumeration type defines a specific name for the constant. For example, you can define an enumerated type of planet using Mercury V in your code. If you use a variable to represent the planet, using the names of this enum type can make your code easier to read.C# provides an embedded mechanism to define and handle some events. If you write a class that performs a long operation, you may want to call an event. When the event ends, the client can sign this time and grab the event in their own code, he can let them be notified When you have completed this long budget, this event handling mechanism uses delegates in C#, a variable that references a function.Note: Event processing is a program in your code that determines what action will take place when a time occurs.For example, the user clicks on a button. If your class holds a value, write some code called a protractor that your class can be accessed as if it were an array. Suppose you write a class called Rainbow. For example, it contains a set of colors in this rainbow. Visitors may want some MYRainbow to retrieve the first color in the rainbow. Y ou can write an indexer in your Rainbow class to define what will be returned when thevisitor accesses your class as if it were an array of values.InterfaceC# provides an interface that aggregates properties, methods, and events that describe a set of functions. The class of C# can execute the interface. It tells the user through the interface a set of function files provided by this class. What existing code can have as few compatibility issues as possible. 
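The four parameter kinds listed above correspond directly to C# parameter syntax: plain (input), out, ref, and params. A short self-contained illustration:

using System;

public static class ParameterDemo
{
    // Input parameter: a copy comes in; changes inside do not reach the caller.
    static void Square(int x) { x *= x; }

    // Output parameters: nothing useful comes in; the method must assign them.
    static void Divide(int a, int b, out int quotient, out int remainder)
    {
        quotient = a / b;
        remainder = a % b;
    }

    // Reference parameter: the caller's own variable can be changed.
    static void Increment(ref int counter) { counter++; }

    // Params parameter: a variable-length argument list.
    static int Sum(params int[] values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }

    public static void Main()
    {
        int n = 9;
        Square(n);                              // n is still 9 afterwards
        Divide(17, 5, out int q, out int r);    // q = 3, r = 2
        Increment(ref n);                       // n is now 10
        Console.WriteLine(Sum(1, 2, 3) + q + r + n);   // 6 + 3 + 2 + 10 = 21
    }
}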
Once there was an interface exposed, it could not be changed, but it could evolve through inheritance. C# classes can perform many interfaces, even if the class can only inherit from a base class.Let's look at an example of a very clear rule in the real world of C# that helps illustrate the interface. Many applications use the additions provided today. There is the ability to read additional items when executed. To do this, this added item must follow some rules. DLL add items must display a function called CEEntry. And you must use CEd as the beginning of the DLL file name. When we run our code, it scans the directories of all the DLLs that are starting with CEd. When it finds one, it is read. Then it uses GetProcAddress to find the CEEntry function in the DLL. This proves that it is necessary for you to obey all the rules to establish an addition. This kind of creating a read addition is necessary because it carries more unnecessary code responsibility. If we use an interface in this example, your DLL additions can be applied to an interface. This ensures that all necessary methods, properties, and eventsappear in the DLL and are specified as files.AttributesThe attribute declares additional information about your class for the CLR. In the past, if you wanted to describe your classes yourself, you would have to use a few decentralized ways to store them in external files, such as IDL or event HTML files. Through your efforts, the property solves this problem. The developer has constrained some information in the class and any kind of information, for example, in the class, defines how it acts when it is used. The possibilities are endless, which is why Microsoft will contain a lot of predefined attributes in the .NET framework.Compile C#Running your C# code generates two important types of information through the C# compiler: code and metadata. The next section describes these two topics and completes a binary review built on .NET code, which is assembly.Microsoft Intermediate Language (MSIL)The code output by the C# compiler is written in an intermediate language called Microsoft. MSIL is your code that is used to construct a detailed set of instructions to guide you on how to perform. It contains instructions for operations, such as initialization of variables, methods for evoking objects, error handling, and declaring something new. C# is notjust a language from the MSIL source code that changes during the writing process. All .NET-compatible languages, including and C++ management, generate MSIL when their source code is compiled. All .NET languages use the same runtime, so code from different languages and different compilers can easily work together.For physical CPUs, MISL is not a set of explicit instructions. It doesn't know anything about your machine's CPU, and your machine doesn't know anything about MSIL. Then, when your CPU can't read MSIL, explain the code. This sinking is called just enough to write, or JIT. The job of the JIT compiler is to translate your universal MSIL code to the machine so that the CPU can execute your code.Y ou may want to know what an extra step is in the process. When a compiler can immediately generate CPU-interpreted code for why MSIL was generated, the compiler does this later. There are many reasons for this. First, MSIL makes it easier for you to write code as it moves to a different piece of hardware. Suppose you have written some C# code and you want it to run on your desktop and handheld devices at the same time. 
It is very likely that these two devices have different CPUs. If you only have one C# compiler whose goal is a clear CPU, then you need two C# compilers: one with the desktop CPU and the other with the handheld device CPU. Y ou have to compile your code twice to ensure that your correct code is used on the right device. With MSIL, you only write once.The .NET Framework is installed on your desktop and it contains a JIT compiler that translates your MSIL-specific CPU code to your machine. The .NET Framework is installed on your handheld device and it contains a JIT compiler that translates the same MSIL-specific CPU-specific code to your handheld device. To run MSIL code base on any device that has a .NET JIT compiler. Y ou now have only one MSIL basic code that can run on any device that has a .NET JIT compiler. The JIT compiler on these devices can take care of your code and make them run smoothly.Another reason why the compiler uses MSIL is that the settings of the instruction can be easily read by an authenticated proximity. Part of the compiler's job is to verify your code to make it as clear as possible. When properly accessed, these checks ensure that your code does not execute any instructions that can cause your code to crash. The definition of MSIL directives makes this check process easier to understand. CPU-specific instruction settings are optimized for fast code execution. However, they make the code difficult to read and therefore difficult to check. Having a C# compiler that can output CPU-specific code at once can make code inspection difficult or even impossible. Allow the .NET Framework's JIT compiler to verify your code to ensure that your code accesses memory through a buggy path and that the variable types are used correctly.MetadataThe assembly process outputs the same amount of metadata. This is a very important part of the .NET code sharing story. Whether you use C# to build a client application or use C# to build a library that some people use for your application, you will want to take advantage of some compiled .NET code. That code may have been provided by Microsoft as part of the .NET framework, or it may be provided by some online users. The key to using a foreign code is to let the C# compiler know that the class and that variable are in another base code so that it can be found in the precompilation of your work and match the code you write with the source code.Look at the metadata for the directory for your compiled code. The number of bits of source code compiled by C# exists in the compiled code along with the generation of MSIL. The types of methods and variables are completely described in the metadata and are ready to be read by other applications. For example, can read metadata from a .NET library to provide intelligent sensing of all the methods that can be used effectively for a particular class.If you have already worked with COM, you may be familiar with type libraries. The goal of the type library is to provide the same directory functionality to COM objects. However, the type library is provided from a few limitations, and in fact not all data about the target can be put into the type library. Metadata in .NET does not have this disadvantage. Allthe code used to describe the class's information is placed in the metadata.memberSometimes you need to use C# to build a terminal application. These applications are packaged into an executable file and use .EXE as an extension. C# completely supports the creation of .EXE files. 
However, there are also times when you do not want to be used in other programs. Y ou may want to create some useful C# classes, such as a developer who wants to use your class in a application. In this case, you will not create an application, instead you will build a component. A component is a metadata package. As a unit to configure, these classes will share the same level of version control, security information, and dynamic requirements. Think of a component as a logical DLL. If you are familiar with Microsoft's translation services or COM+, then you can think of components as equivalent to .NET packages.There are two kinds of components: private components and global components. When you build your own component, you don't need to specify whether you want to create a global component or a private component. Y ou can only make your code accessible by a separate application. Y our component is a package similar to a DLL and is installed into the same directory when your application runs it. The application is only executable when it is in the same directory as yourcomponent.If you want to share your code, more global components in more applications. Global components can be used by any system's .NET application regardless of the directory in which it is installed. Microsoft installs components as part of the .NET structure, and each Microsoft component is installed as a global component. The Microsoft Architecture SDK contains the public functionality to install and remove artifacts from global widget storage.C# can be viewed to some extent as a programming language for the .NET Windows-oriented environment. In the past ten years, although VB and C++ have finally become very powerful languages, some of the content has come. For Visual Basic, its main advantage is that it is easy to understand. Many programming tasks are easy to accomplish and basically hide the connotations of the Windows API and the COM component structure. The downside is that Visual Basic has never implemented an early version of object-oriented, real-world (BASIC is primarily intended to make beginners easier to understand than to write large commercial applications), so it cannot really be structured or object-oriented. Programming language.On the other hand, C++ has its own root in the ANSI C++ language definition. It is not fully compatible with ANSI because Microsoft wrote the C++ compiler before the ANSI definition was standardized, but it isalready quite close. Unfortunately, this leads to two problems. First, ANSI C++ was developed under technical conditions more than a decade ago, so it does not support current concepts (such as Unicode strings and generating XML documents), and some of the older grammatical structures were designed for previous compilers ( For example, the declaration and definition of member functions are separate.) Second, Microsoft also tried to evolve C++ into a language for performing high-performance tasks on Windows - avoiding the addition of large numbers of Microsoft-specific keywords and libraries in the language. The result is that in Windows, the language becomes a very messy language. Let a C++ developer talk about how many strings are defined in this way: char*, LPTSTR, (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.Now entering the .NET era - a new environment, it has made new extensions to both languages. Microsoft added many Microsoft-specific keywords to C++ and evolved VB to , retaining some basic VB syntax, but it is completely different in design. 
From a practical application perspective, is a New language. Here, Visua l C# .NET. Microsoft describes C# as a simple, modern, object-oriented, type-safe, and C and C++-derived programming language. Most in dependent commentators are “derived from C, C++, and Java” from their claims. C# is very similar to C++ and Java. It uses parentheses ({})to mark blocks of code, and semicolons separate lines of statements. The first impression of C# code is that it is very similar to C++ or Java code. But after these seeming similarities, C# is much easier to learn than C++ but harder than Java. Its design and modern development tools are more adaptable than other languages. It also has Visua Basic's ease of use, high performance, and low memory accessibility of C++. C# includes the following features:●Full support for class and object-oriented programming, including interface and inheritance, virtual functions, and operator overloading.●Define a complete, consistent set of basic types.●Built-in support for automatically generating XML document descriptions.●Automatically clean dynamically allocated memory.●Classes or methods can be marked with user-defined properties. This can be used for documentation purposes and has a certain impact on compilation (for example, marking a method to compile only when debugging).●Full access to the .NET base class library and easy access to the Windows API.●Y ou can use pointers and direct memory access, but the C# language can access memory without them.●Supports attributes and events in VB style.●Changing compiler options, ActiveX controls (COM components) are called by other code in the same way. ●C# can be used to write dynamic Web pages and XML Web services.It should be noted that for most of these features, and Managed C++ are also available. But since C# used .NET from the beginning, support for .NET features was not only complete, but also provided a more suitable syntax than other languages. The C# language itself is very similar to Java, but there are some improvements because Java is not designed for use in a .NET environment. Before ending this topic, we must also point out two limitations of C#. One is that the language is not suitable for writing time-critical or very high-performance codes, such as a loop that runs 1000 or 1050 times, and immediately clears the resources they occupy when they are not needed. In this regard, C++ may still be the best of all low-level languages. The second is that C# lacks the key functions needed for very high-performance applications. The parcels guarantee inlining and destructor functions in specific areas of the code. However, such applications are very few.中文译文C# 编程语言概述作者:Barnett M1. C,C++,C#的历史C#程序语言是建立在C 和C++程序语言的精神上的。
计算机 C 语言专业外文翻译C A History of C C and CThe C programming language was created in the spirit of the C and C programminglanguages. This accounts for its powerful features and easy learning curve. The same cant besaid for C and C but because C was created from the ground up Microsoft took theliberty of removing some of the more burdensome features —such aspointers. This sectiontakes a look at the C and C languages tracing their evolution into C.The C programming language was originally designed for use on the UNIX operating system.C was used to create many UNIX applications including a C compiler and was eventuallyusedto write UNIX itself. Its widespread acceptance in the academic arena expanded toinclude the commercial world and software vendors such as Microsoft and Borland releasedC compilers for personal computers. The original Windows API was designed to work withWindows code written in C and the latest set of the core Windows operating system APIsremain compatible with C to this day.From a design standpoint C lacked a detail that other languages such as Smalltalk had alreadyembraced: the concept of an object. Youll learn more about objects in Chapter 8 quot WritingObject- Oriented Code.quot For now think of an object as a collection of data and a set ofoperations that can be performed onthat data. Object-style coding could be accomplishedusing C but the notion of an object was not enforced by the language. If you wantedtostructure your code to resemble an object fine. If you didnt fine. C really didnt care.Objectswerent an inherent part of the language so many people didnt pay much attention to thisprogramming paradigm.After the notion of object- oriented development began to gain acceptance it became clear thatC needed to be refined to embrace this new way of thinking about code. C was created toembody this refinement. It was designed to be backwardly compatible with C such that all Cprograms would also be C programs and could be compiled with a C compiler. Themajor addition to the C language was support for this new object concept. The Clanguage added support for classes which are quottemplatesquot of objects and enabled an entiregeneration of C programmers to think in terms of objects and their behavior.The C language is an improvement over C but it still has some disadvantages. C and Ccan be hard to get a handle on. Unlike easy-to-use languages like Visual Basic C and C arevery quotlow levelquot and require you to do a lot of coding to make your application run well. Youhave to write your own code to handle issues such as memory management and errorchecking. C and C can result in very powerful applications but you need to ensure thatyour code works well. One bug can make the entire application crash or behave unexpectedly.Because of the C design goal of retaining backward compatibility with C C was unableto break away from the low level nature of C.Microsoft designed C to retain much of the syntax of C and C. Developers who arefamiliar with those languages can pick up C codeand begin coding relatively quickly. Thebig advantage to C however is that its designers chose not to make it backwardlycompatible with C and C. While this may seem like a bad deal its actually good news. Celiminates the things that makes C and C difficult to work with. Because all C code is alsoC code C had to retain all of the original quirks and deficiencies found in C. 
C isstarting with a clean slate and without any compatibility requirements so it can retain thestrengths of its predecessors and discard the weaknesses that made life hard for C and Cprogrammers.Introducing CC the new language introduced in the .NET Framework is derived from C. However Cis a modern objected-oriented from the ground up type-safenguage featuresThe following sections take a quick look at some of the features of the C language. If someof these concepts dont sound familiar to you dontworry. All of them are covered in detail inlaterchapters.ClassesAll code and data in C must be enclosed in a class. You cant define a variable outside of aclass and you cant write any code thats not in a class. Classes can have constructors whichexecute when an object of the class is created and a destructor which executes when anobject of the class is destroyed. Classes support single inheritance and all classes ultimatelyderive from a base class called object. C supports versioning techniques to help your classesevolve over time while maintaining compatibility with code that uses earlier versions of yourclasses.As an example take a look at a class calledFamily. This class contains the two static fieldsthat hold the first and last name of a family member as well as a method that returns the fullname of the family member.class Class1public string FirstNamepublic string LastNamepublic stringFullNamereturn FirstName LastNameNote Single inheritance means that a C class can inherit from only one base class.C enables you to group your classes into a collection of classes called aspaces have names and can help organize collections of classes into logical groupings.As you begin to learn C it becomes apparent that all namespaces relevant to the .NETFramework begin with System. Microsoft has also chosen to include some classes that aid inbackwards compatibility and API access. These classes are contained within the Microsoftnamespace.Data typesC lets you work with two types of data: value types and reference types. Value types holdactual values. Reference types hold references to values stored elsewhere in memory.Primitive types such as char int and float as well as enumerated values and structures arevalue types. Reference types hold variables that deal with objects and arrays. C comes withpredefined reference types object and string as well as predefined value types sbyte shortint long byte ushort uint ulong float double bool char and decimal. You can also defineyour own value and reference types in your code. All value and reference types ultimatelyderive from a base type called object.C allows you to convert a value of one type into a value of another type. You can work withboth implicit conversions andexplicit conversions. Implicit conversions always succeed anddont lose any information for example you can convert an int to a long without losing anydata because a long is larger than an int. Explicit conversions may cause you to lose data forexample converting a long into an int may result in a loss of data because a long can holdlarger values than an int. 
You must write a cast operator into your code to make an explicitconversion happen.Cross- ReferenceRefer to Chapter 3 quotWorking with Variablesquot for more informationabout implicit and explicit conversions.You can work with both one-dimensional and multidimensional arrays in C.Multidimensional arrays can be rectangular in which each of the arrays has the samedimensions or jagged in which each of the arrays has different dimensions.Classes and structures can have data members called properties and fields. Fields arevariables that are associated with the enclosing class or structure. You may define a structurecalled Employee for example that has a field called Name. If you define a variable of typeEmployee called CurrentEmployee you can retrieve the employees name by . Properties are like fields but enable you to write code to specifywhat should happen when code accesses the value. If the employees name must be read froma database for example you can write code that says quotwhen someone asks for the value ofthe Name property read the name from the database and return the name as a string.quotFunctionsA function is a callable piece of code that may or may not return a value to the code thatoriginally called it. Anexample of a function would be the FullName function shown earlierin this chapter in theFamily class. A function is generally associated to pieces of code thatreturn information whereas a method generally does not return information. For our purposeshowever we generalize and refer to them both as functions.Functions can have four kinds of parameters: Input parameters have values that are sent into the function but thefunction cannotchange those values. Output parameters have no value when they are sent into the function but the functioncan give them a value and send the value back to the caller. Reference parameters pass in a reference to another value. They have a value comingin to the function and that value can be changed inside the function. Params parameters define a variable number of arguments in a list.C and the CLR work together to provide automatic memory management. You dont need towrite code that says quotallocate enough memory for an integerquot or quotfree the memory that thisobject was using.quot The CLR monitors your memory usage and automatically retrieves morewhen you need it. It also frees memory automatically when it detects that it is no longer beingused this is also known as Garbage Collection.C provides a variety of operators that enable you to write mathematical and bitwiseexpressions. Many but not all of these operators can be redefined enabling you to changehow the operators work.C supports a long list of statements that enable you to define various execution paths withinyour code. Flow control statements that use keywords suchas if switch while for break andcontinue enable your code to branchoff into different paths depending on the values ofyourvariables.Classes can contain code and data. Each class member has something called an accessibilityscope which defines the members visibility to other objects. C supports public protectedinternal protected internal and private accessibility scopes.VariablesVariables can be defined as constants. Constants have values that cannot change during theexecution of your code. The value of pi for instance is a good example of a constant becauseits value wont be changing as your code runs. Enum type declarations specify a type namefor a related group of constants. 
For example you could define an enum of Planets withvalues of Mercury Venus Earth Mars Jupiter Saturn Uranus Neptune and Pluto and usethose names in your code. Using the enum names in code makes code more readable than ifyou used a number to represent each planet.C provides a built-in mechanism for defining and handling events. If you write a class thatperforms a lengthy operation you may want to invoke an event when the operation iscompleted. Clients can subscribe to that event and catch the event in their code which enablesthem to be notified when you have completed your lengthy operation. The event handlingmechanism in C uses delegates which are variables that reference a function.Note An event handler is a procedure in your code that determines the actions to beperformed when an event occurs such as the user clicking a button.If your class holds a set of values clients may want to access the values as if your classwerean array. You can write a piece of code called an indexer to enable your class to be accessedas if it were an array. Suppose you write a class called Rainbow for example that contains aset of the colors in the rainbow. Callers may want to write MyRainbow0 toretrieve the firstcolor in the rainbow. You can write an indexer into your Rainbow class to define what shouldbe returned when the caller accesses your class as if it were an array ofvalues.InterfacesC supports interfaces which are groups of properties methods and events that specify a setof functionality. C classes can implement interfaces which tells users that the class supportsthe set of functionality documented by the interface. You can develop implementations ofinterfaces without interfering with any existing code which minimizescompatibilityproblems. Once an interface has been published it cannot be changed but it can evolvethrough inheritance. C classes can implement many interfaces although the classes can onlyinherit from a single base class.Lets look at a real-world example that would benefit from interfaces to illustrate its extremelypositive role in C. Many applications available today support add-ins. Assume that you havecreated a code editor for writing applications. This code editor when executed has thecapability to load add-ins. To do this the add-in must follow a few rules. The DLL add-inmust export a function called CEEntry and the name of the DLL must begin with CEd. Whenwe run our code editor it scans its working directory for all DLLs that beginwith CEd. Whenit finds one it is loaded and then it uses the GetProcAddress to locate the CEEntry functionwithin the DLL thus verifying that you followed all the rules necessary to create an add-in.This method of creating and loading add-ins is very burdensome because it burdens the codeeditor with more verification duties than necessary. If an interface were used in this instanceyour add-in DLL could have implemented an interface thus guaranteeing that all necessarymethods properties andevents were present with the DLL itself and functioning asdocumentation specified.AttributesAttributes declare additional information about your class to the CLR. In the past if youwanted to make your class selfdescribing you had to take a disconnected approach in whichthe documentation was stored in external files such as IDL or even HTML files. Attributessolve this problem by enabling you the developer to bind information to classes —any kindof information. 
For example youcan use anattribute to embed documentation informationinto a class.Attributes can also be used to bind runtime information to a class defining how itshould act when used. The possibilities are endless which is why Microsoft includes manypredefined attributes withinthe .NET piling CRunning your C code through the C compiler produces two important pieces ofinformation: code and metadata. The following sections describe these two items andthenfinish up by examining the binary building block of .NET code: the assembly.Microsoft Intermediate Language MSILThe code that is output by the C compiler is written in a language called MicrosoftIntermediate Language or MSIL. MSIL is made up of a specific set of instructions thatspecify how your code should be executed. It contains instructions for operations such asvariable initialization calling object methods and error handling just to name a few. C isnot the only language in which source code changes into MSIL during the compilationprocess.All .NET-compatible languages including Visual Basic .NET and Managed Cproduce MSIL when their source code is compiled. Because all of the .NET languagescompile to the same MSIL instruction set and because all of the .NET languages use the sameruntime code from different languages and different compilers can work togethereasily.MSIL is not a specific instruction set for a physical CPU. It knows nothing about the CPU inyour machine and your machine knows nothing about MSIL. How then does your .NETcode run at all if your CPU cant read MSIL The answer is that the MSIL code is turned intoCPU-specific code when the code is run for the first time. This process is called quotjust-in- timequotcompilation or JIT. The job of a JIT compiler is to translate your generic MSIL code intomachine code that can be executed by your CPU.You may be wondering about what seems like an extra step in the process. Why generateMSIL when a compiler could generate CPU-specific code directly After all compilers havealwaysdone this in the past. There are a couple of reasons for this. First MSIL enables yourcompiled code to be easily moved to different hardware. Suppose youvewritten some Ccode and youd like it to run on both your desktop and a handheld device.Its very likely thatthose two devices have differ.。
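Both copies of the C# overview above describe the event and delegate mechanism in prose only. The following is a minimal sketch of that mechanism; LengthyOperation and its Completed event are invented names, not types from the text.

using System;

public class LengthyOperation
{
    public event EventHandler Completed;            // clients subscribe to this

    public void Run()
    {
        // ... the long-running work itself would go here ...
        if (Completed != null)
            Completed(this, EventArgs.Empty);       // notify every subscriber
    }
}

public static class Client
{
    public static void Main()
    {
        var op = new LengthyOperation();
        op.Completed += (sender, e) => Console.WriteLine("Operation finished");
        op.Run();                                    // prints "Operation finished"
    }
}

The += line is the "subscription" the text talks about, and EventHandler is the built-in delegate type that references the subscriber's function.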
Huaiyin Institute of Technology - Graduation Project (Thesis) Foreign Literature Translation
School: Jianghuai College; Major: Electronic Information Engineering
Source: International Conference on Electrical and Control Engineering
Attachments: 1. Translated text; 2. Original text.
Attachment 1: Translated Text
Design of a Temperature Control System
Abstract: This paper studies the principle and functions of a temperature control system based on the AT89S51 microcontroller. The temperature measurement unit is built around the DS18B20 single-bus digital temperature sensor. The system supports temperature setting, time display, and storage of the monitored data. If the temperature exceeds the user-set upper or lower limit, the system raises an alarm and applies automatic control, achieving intelligent temperature monitoring within a given range. Based on the same principle, the system can easily be adapted to various other nonlinear control tasks with reasonable changes to the software. Field use has shown the system to be accurate, reliable, and satisfactory.
Keywords: microcontroller; temperature; temperature control

I. Introduction
Temperature is a very important parameter in human life. In modern society, temperature control (TC) is used not only in industrial production but also widely in other fields. As living standards improve, temperature control equipment can be found in hotels, factories, and homes, and this trend will serve society as a whole ever better, so measuring and controlling temperature is of great significance. Built around the AT89S51 microcontroller and the DS18B20 temperature sensor, the system controls the ambient temperature intelligently. The temperature can be set arbitrarily within a certain range. The system displays the time on an LCD, saves the monitored data, and controls the temperature automatically whenever the ambient temperature goes beyond the upper or lower limit; this limit-check logic is sketched in code below.
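The check described above (compare the measured temperature with the user-set limits, then alarm and drive the outputs) runs once per sample. The sketch below is written in C# to match the rest of this compilation rather than the 8051 C that an AT89S51 implementation would actually use, and the alarm and actuator calls are hypothetical placeholders for the hardware drivers and the DS18B20 read routine.

public class TemperatureController
{
    public double UpperLimit { get; set; } = 30.0;   // user-set limits, in degrees Celsius
    public double LowerLimit { get; set; } = 20.0;

    // Called once per measurement cycle with the latest DS18B20 reading.
    public void Step(double measuredC)
    {
        if (measuredC > UpperLimit)
        {
            SoundAlarm();
            SetCooling(true);        // too hot: alarm and cool
            SetHeating(false);
        }
        else if (measuredC < LowerLimit)
        {
            SoundAlarm();
            SetHeating(true);        // too cold: alarm and heat
            SetCooling(false);
        }
        else
        {
            SetHeating(false);       // within range: outputs off, no alarm
            SetCooling(false);
        }
    }

    // Placeholders for the real alarm and actuator drivers.
    private void SoundAlarm() { }
    private void SetHeating(bool on) { }
    private void SetCooling(bool on) { }
}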
外文原文From one code base to many platforms using Visual C++Multiple-platform development is a hot issue today. Developers want to be able to support diverse platforms such as the Microsoft® Windows® version 3.x, Microsoft Windows NT®, and Microsoft Windows 95 operating systems, and Apple®, Macintosh®, UNIX, and RISC (reduced instruction set computer) machines. Until recently, developers wanting to build versions of their application for more than one platform had few choices: •Maintain separate code bases for each platform, written to the platform's native application programming interface (API).•Write to a "virtual API" such as those provided by cross-platform toolsets.•Build their own multiple-platform layer and support it.Today, however, a new choice exists. Developers can use their existing code written to the Windows API and, using tools available from Microsoft and third parties, recompile for all of the platforms listed above. This paper looks at the methods and some of the issues involved in doing so.Currently the most lucrative market for graphical user interface (GUI) applications, after Microsoft Windows, is the Apple Macintosh. However, vast differences separate these wholly different operating systems, requiring developers to learn new APIs, programming paradigms, and tools. Generally, Macintosh development requires a separate code base from the Windows sources, increasing the complexity of maintenance and enhancement.Because porting code from Windows to the Macintosh can be the most difficult porting case, this paper concentrates in this area. In general, if your code base is sufficiently portable to enable straightforward recompiling for the Macintosh (excluding any platform-specific, or "edge" code, you may elect to include), you'll find that it will come up on other platforms easily as well.Microsoft Visual C++® Cross-Development Edition for Macintosh (Visual C++ for Mac™) provides a set of Windows NT– or Windows 95–hosted tools for recompiling your Windows code for the Motorola 680x0 and PowerPC processors, and a portability library that implements Windows on the Macintosh. This allows you to develop GUI applications with a single source code base (written to the Win32® API) and implement it on Microsoft Windows or Apple Macintosh platforms.Figure 1, below, illustrates how Visual C++ for Mac works. Your source code is edited, compiled, and linked on a Windows NT–or Windows 95–based (Intel) host machine. Thetools create 68000 and PowerPC native code and Macintosh resources. An Ethernet-based or serial transport layer (TL) moves the resulting binaries to a Macintosh target machine running remotely. The Macintosh application is started on the Macintosh and debugged remotely from the Windows-based machine.Now that Apple has two different Macintosh architectures to contend with (Motorola 680x0 and PowerPC) portability is particularly important.Porting can involve several steps, depending on whether you are working with old 16-bit applications or with new 32-bit sources. In general, the steps to a Macintosh port are as follows:1.Make your application more portable by following some general portability guidelines.This will help insure not only portability to the 680x0-based Macintosh machines, but also to the newer, more powerful PowerPC machines that are based on a RISC chip.2.Port your application from Windows 16-bit code to 32-bit code. 
This may be the mostcomplex and time-consuming part of the job.3.Segregate those parts of your application that are unique to Windows from similarimplementations that are specific to the Macintosh. This may involve using conditional compilation or it may involve changing the source tree for your project.4.Port your Win32 API code to the Macintosh by using the portability library for theMacintosh and Visual C++ for compiling, linking, and debugging.e the Microsoft Foundation Class Library (MFC) version 4.0 to implement newfunctionality such as OLE 2.0 containers, servers, and clients or database support using open database connectivity (ODBC). Code written using MFC is highly portable to the Macintosh.6.Write Macintosh-specific code to take advantage of unique Macintosh features, suchas Apple Events or Publish and Subscribe.The chief challenge among the families of Windows operating systems is the break from 16 bits (Windows 3.11 and Windows for Workgroups 3.11 operating system with integrated networking) to 32 bits (Windows NT and Windows 95). In general, 16-bit and 32-bit code bases are somewhat incompatible, unless they are written using MFC. Developers have the choice of branching their sources into two trees, or migrating everything to 32 bits. Once the Win32 choice has been made, how are legacy platforms to be run (that is, machines still running Windows 3.11)? The obvious choice is to use the Win32s® API libraries, which thunk 32-bit calls down to their 16-bit counterparts.Developers who want their applications to be able to take advantage of the hot new RISC hardware, such as DEC Alpha AXP machines, can use the special multiple platform editionsof Visual C++. These include versions for the MIPS R4000 series of processors as well as the aforementioned DEC Alpha AXP chip and the Motorola Power PC. These toolsets run under Windows NT 3.51 and create highly optimized native Win32 applications for DEC Alpha and Motorola PowerPC platforms.Developers who have recompiled their Win32 sources using these toolsets are amazed at how simple it is. Since the operating system is identical on all platforms, and the tools are identical, little work has to be done in order to achieve a port. The key difference in the RISC machines from Intel is the existence of a native 64-bit integer, which is far more efficient than on 32-bit (that is, Intel) processors.Microsoft works closely with two third-party UNIX tools providers, Bristol Technology and Mainsoft Corporation, to allow developers to recompile their Win32-based or MFC-based applications for UNIX. Developers seeking additional information should contact those companies directly.You'll have to decide early on whether to write to the native API (Win32) or to MFC. In general you'll find MFC applications will port more quickly than Win32 applications. This is because one of the intrinsic benefits of an application framework is an abstraction of the code away from the native operating system to some extent. This abstraction is like an insurance policy for you. However, developers frequently have questions about MFC, such as: •What if I need an operating system service that isn't part of the framework?Call the Win32 API directly. MFC never prevents you from calling any function in the Win32 API directly. Just precede your function call with the global scope operator (::).•I don't know C++. Can I still use MFC?Sure. 
MFC is based on C++, but you can mix C and C++ code seamlessly.•How can I get started using MFC?Start by taking some classes and/or reading some books. Visual C++ ships with a fine tutorial on MFC (Scribble). Then, check out the MFC Migration Kit (available on CompuServe or, for a modest shipping and handling fee, from Microsoft). It will help you migrate your C-based application code to MFC and C++.All porting will be easier if you begin today writing more portable programs. Following some basic portability guidelines will make your code less platform-specific.Never assume anything. Particularly, don't make assumptions about the sizes of types, the state of the machine at any time, byte ordering, or alignment.Don't assume the size of primitive types, because these have different sizes on different processors. For example, an int is two bytes in Win16 and four bytes in Win32. At all costs,avoid code that relies on the size of a type. Use sizeof() instead. To determine the offset of a field in a structure, use the offsetof() macro. Don't try to compute this manually.Use programmatic interfaces to access all system or hidden "objects," for example, the stack or heap.Parsing data types to extract individual bytes or even bits can cause problems when porting from Windows to the Macintosh unless you are careful to write code that doesn't assume any particular byte order. LIMITS.H contains constants that can be used to help write platform-independent macros to access individual bytes in a word.This may seem obvious, because nothing could be less portable than assembly language. Compilers, such as Microsoft Visual C++, that provide inline assemblers make it easy to slip in a little assembler code to speed things up. If you want portable code, however, avoid this temptation. It may not be necessary. Modern compilers can often generate code as good as hand-tuned native assembler code. Our own research at Microsoft indicates that performance problems are more often the result of poor algorithms than they are of poor code generation. Indeed, with RISC machines, hand-turned native assembler code may actually be worse than machine-generated code, due to the complexity of instruction scheduling and picking register usage.Write all routines in C first; then, if you absolutely need to rewrite one in assembler, be sure to leave both implementations in your sources, controlled by conditional compiles, and keep both up to date.A major goal of American National Standards Institute (ANSI) C/C++ is to provide a portable implementation of the language. Theoretically, code written to strict ANSI C compliance is completely portable to any compiler that implements the standard correctly. Microsoft Visual C++ provides a compiler option (/Za) to enable strict ANSI compatibility checking.Microsoft Visual C++ provides some language features that are in addition to ANSI C, such as four-character constants and single-line comments. Programs that use the Microsoft C extensions should be portable to all other implementations of Microsoft Visual C++. Thus, you can write programs that use four-character constants, for example, and know that your program is portable to any 16-bit or 32-bit Microsoft Windows platform or to the Macintosh.Compilers normally align structures based on the target machine architecture; some RISC machines, such as the MIPS R4000, are particularly sensitive to alignment. Alignment faults may generate run-time errors or, instead, may silently and seriously degrade the performance of your application. 
For portability, therefore, avoid packing structures. Limitpacking to hardware interfaces and to compatibility issues such as file formats and on-disk structures.Using function prototypes is mandatory for fully portable code. All functions should be prototyped, and the prototype should exactly match the actual function declaration.Following the guidelines above will make your code a lot more portable. However, if you have 16-bit Windows code, your first step is to make it work properly under Win32. This will require additional changes in your sources.Code written for Win32 can run on any version of Windows, including on the Macintosh, using the portability library. Portable code should compile and execute properly on any platform. Of course, if you use APIs that only function under Windows NT, they will not work when your application runs under Windows 3.x. For example, threads work under Windows NT but not under Windows 3.11. Those types of functionality differences will have to be accounted for in the design of your application.Chief among the differences between Win16 and Win32 is linear addressing. That means pointers are now 32 bits wide and the keywords near and far are no longer supported. It also means code that assumes segmented memory will break under Win32.In addition to pointers, handles and graphic coordinates are now 32 bits. WINDOWS.H will resolve many of these size differences for you, but some work is still necessary.The recommended strategy to get your application running under Win32 is to recompile for 32 bits, noting error messages and warnings. Next, replace complex procedures and assembly language routines with stub procedures. Then, make your main program work properly using the techniques above. Finally, replace each stubbed-out procedure with a portable version.After you successfully convert your Windows-based program from 16 bits to 32 bits, you're ready to embark on porting it to the Macintosh. Because significant differences exist between the two platforms, this task can appear daunting. Before you can begin to port your application, you need to better understand these differences. The Macintosh is differentiated from Windows in three general areas:•Programming model differences•Processor differences•User interface (UI) differencesThese areas of difference are described below. Porting issues that accompany these differences are discussed in the section titled "Porting from Win32 to the Macintosh."The Windows and Macintosh APIs are completely different. For example:•The event models are different. In Windows, you dispatch messages to WindowProcs.You use DefWindowProc to handle messages in which you're not specifically interested. On the Macintosh, you have a big main event loop to handle all possible events.•Windows uses the concept of child windows. The Macintosh uses no child windows.•Windows-based applications can draw using either pens or brushes. Macintosh applications can use only pens.•Controls in Windows are built-in window classes. On the Macintosh, controls are unrelated to windows.•Windows allows for 256 binary raster operations; the Macintosh allows for only 16.Because of the differences between the two platforms, porting a Windows-based application to the Macintosh can be monumental task without powerful tools.Windows has always run on Intel x86 processors (until Windows NT), and the Macintosh has run on Motorola 680x0 processors (of course, the PowerPC-based Macintosh is now available as well). 
Differences between the processor families include addressing and byte ordering, in addition to the more expected differences like opcodes, instruction sets, and the name and number of registers.The Intel 8086 processor, from which subsequent 80x86 processors are descended, used 16-bit addresses, which unfortunately allowed only 65,536 bytes of memory to be addressed. To allow the use of more memory, Intel implemented a segmented memory architecture to address one megabyte (2^20 bytes) of memory that used an unsigned 16-bit segment register and an unsigned 16-bit offset. This original Intel scheme has been extended to allow much larger amounts of memory to be addressed, but most existing Intel-based programming relies on separating code and data into 64K segments.Although all Intel x86 processors since the 80386 have used 32-bit addressing, for compatibility reasons Microsoft Windows 3.x is actually a 16-bit application, and all Microsoft Windows-based applications had to be written as 16-bit applications. That meant, for example, that most pointers and handles were 16 bits wide. With the advent of Microsoft Windows NT, which is a true 32-bit operating system, all native applications are 32-bit applications, which means that pointers and handles are 32 bits wide. Because Windows NT uses linear addressing, programs can share up to 4 gigabytes of memory.In contrast, the Motorola 68000 and PowerPC processor have always provided the ability to address a "flat" 32-bit memory space. In theory, a flat memory space of this kind simplifies memory addressing. In practice, because 4-byte addresses are too large to use all the time, Macintosh code is generally divided into segments no larger than 32K.Microsoft Windows and Windows NT run only on so-called "little-endian" machines—processors that place the least significant byte first and the most significant byte last. In contrast, the Motorola 680x0 and PowerPC (a so-called "big-endian" architecture) place the most significant byte first, followed by the next most significant byte, and so on, with the least significant byte last.Compilers normally handle all details of byte ordering for your application program. Nevertheless, well-written portable code should never depend on the order of bytes.Microsoft Windows and the Macintosh present quite different user interfaces in many key areas, including menus, filenames, and multiple-document interface (MDI) applications.Only one menu bar exists on the Macintosh, and it is always in the same place, regardless of the number or arrangement of windows on the screen. The "active window" contains the menu, which dynamically changes as necessary when different windows are made active. Windows, on the other hand, gives each top-level window its own menu. In addition, under MDI, each child window can also have its own menu. MDI is discussed in greater detail below.Macintosh applications generally have an "Apple menu" (the leftmost menu) that contains all the installed Desk Accessories and usually contains an About entry for the application. Under System 7, the extreme right side of the Macintosh menu contains an icon for Apple's Balloon Help and the Application menu for switching between applications.Windows-based applications always have a System menu at the upper-left corner of their top-level window. 
This menu contains system-level functions for sizing, moving, and closing the window, as well as an item that calls the Task Manager for switching applications.

Generally, Windows-based applications contain keyboard equivalents in their menus. These are underlined letters in each menu entry that the user can select with the keyboard in lieu of the mouse. This, however, is convention rather than requirement. Although some Macintosh applications have these equivalents, most do not.

Filenames and pathnames represent one of the most fundamental differences between Windows and the Macintosh, as well as perhaps the one most difficult to deal with. Many programmers report dealing with filenames as the area of porting in which the most time and energy is spent. Your Windows-based application probably already handles (and expects) filenames such as "C:\ACCTG\DATA\SEPT93.DAT." Applications for the MS-DOS and Windows operating systems are bound by the traditional 8.3 filename format. Macintosh applications, on the other hand, can handle filenames such as "September, 1993 Accounting Data."

MDI windows allow for multiple child windows within the borders of a top-level window (the "MDI frame"). Many Windows-based applications, such as the Microsoft Word word processor for Windows, are MDI applications. Characteristic of MDI applications are clipped child windows that can be minimized to an icon within the MDI frame. Each MDI child window can also have its own menu.

The Macintosh does not support MDI windows. An application can have multiple windows open; those windows, however, cannot be made into icons, and they share a common menu. Depending on the application, this difference may necessitate significant redesign for a Macintosh port.

Finally, you can keep doing what you know how to do best, writing to the Windows API, and still allow for versions of your application that run on other platforms. Visual C++ now gives you special versions that allow you to do this. Keeping your code portable, thinking about portability all the time, and using the right tools will help you make the multiple platform jump as effortless as possible.

Translation: Today, multi-platform development is a hot topic.
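To make the packing, prototyping, and byte-ordering guidelines above more concrete, here is a small sketch in C. It is our own illustration rather than code from the article; the record layout and the function names are invented, and it simply shows one way to lay out on-disk data byte by byte instead of relying on compiler packing or the host's byte order.

```c
#include <stdio.h>
#include <string.h>

/* A record that is written to disk byte by byte, so its on-disk layout
 * does not depend on compiler packing or on the machine's byte order.
 * The field names are invented for this example. */
struct Record {
    unsigned long id;      /* stored on disk as 4 bytes, little-endian */
    unsigned short flags;  /* stored on disk as 2 bytes, little-endian */
};

/* Fully prototyped, as the portability guidelines above require. */
void encode_record(const struct Record *r, unsigned char buf[6]);
int  is_little_endian(void);

void encode_record(const struct Record *r, unsigned char buf[6])
{
    /* Writing one byte at a time fixes the on-disk byte order no matter
     * which processor (Intel x86 or Motorola 680x0) runs the code. */
    buf[0] = (unsigned char)(r->id & 0xFF);
    buf[1] = (unsigned char)((r->id >> 8) & 0xFF);
    buf[2] = (unsigned char)((r->id >> 16) & 0xFF);
    buf[3] = (unsigned char)((r->id >> 24) & 0xFF);
    buf[4] = (unsigned char)(r->flags & 0xFF);
    buf[5] = (unsigned char)((r->flags >> 8) & 0xFF);
}

int is_little_endian(void)
{
    unsigned short probe = 0x0102;
    unsigned char first;
    memcpy(&first, &probe, 1);
    return first == 0x02;   /* least significant byte stored first */
}

int main(void)
{
    struct Record r = { 0x01020304UL, 0x0506 };
    unsigned char buf[6];

    encode_record(&r, buf);
    printf("host is %s-endian; first encoded byte = 0x%02X\n",
           is_little_endian() ? "little" : "big", buf[0]);
    return 0;
}
```

The same six bytes come out whether the host is little-endian or big-endian, which is the property the article asks of portable file formats and on-disk structures.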
Foreign Literature Translation, Draft 1

Development of a High-Precision, Low-Capacity Electronic Scale Based on a Resistance Strain Load Cell

Baoxiang He, Guirong Lu, Kaibin Chu, Guoqiang Ma

Abstract: In the optimized design of the strain gauges for the load cell, in addition to advanced stabilization techniques such as compensation for temperature effects, static overload and computer pattern recognition (CRT) techniques are used for dynamic simulation and analysis. This multi-oscillation stress-release method is applied creatively in the production of the load cell; owing to this technique, a load cell with a capacity of only 30 g can achieve high precision and high stability. With this load cell, the electronic scale built on it reaches 300,000 divisions and a readability better than 0.2 mg. Its resolution and accuracy far exceed those of similar products on the market, while its price is far lower than that of electromagnetic load cells. The commercial prospects of this load cell are therefore very broad.
Keywords: design; resistance strain load cell; precision; electronic scale

1. Introduction

As is well known, the precision of the load cell is the key factor that determines the precision of an electronic scale. At present, the sensors used for high-precision weighing are mainly electromagnetic force compensation load cells; low-cost resistance strain load cells can only be used for low-precision weighing. The main errors affecting the precision of strain load cells are creep and temperature drift, especially for low-capacity sensors. Generally speaking, the minimum capacity of high-precision sensors is 300 grams, the maximum number of divisions of such load cells is only 50,000, and the minimum resolution is no better than 0.01 g. In short, for ultra-low-capacity load cells, existing design and manufacturing techniques are difficult to apply to the machining and production of such sensitive elements, so it is hard to make load cells good enough for high-precision balances. Low-capacity, high-precision load cells have therefore always been a hot topic worldwide. This paper analyzes stress-release and compensation techniques and explores the manufacturing technology of low-capacity, high-precision strain load cells.
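A quick back-of-the-envelope comparison, added here for illustration and not part of the original paper, shows how the claimed specification relates to the conventional figures above; it assumes the 30 g capacity and 300,000 divisions quoted in the abstract.

```latex
\frac{30\ \mathrm{g}}{300\,000} = 0.1\ \mathrm{mg}\ \text{per division}
\qquad\text{versus}\qquad
\frac{300\ \mathrm{g}}{50\,000} = 6\ \mathrm{mg}\ \text{per division}.
```

The 0.1 mg division is consistent with the claimed readability of better than 0.2 mg and is roughly two orders of magnitude finer than the 0.01 g resolution of the conventional strain load cells described above.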
2. Principle and Methods

A. Release of residual stress

The material of the main component of the load cell is aluminum bar. To obtain better overall properties, the aluminum bar is quenched after extrusion. The residual stress introduced by quenching cannot be fully released by natural aging; moreover, machining and curing also introduce considerable residual stress. For ultra-low-capacity load cells in particular, if this stress is not released in time, it may be released while the load cell is being tested or during its final use.
1 English Original

Reliability-based design optimization of adhesive bonded steel-concrete composite beams with probabilistic and non-probabilistic uncertainties

ABSTRACT: It is meaningful to account for various uncertainties in the optimization design of the adhesive bonded steel-concrete composite beam. Based on the definition of the mixed reliability index for structural safety evaluation with probabilistic and non-probabilistic uncertainties, the reliability-based optimization incorporating such mixed reliability constraints is mathematically formulated as a nested problem. The performance measure approach is employed to improve the convergence and the stability in solving the inner loop. Moreover, the double-loop optimization problem is transformed into a series of approximate deterministic problems by incorporating the sequential approximate programming and the iteration scheme, which greatly reduces the burdensome computation workloads in seeking the optimal design. The validity of the proposed formulation as well as the efficiency of the presented numerical techniques is demonstrated by a mathematical example. Finally, reliability-based optimization designs of a single-span adhesive bonded steel-concrete composite beam with different loading cases are achieved through integrating the present systematic method, the finite element analysis and the optimization package.

1. Introduction

The steel-concrete composite beam, which integrates the high tensile strength of steel and the high compressive strength of concrete, has been widely used in multi-storey buildings and bridges all over the world. At the beginning of the 1960s, an efficient adhesive bonding technique [1,2] was introduced to connect the concrete slab and the steel girder by an adhesive joint, not by the conventional metallic shear connectors. This so-called adhesive bonded steel-concrete composite beam is considered to be a very promising alternative structure because it has the advantages of relieving stress concentration, avoiding site welding, and using the prefabricated concrete slab. Recently, a number of studies on the experimental tests and numerical simulation of adhesive bonded steel-concrete composite beams have been presented in the literature [3-5].

With the ever increasing computational power, the past two decades have seen a rapid development of structural optimization in both theory and engineering applications. In particular, the non-deterministic optimal design of steel or concrete beams incorporating stochastic uncertainties has been intensively studied by using the reliability-based design optimization (RBDO) method [6,7]. Based on the classical probability theory, this conventional RBDO method describes uncertainties in structural systems as stochastic variables or random fields with certain probability distributions and thus provides an effective tool for determining the best design solution while explicitly considering the unavoidable effects of parameter variations [8]. As the most mature non-deterministic design approach, the RBDO has been successfully used in many real-life engineering applications [9,10]. However, the primary challenge in applying the conventional RBDO to practical applications is the availability of precise statistical characteristics, which are crucial for a successful probabilistic reliability analysis and design. Unfortunately, these accurate data usually cannot be obtained in some practical applications where only a limited number of samples are available.

The early treatment [11,12] for insufficient uncertainty data is to construct a closest uniform probabilistic distribution by using the principle of maximum entropy. In the 1990s, Elishakoff [13,14] showed that a small error in constructing the probabilistic density function for input uncertainties may lead to a misleading assessment of the probabilistic reliability in particular cases. This conclusion indicates that using the traditional probabilistic approach to deal with problems involving incomplete information might be unconvincing. Consequently, an alternative category, namely the non-probabilistic approach [15], has been rapidly developed for describing uncertainty with incomplete statistical information by a fuzzy set or a convex set. In the fuzzy set method [16,17], the fuzzy failure probability of structures is assessed based on a membership function representation of the observed/measured inputs. In the convex set method [18-20], all possible values of the uncertainties are bounded within a hyper-box or hyper-ellipsoid without assuming any inner probability distributions. Non-probabilistic models have been regarded as attractive supplements to the traditional probabilistic model in the reliability design of structural engineering. The interested reader is referred to research papers by, e.g., Moens and Vandepitte [21], Moller and Beer [22], and Elishakoff and Ohsaki [23].

In a practical engineering problem of adhesive bonded steel-concrete composite beams, the uncertain scatter of structural parameters about their expected values is unavoidable. For example, the applied loads may fluctuate dramatically during its service life-cycle, and the parameters defining the structure, such as geometrical dimensions and material properties, are also subject to inaccuracies or deviations. Among these concerned uncertainties, some can be characterized with precise-enough probability distributions, while others need to be treated as bounded ones due to a lack of sufficient sample data. A typical example of such bounded uncertainties is the load magnitude and the geometrical dimensions of a manufactured product, the variation ranges of which are controlled by specified tolerance bounds. From as early as 1993, attempts have been made to assess and analyze the structural safety in the presence of both stochastic variables and uncertain-but-bounded variables by Elishakoff and Colombi [24]. Recently, many numerical methods, including the multi-point approximation technique [25], the iterative rescaling method [26], the probability bounds (p-box) approach [27], and the interval truncation method [28], have been proposed for estimating the lower and upper bounds of the failure probability of structures with a combination of stochastic and interval variables. Detailed surveys of both known and new algorithms for this safety assessment problem have also been made by Berleant et al. [29] and Kreinovich et al. [30]. However, only a few studies have considered various uncertainties in reliability-based design optimization problems. Du et al. [31] extended the conventional RBDO method to structural design problems under the combination of random and interval variables. In their study, a procedure for seeking the worst-case combination of the interval variables is embedded into the probabilistic reliability analysis. As the literature survey shows, the existing studies mainly focus on solving the combination of random/interval variables.
Basically, the interval set does not account for the dependencies among the bounded uncertainties, and it can be regarded as the simplest instance of the set-valued convex model. Due to the unpredictability of structural parameters and the impossibility of acquiring sufficient uncertainty information, problems of structural optimization must be solved in the presence of various types of uncertainties, which remains a challenging problem in realistic systems [32]. As a consequence, a practical and efficient reliability-based design optimization capable of quantifying probabilistic and non-probabilistic uncertainties, as well as the associated numerical techniques, should be fully developed and adopted in the professional practice of adhesive bonded steel-concrete composite beam design.

In this paper, using the mathematical definition of the structural reliability index based on the probability and convex set mixed model [33], a nested optimization formulation with constraints on such mixed reliability indices for the adhesive bonded steel-concrete composite beam is first presented. To improve the convergence and the stability in solving the sub-optimization problems, the performance measure approach (PMA) [34] is skillfully employed. Then, the sequential approximate programming approach embedded with an iterative scheme is proposed for converting the nested problem into a series of deterministic ones, which greatly reduces the burdensome computation workloads in seeking the optimal design. Through comparison with the direct nested double-loop approach, the applicability and the efficiency of the proposed methods are demonstrated by a classical mathematical example. Finally, the reliability-based optimization designs of a single-span adhesive bonded steel-concrete composite beam are achieved through integrating the present systematic method, the finite element analysis program and the gradient-based design optimization package CFSQP [35].

2. RBDO of adhesive bonded steel-concrete composite beams

2.1. Description of probabilistic and non-probabilistic uncertainties

In practical engineering, the uncertain parameters involved in the design problem can be classified into probabilistic uncertainties (denoted by X = {X1, X2, …, Xm}^T) and non-probabilistic uncertainties (denoted by Y = {Y1, Y2, …, Ym}^T) according to their available input samples. It is desirable to select the most suitable models to respectively describe these different types of uncertainties. Undoubtedly, the probabilistic uncertainties X can be modelled as stochastic variables with certain distribution characteristics, described by the joint probability density function f_X(x), where x = {x1, x2, …, xm}^T represents a realization of the variables X.
In the classical probabilistic framework [36], the structural reliability is given as the probability

Pr[g(X) ≥ 0],

where Pr[·] denotes the probability, g(X) is a limit-state function, and g(X) ≥ 0 defines the safe event.

For the non-probabilistic uncertainties, the bounds or ranges of parameter variation, compared with a precise probability density function, are more easily obtained from the limited measurement results, e.g. the least data envelope set or the manufacturing tolerance specifications. In such circumstances, a multi-ellipsoid convex model [37] is competent for the non-probabilistic uncertainty description. Following this frequently used convex model, all the non-probabilistic parameters are divided into groups with the rule that variations of parameters in different groups are uncorrelated. Herein, each group of uncertainties is bounded by an individual hyper-ellipsoid convex set, respectively, as

E_i = { Y_i : (Y_i - Y_i^0)^T W_i (Y_i - Y_i^0) ≤ θ_i^2 },  i = 1, 2, …, n_g,

where Y_i^0 is the nominal value vector of the i-th group of uncertainties, W_i is the characteristic matrix, a symmetric positive-definite real matrix defining the orientation and aspect ratio of the i-th ellipsoid, θ_i is a real number defining the magnitude of the parameter variability, and n_g is the total number of groups of the non-probabilistic uncertainties Y. Supposing n_i is the number of uncertainties in the i-th group, the sum n_1 + n_2 + … + n_{n_g} equals the total number of non-probabilistic uncertainties.

For an illustrative purpose, three specific multi-ellipsoid cases for a problem with three non-probabilistic parameters, which are divided into three groups, two groups and one group respectively, are schematically shown in Fig. 1(a)-(c). As illustrated in Fig. 1(a), the multi-ellipsoid set reduces to a hyper-box (or interval set) when each group consists of only one uncertain parameter. In Fig. 1(c), the single-ellipsoid set represents another special case of the multi-ellipsoid set in which all the bounded uncertainties are correlated into one group. Thus, the multi-ellipsoid convex model in (4) provides a generalized framework that extends common interval sets and single-ellipsoid sets for the representation of non-probabilistic uncertainties.

2.2. Definition of the structural mixed reliability index

For the assessment of the structural reliability combining probabilistic and non-probabilistic uncertainties, it is convenient to transform the original non-normal or dependent random variables X = {X1, X2, …, Xm}^T into independent normal random ones U = {U1, U2, …, Um}^T in U-space via the Rackwitz-Fiessler method [38] or the Rosenblatt method [39]. In the simplest case, a normal random variable X can be transformed into a standard normal random variable U by

U = (X - μ_X) / σ_X,

where μ_X and σ_X are the mean value and the standard deviation of X, respectively.

2 Chinese Translation

Reliability-based design optimization of adhesive bonded steel-concrete composite beams with probabilistic and non-probabilistic uncertainties

Abstract: It is of great significance to account for various uncertain factors in the optimization design of the adhesive bonded steel-concrete composite beam.
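To make the multi-ellipsoid convex set of Section 2.1 above more concrete, the following minimal C sketch checks whether a group of two bounded parameters lies inside its hyper-ellipsoid. It is added for illustration only and is not code from the paper; the group size, nominal values, characteristic matrix W and radius θ are all hypothetical.

```c
#include <stdio.h>

/* Check whether a group of two bounded parameters y lies inside its
 * hyper-ellipsoid set:  (y - y0)^T W (y - y0) <= theta^2.
 * All numbers below are hypothetical values chosen for illustration. */
static int inside_ellipsoid(const double y[2], const double y0[2],
                            const double W[2][2], double theta)
{
    double d[2] = { y[0] - y0[0], y[1] - y0[1] };
    double q = d[0] * (W[0][0] * d[0] + W[0][1] * d[1])
             + d[1] * (W[1][0] * d[0] + W[1][1] * d[1]);
    return q <= theta * theta;
}

int main(void)
{
    double y0[2] = { 10.0, 2.0 };                    /* nominal values        */
    double W[2][2] = { { 4.0, 0.0 }, { 0.0, 1.0 } }; /* characteristic matrix */
    double theta = 1.0;                              /* variability magnitude */
    double y[2] = { 10.3, 2.5 };                     /* a candidate realization */

    printf("inside: %d\n", inside_ellipsoid(y, y0, W, theta));
    return 0;
}
```

In a design check the same quadratic-form test would simply be evaluated for each of the n_g groups in turn.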
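Similarly, the reliability Pr[g(X) ≥ 0] and the standard-normal transformation of Section 2.2 can be illustrated with a crude Monte Carlo estimate. Again this is only a sketch of ours, not the paper's method: the limit-state function g and all means and standard deviations are invented, and a real RBDO loop would rely on the performance measure approach described in the abstract rather than brute-force sampling.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Draw one standard normal sample with the Box-Muller transform. */
static double std_normal(void)
{
    const double TWO_PI = 6.283185307179586;
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);   /* in (0,1) */
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* Hypothetical limit-state function: the design is safe when g(x) >= 0. */
static double g(double x1, double x2)
{
    return 3.0 * x1 - x2 - 1.0;
}

int main(void)
{
    /* X = mu + sigma * U is the inverse of the transformation
     * U = (X - mu) / sigma quoted in Section 2.2; the means and
     * standard deviations below are invented for the example. */
    const double mu1 = 2.0, sigma1 = 0.3;
    const double mu2 = 4.0, sigma2 = 0.8;
    const long n = 1000000;
    long safe = 0;

    for (long i = 0; i < n; ++i) {
        double x1 = mu1 + sigma1 * std_normal();
        double x2 = mu2 + sigma2 * std_normal();
        if (g(x1, x2) >= 0.0)
            ++safe;
    }
    printf("estimated reliability Pr[g(X) >= 0] = %.4f\n", (double)safe / n);
    return 0;
}
```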
Singularity Condition of Six-Degree-of-Freedom Parallel Robots Based on Grassmann-Cayley Algebra

Patricia Ben-Horin and Moshe Shoham, Member, IEEE

Abstract: This paper studies the singularity condition of most six-degree-of-freedom parallel robots that have one spherical joint in each leg. First, the actuator screws of the leg chains are determined. Then Grassmann-Cayley algebra and the associated decomposition methods are used to determine the conditions under which the matrix of derivatives (or rigidity matrix) containing these screws becomes rank-deficient. These tools are advantageous because they facilitate the manipulation of coordinate-free expressions representing geometric entities, so that the geometric interpretation of the singularity condition is easier to obtain. Using these tools, the singularity condition of (at least) 144 combinations of this class is delineated as the intersection of four planes at one point. These four planes are defined by the positions of the spherical joints and the directions of the zero-pitch screws.

Index Terms: Grassmann-Cayley algebra, singularity, three-legged robots.
I. Introduction

Over the past two decades, many researchers have extensively studied the singularities of parallel robots. Unlike serial robots, which lose degrees of freedom at singular configurations, parallel robots gain degrees of freedom there even though their actuators are locked. Therefore, comprehensive knowledge of these unstable poses is essential for improving robot design and for robot path planning. One of the main methods used to find the singularities of parallel robots is based on calculating the determinant of the Jacobian matrix. Gosselin and Angeles [1] classified the singularities of closed-loop mechanisms by considering two Jacobians that define the relationship between the input and output velocities. St-Onge and Gosselin [2] reduced the number of arithmetic operations required to evaluate the Jacobian determinant of the Gough-Stewart platform (GSP), so that the determinant polynomial could be computed numerically.

Another important tool for singularity analysis is screw theory, first expounded in a 1900 treatise [6] and later developed for robotics applications. Several studies have applied this theory to find the singularities of parallel robots, for example, [11]-[14]. Particular attention has been paid to the case in which the actuators are linear and the representative screws are zero-pitch. In these cases, the singular configurations are found by using line geometry to search for possible dependencies among the actuator lines [15]-[17]. Other classification methods for closed-loop mechanisms can be found in [18]-[22].
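To give a small numerical flavour of the Jacobian-determinant test mentioned in the introduction above, here is a toy sketch in C. It is our own illustration, not part of the translated paper: the matrix is a hypothetical 3x3 stand-in for the 6x6 Jacobian of a real six-degree-of-freedom platform, and a vanishing determinant is used as the rank-deficiency (singularity) flag.

```c
#include <math.h>
#include <stdio.h>

#define N 3   /* toy size; a real Gough-Stewart Jacobian would be 6x6 */

/* Determinant by Gaussian elimination with partial pivoting. */
static double det(double a[N][N])
{
    double d = 1.0;
    for (int k = 0; k < N; ++k) {
        int p = k;
        for (int i = k + 1; i < N; ++i)
            if (fabs(a[i][k]) > fabs(a[p][k])) p = i;
        if (fabs(a[p][k]) < 1e-12) return 0.0;   /* rank-deficient column */
        if (p != k) {
            for (int j = 0; j < N; ++j) {
                double t = a[k][j]; a[k][j] = a[p][j]; a[p][j] = t;
            }
            d = -d;
        }
        d *= a[k][k];
        for (int i = k + 1; i < N; ++i) {
            double m = a[i][k] / a[k][k];
            for (int j = k; j < N; ++j) a[i][j] -= m * a[k][j];
        }
    }
    return d;
}

int main(void)
{
    /* Hypothetical Jacobian of one pose; the third row is the sum of the
     * first two, so the matrix is rank-deficient and the pose singular. */
    double J[N][N] = { { 1.0, 0.0, 2.0 },
                       { 0.0, 1.0, 1.0 },
                       { 1.0, 1.0, 3.0 } };
    double d = det(J);
    printf("det(J) = %g -> %s\n", d,
           fabs(d) < 1e-9 ? "singular configuration" : "regular configuration");
    return 0;
}
```

In practice a tolerance is compared against a scaled determinant or the smallest singular value, since floating-point round-off rarely produces an exact zero.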
Graduation Project (Thesis) Foreign Literature Translation
Department: School of Information Engineering
Year and Major: Electronic Information Engineering
Name: 装化  Student No.: 20122450236
Attachment: digital filter design
Foreign literature: digital filter design

Abstract: With the information age and the advent of the digital world, digital signal processing has become one of today's most important disciplines and fields of technology. Digital signal processing is widely applied in communications, speech, image processing, automatic control, radar, military, aerospace, medical, and household-appliance applications, among many other fields. Among these applications, the digital filter is particularly important and has been widely used.

Keywords: SCM (single-chip microcomputer); Proteus; C language; digital filter
Digital signal processing in communications,voice,images, automatic control, radar, military,aerospace,medical and household appliances,and many other fields widely applied. In the digital signal processing applications,the digital filter is important and has been widely applied.Keyword:SCM; Proteus, C language;Digital filter1、figures Unit on :Analog and digital filtersIn signal processing,the function of a filter is to remove unwanted parts of the signal,such as random noise, or to extract useful parts of the signal, such as the components lying within a certain frequency range。