
Blackfin Core Architecture (November 2008) blackfin_core_architecture_overview_transcript


© 2007 Analog Devices, Inc. Page 1 of 21

Blackfin Online Learning & Development

Presentation Title: Blackfin Processor Core Architecture Overview

Presenter: George Kadziolka

Chapter 1: Introduction

Subchapter 1a: Module Overview

Subchapter 1b: Blackfin Family Overview

Subchapter 1c: Embedded Media Applications

Chapter 2: The Blackfin Core

Subchapter 2a: Core Overview

Subchapter 2b: Arithmetic Unit

Subchapter 2c: Addressing Unit

Subchapter 2d: Circular Buffering

Subchapter 2e: Sequencer

Subchapter 2f: Event Handling

Subchapter 2g: FIR Filter Example

Chapter 3: The Blackfin Bus and Memory Architecture

Subchapter 3a: Overview

Subchapter 3b: Bus Structure

Subchapter 3c: Configurable Memory

Subchapter 3d: Cache and Memory Management

Chapter 4: Additional Blackfin Core Features

Subchapter 4a: DMA

Subchapter 4b: Dynamic Power Management

Subchapter 4c: Blackfin Hardware Debug Support

Chapter 5: Conclusion

Subchapter 5a: Summary

Subchapter 5b: Resources

Chapter 1: INTRODUCTION

Subchapter 1a: Module Overview

Hello, this presentation is entitled The Blackfin Core Architecture Overview. My name is George Kadziolka. I'm the President of Kaztek Systems. I've been personally providing the training on the various Analog Devices architectures since 2000.

This module will introduce the Blackfin family which includes the Blackfin processors, tools and other development support that’s available. Then we’re going to get into the Blackfin Core Architecture and tell you a little bit about what’s going on under the hood.

The Blackfin family will be the first stop: we're going to talk about the Blackfin processor, some of its origins, and the tools that are available. Then we'll get into the Blackfin core and talk about the arithmetic operations, data fetching, and program flow control with the sequencer. We'll get into the Blackfin bus architecture and hierarchical memory structure, and we'll talk about caching when we get to the memory management section.

Additional Blackfin core features will also be touched on including DMA, Power Management as well as some of the hardware debug support that’s available.

Subchapter 1b: Blackfin Family Overview

The Blackfin family consists of a broad range of Blackfin processors to suit a number of different applications. There's also a wide variety of software development tools available; Analog Devices, for instance, provides VisualDSP++ and supports a uCLinux development environment. There are also a number of tools to evaluate hardware, for instance in the form of EZ-KITs, as well as a number of debug tools available to debug your application on your target platform.

In addition to the support that Analog Devices provides, there's extensive third party support through the third party collaborative, where you can find additional development tools; there are a number of vendors providing those. A variety of operating systems can run on the Blackfin, including uCLinux. There are a number of companies providing network stacks, TCP/IP stacks, CAN stacks, a number of vendors providing hardware building blocks and CPU modules, as well as just a vast variety of software solutions.

All Blackfin processors combine extensive DSP capability with high end microcontroller functions, all in the same core. This allows you to develop efficient media applications while only using a single tool chain. All Blackfin processors are based on the same core architecture, so what this means is once you understand how one Blackfin processor works, you could easily migrate to other variants. The code is also compatible from one processor to another.

Processors vary in clock speed, the amount of on-chip memory, the mix of peripherals, power consumption, package size, and of course, price, so there's a very large selection of Blackfin variants to let you choose which Blackfin processor is most suitable for your particular application.

What we’re looking at here is a variety of the peripherals that are offered within the Blackfin family, such as the EBIU or the micro processor style interface. We also have a PPI synchronous parallel interface typically used for video applications, serial ports or SPORTs, typically used for audio interfaces. GPIO pins are there, timer, UART, Ethernet capabilities built into several of the Blackfin variants as well as CAN or Controller Area Network and, USB. For a complete list of what the mixes are you’re encouraged to go look at the Blackfin selection guide online at Analog’s website.

Subchapter 1c: Embedded Media Applications

In a typical embedded media application, the traditional model consists of having a microcontroller providing operations such as control, networking, and a real time clock; usually it's a byte addressable part as well. Then you might have several blocks to perform signal processing functions, which could be audio compression, video compression, beam forming or what have you. Then you typically have some sort of ASIC, FPGA or some other logic to provide an interface to SDRAM or other peripherals. All these functions are combined into the Blackfin, so now with a single core you can perform all these operations.

What does the Blackfin architecture mean to the developer? Well, by combining a very capable 32 bit controller with dual 16 bit DSP capabilities all in the same core, you can develop very efficient media applications such as multimedia over IP, digital cameras, telematics, and software radio, just to name a few examples. From a development perspective, the single core means that you only have one tool chain to worry about: you don't have a separate controller and a separate DSP, you have everything done on one core. Your embedded application, which consists of both control and signal processing code, is going to be dealt with by the same compiler, so the result is very dense control code and high performance DSP code.

From a controller perspective, the Blackfin has L1 memory space that can be used for stack and heap, so this means single cycle pushes and pops. There are dedicated stack and frame pointers. Byte addressability is a feature, so we could be dealing with 8 bit, 16 bit, or 32 bit data, but they're all residing at some byte address, as well as simple bit level manipulation.

From a DSP perspective, the Blackfin has fast computational capability. But of course, that isn't very useful unless you can get data moved in and out efficiently as well, so unconstrained data flow is another key feature. A lot of DSP operations are sums of products, and the intermediate sums require high dynamic range to store them, so extended precision accumulators are also another key feature. Efficient sequencing, being able to deal with interrupts efficiently, looped code, and branch operations are dealt with efficiently in the Blackfin, as well as efficient IO processing: things such as DMA offer an efficient way of communicating with the outside world. The DSP aspect of the Blackfin core is optimized to perform operations such as FFTs, as well as convolutions.

What we’re looking at here is an example of a Blackfin processor, this is one of the members of the 54X family, but we’re really trying to illustrate here is that the core, which consists of the Blackfin processor, internal memory, is common to all of the Blackfin variants. Once you learn how one Blackfin processor works you can easily migrate to others, as we mentioned earlier.

Chapter 2: The Blackfin Core

Subchapter 2a: Core Overview

Let’s get into the Blackfin core. The core itself consists of several units, there’s an arithmetic unit, which allows us to perform SIMD operations, where basically with a single instruction we can operate on multiple data streams. The Blackfin is also a load store architecture which means that the data that we’re going to be working with needs to be read from memory and then we calculate the results and store the results back into memory.

There’s also an addressing unit which supports dual data fetch, so in a single cycle we can fetch two

pieces of information, typically data and filter coefficients. We also have a sequencer which is dealing with program flow on the Blackfin, and we just want to mention that there’s several register files, for instance for data as well as for addressing, and we’ll talk about those as we go through this section.


Subchapter 2b: Arithmetic Unit

The first stop is the arithmetic unit. The arithmetic unit of course performs the arithmetic operations in the Blackfin. It has a dual 40 bit ALU, which performs 16, 32 or 40 bit operations, arithmetic and logical. We also have a pair of 16 by 16 bit multipliers, so we can do up to a pair of 16 bit multiplies at the same time, and when combined with the 40 bit ALU accumulators we can do dual MACs in the same cycle. There's also a barrel shifter which is used to perform shifts, rotates and other bit operations.

The data registers are broken up into sections. What we have here is the register file, which consists of eight 32 bit registers from R0 to R7. They can be used to hold 32 bit values. Of course when we're doing DSP operations, a lot of our data is in 16 bit form, so they can hold pairs of 16 bit values as well. R0.H and R0.L, for instance, correspond to the upper and lower halves of the R0 register, which could hold two 16 bit values. We also have a pair of 40 bit accumulators, A0 and A1, which are typically used with multiply accumulate operations to provide extended precision storage for the intermediate products.

We’re looking at several ALU instructions, in this case 16 bit ALU operations. We’re also taking a look at the algebraic syntax for the assembly language. As we can see the syntax is quite intuitive it makes it very easy to understand what the instruction is doing. In the first example we have a single 16 bit operation, R6.H = R3.H + R2.L:, so this is just the notation of the assembly language. It’s very straight forward, what we’re saying is we’re going to take the upper half of R3 added to the lower half of R2 and place the result into the upper of R6.

The next example is a dual 16 bit operation, R6 = R2 +|- R3. What we're saying is we're going to read R2 and R3 as a pair of 32 bit registers, but that +|- operator in between basically says it's a dual 16 bit operation. We're going to be taking the lower half of R3 and subtracting it from the lower half of R2, with the result going to the lower half of R6. In addition, we're going to take the upper half of R3, add it to the upper half of R2, and place the result in the upper half of R6. This is effectively a single cycle operation.

We also have an example of a quad 16 bit operation, where what we're really doing is just combining two 16 bit operations in the same instruction. There's a comma here in between the two to indicate that we're trying to do those four operations in the same cycle. This is done when we want the sum and difference between the pairs of 16 bit values in our two operands. Notice we have R0 and R1 on both sides of the comma in this example.

Of course we can also do 32 bit arithmetic operations as we see here. R6=R2+R3;. Similar example to what we saw previously, the difference is we only have a single operator so that tells the assembler that we’re doing a single 32 bit operation. In other words R2 and R3 contain 32 bit values not pairs of 16 bit values. We can also do a dual 32 bit operation where we take the sum and difference between, in this case, R1 and R2. These also are effectively single cycle operations.

We mentioned the multipliers; in this particular example we have a pair of MAC operations going on at the same time: A1 -= R2.H * R3.H, A0 += R2.L * R3.L;. Again, two separate multiply accumulate operations. R2 and R3 are the 32 bit registers containing the input data, and we can mix and match which half registers we use as our input operands. Again, this is effectively a single cycle operation, and in addition this could be combined with up to a dual data fetch.
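A C sketch of that dual MAC follows; 64 bit integers stand in for the 40 bit accumulators, and plain C can only express the two MACs sequentially where the hardware does them in one cycle. Names are illustrative.

```c
#include <stdint.h>

/* Sketch of "A1 -= R2.H * R3.H, A0 += R2.L * R3.L": two signed 16x16
 * products update two extended-precision accumulators. */
static void dual_mac(int64_t *a1, int64_t *a0, uint32_t r2, uint32_t r3)
{
    int16_t r2h = (int16_t)(r2 >> 16), r3h = (int16_t)(r3 >> 16);
    int16_t r2l = (int16_t)r2,         r3l = (int16_t)r3;
    *a1 -= (int32_t)r2h * r3h;   /* A1 -= R2.H * R3.H */
    *a0 += (int32_t)r2l * r3l;   /* A0 += R2.L * R3.L */
}
```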

We also have a barrel shifter. The barrel shifter enables shifting or rotating any number of bits within a half register, 32 or 40 bit register, all in a single cycle. It's also used to perform individual bit operations on a 32 bit register in the register file; for instance, we can individually set, clear, or toggle a bit without affecting any of the other bits in the register. We can also test to see if a particular bit has been cleared.
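The single-bit operations map directly onto ordinary C bit twiddling; this sketch shows the equivalents (function names are mine, not the Blackfin mnemonics, though the instructions they mirror are the set/clear/toggle/test ones just described).

```c
#include <stdint.h>

/* C equivalents of the Blackfin single-bit operations: set, clear, or
 * toggle bit n without disturbing the others, and test a bit. */
static uint32_t bit_set(uint32_t r, int n) { return r |  (1u << n); }
static uint32_t bit_clr(uint32_t r, int n) { return r & ~(1u << n); }
static uint32_t bit_tgl(uint32_t r, int n) { return r ^  (1u << n); }
static int      bit_tst(uint32_t r, int n) { return (int)((r >> n) & 1u); }
```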

There’s also instructions to allow field extract and field deposit. With this you specify the starting position of the field, how many bits long you’re interested in, and you can pull that field out from a register and return in the lower bits of another register. Used for bit stream manipulation.

The Blackfin also has four 8 bit video ALUs, and the purpose of these devices is to allow you to perform up to four 8 bit operations in a single cycle; 8 bit because a lot of video operations typically deal with 8 bit values. As some examples here, we can do a quad 8 bit add or subtract, where we have four bytes for each of the operands and just add or subtract them. We can do a quad 8 bit average, where we can average pairs of pixels or groups of four pixels. There's also an SAA instruction: Subtract, Absolute, Accumulate. What it does is allow you to take two sets of four pixels, take the absolute value of the difference between them, and accumulate the results in the accumulators; in fact four 16 bit accumulators are used. This is used for motion estimation algorithms.
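The SAA idea can be sketched in C as follows: for two groups of four 8 bit pixels, accumulate the absolute differences into four running 16 bit sums. This only models the arithmetic; the real instruction does all four lanes in parallel, and the function name is illustrative.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of Subtract-Absolute-Accumulate: acc[i] += |a[i] - b[i]| for
 * four 8-bit pixels, the core step of motion estimation. */
static void saa(const uint8_t a[4], const uint8_t b[4], uint16_t acc[4])
{
    for (int i = 0; i < 4; i++)
        acc[i] += (uint16_t)abs((int)a[i] - (int)b[i]);
}
```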


All of these quad 8 bit ALU instructions still effectively take one cycle to execute, so they're quite efficient. What we're showing below is just the typical setup. We need our four byte operands: we start off with a field of two 32 bit registers, which provides 8 bytes, and we select any contiguous four bytes in a row from each of those fields. That's one operand, with the second one being fed to the four 8 bit ALUs the same way.

The Blackfin also supports a number of additional instructions to help speed up the inner loop of many algorithms, such as bitwise XOR. If you're creating linear feedback shift registers, and you might be doing this if you're doing CRC calculations or generating pseudo random number sequences, there are instructions to basically speed up this process. The bitwise XOR and bit stream multiplexing instructions are also used to speed up operations such as convolutional encoders. There's also add on sign, compare, select, which can be used to facilitate the Viterbi decoders in your applications.

There’s also support for being compliant with the IEEE 1180 standard for two dimensional eight by eight discrete cosine transforms. The add subtract with pre-scale up and down, we can add or subtract 32 bit values and then either multiply it by 16 or divide by 16 before returning 16 bit value. There’s also a vector search where you can go through a vector of data and a pair of 16 bit numbers at a time, search for the greatest or the least of that vector.

Subchapter 2c: Addressing Unit

Now we’re moving on to the addressing unit. The addressing unit is responsible for generating addresses for data fetches. There are two DAGs, or Data Address Generator arithmetic units which enable

generation of independent 32 bit addresses. 32 bits allows us to reach anywhere within the Blackfin’s memory space, internally or externally. The other thing to make note of is we can generate up to two addresses, or perform up to two fetches at the same time.

We’re looking at the address registers contained within the addressing unit, there are six general purpose pointer registers, P0 through P5. These are just used for general purpose pointing. We initialize them with 32 bit value to point to any where in the Blackfin’s memory space, and we can use them for doing 8, 16, or 32 bit data fetches. We also have four sets of registers which are used for DSP style data fetches. This includes 16 and 32 bit fetches. DSP style means the types of operations that we typically use in DSP

a

? 2007 Analog Devices, Inc. Page 8 of 21 B lackfin O nline L earning & D evelopment operations such as dual data fetches, circular buffer addressing, or if we want to do, say, bit reversal, that

would be done with these particular registers.

We also have dedicated stack and frame pointers, as shown here. The addressing unit also supports the following operations. We can do addressing only, where we specify an index register or pointer register to point to some address in memory that we're going to fetch from. We can also do a post modify type of operation, where the register that we specify will be modified automatically after the data fetch is done, in order to prepare it for the next data fetch operation; circular buffering, for instance, is supported using this method. We can provide an address with an offset, where a pointer register might be pointed at the base address of a table and we want to fetch with some known offset from that base address. When we do that, the modify values do not apply; no pointer update is performed.

We can do an update only, just to prepare a register for an upcoming fetch, or we can modify the address with a reverse carry add, if we’re doing bit reversal types of data fetches.

All addressing on the Blackfin is register indirect, which means we always have to use a pointer register or index register to point to a 32 bit address we want to fetch from. If we are using the index registers, I0 through I3, we're going to be using these for doing either 32 or 16 bit fetches from memory. The general purpose pointer registers, P0 through P5, can be used for 32, 16 and 8 bit fetches. The stack and frame pointer registers are for 32 bit accesses only.

All the addresses provided by the addressing unit are byte addresses, whether we’re doing 8, 16 or 32 bit fetches we’re always pointing at the little end of that word that we want to fetch. Addresses need to be aligned. For instance, for fetching 32 bit words, the address must be a multiple of four.
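The alignment rule can be stated as a one-line check: an address is valid for a given access width only if it is a multiple of that width in bytes. A hedged sketch, with an illustrative function name:

```c
#include <stdint.h>

/* Sketch of the alignment rule: 16-bit fetches need even addresses,
 * 32-bit fetches need addresses that are a multiple of four. */
static int is_aligned(uint32_t byte_addr, unsigned width_bytes)
{
    return (byte_addr % width_bytes) == 0;
}
```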

Subchapter 2d: Circular Buffering

We’re going to talk about circular buffering here. First of all, circular buffers are used a lot in DSP

applications, where we have data coming in a continuous stream, but the filter that we’re applying usually only needs to have a small portion of that, so our circular buffers are typically the size of the filter that we’re using. What happens is we have a pointer, pointing to data that we’re fetching from the circular buffer. As we step through the data we need to do a boundary check when we update the pointer to make

a

? 2007 Analog Devices, Inc. Page 9 of 21 B lackfin O nline L earning & D evelopment sure that we’re not stepping outside the bounds of the circular buffer. If we do step outside we need to wrap the pointer to be put back in bounds of the circular buffer. This is done without any additional overhead in hardware in the Blackfin. We can also place these buffers anywhere in memory without

restriction due to the base address registers.

In this particular example what we have is a buffer, which contains eleven 32 bit elements, and our goal is going to be to step through every fourth element and still stay inside the bounds of the circular buffer. What we’re going to do is initialize the base and the index register. The base register is the address of where this buffer is in memory, and the index register is the starting address of where we’re going to start fetching from. They’re both initialized to zero in this example.

The buffer length will be set to 44: there are 11 elements, with four bytes per element since each element is a 32 bit word. We initialize the buffer length with the number of bytes in the buffer. We also want to step through every fourth element, so because the elements are 32 bits wide, we have to provide a byte offset of 16 in this case.

The first access will occur from address 0, and when we're done we're going to bump the pointer and be looking at address 0x10. On the second access we'll fetch from 0x10 and bump the pointer again automatically to be pointing at 0x20. On the third fetch we'll fetch from 0x20, but when we apply the 0x10 offset, the resulting 0x30 will be out of bounds, so what happens is the address generator wraps the pointer back in bounds. Where does it wrap to? Well, we want to be four elements away, but still inside the bounds of the circular buffer. If we're starting at 0x20: one, two, three, four, we get wrapped to address 0x4. This is handled automatically in the address generator.
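The walkthrough above can be sketched as a C model of the DAG update, assuming forward stepping with a modify value smaller than the buffer length: advance the index by the modify value and subtract the length when it leaves the buffer. Names are illustrative, not ADI register names.

```c
#include <stdint.h>

/* Sketch of the circular-buffer index update: base b, length l (bytes),
 * modify m (bytes). After each fetch the index advances by m and wraps
 * back into [b, b + l) if it stepped past the end. */
static uint32_t circ_update(uint32_t i, uint32_t b, uint32_t l, uint32_t m)
{
    i += m;
    if (i >= b + l)   /* stepped past the end of the buffer */
        i -= l;       /* wrap back inside the circular buffer */
    return i;
}
```

With b = 0, l = 44 and m = 16 this reproduces the example: 0x00 → 0x10 → 0x20 → 0x04.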

Subchapter 2e: Sequencer

The last block that we’re going to talk about in the core is the sequencer itself. The sequencer’s function is to generate addresses for fetching instructions. It uses a variety of registers in which to select what’s the next address that we’re going to fetch from. The Sequencer also is responsible for alignment. It aligns instructions as they’re fetched. The Sequencer always fetches 64 bits at a time from memory, and we’ll talk about this later, but there’s different size op-codes, and what it does is it basically realigns 16, 32 or 64 bit op-codes before sending them to the rest of the execution pipe line.


The Sequencer is also responsible for handling events. Events could be interrupts or exceptions, and we'll talk about that shortly. Any conditional instruction execution is also handled by the Sequencer. The Sequencer accommodates several types of program flow. Linear flow of course is the most common, where we just execute code in a straight line fashion. Loop code is handled, where you might have a sequence of instructions that you want to execute over and over again, some counted number of times. We can have a permanent redirection of program flow with a jump type instruction, where we just branch to some address in memory and execute from that point on. And there are subroutine types of operations, where you call some subroutine at some address, we execute it, and we return back to the address following the call instruction.

An interrupt is like a call; it's a temporary redirection, except it's typically governed by hardware. An interrupt comes in during the execution of an instruction, then we vector off to the interrupt service routine, execute the code in the ISR, and return to the address following the instruction that got interrupted in the first place.

We also have an idle instruction. What the idle instruction does is essentially just halt the pipeline: all the instructions that are in there get executed, and then we just sort of wait for either a wake up event or an interrupt to come along, and then we'll just continue fetching instructions from that point on. It's actually also tied into the Power Management, and we'll touch on that a little bit later on.

The Sequencer has a number of registers that are used in the control of program flow on the Blackfin. There's an arithmetic status register; any time we do an arithmetic operation you could be affecting bits in here, AZ for zero and AN for negative, for instance. The Sequencer uses these status bits for the conditional execution of instructions. We also have a number of return address registers, RETS, RETI, RETX as shown down here, for subroutines or any number of events. These hold the 32 bit return address for any flow interruption, either a call or some sort of interrupt. We also have two sets of hardware loop management registers, LC, LT, LB: the loop counter, the address of the top of the loop, and the address of the bottom of the loop. They're used to manage zero overhead looping. Zero overhead looping means once we set up these registers, we spend no CPU cycles decrementing the loop counter, checking to see if it's reached zero, or branching back to the top of the loop. This is all handled without any overhead at all, just by simply initializing these registers. Two sets of these registers allow us to have two nested levels of zero overhead looping.


The Blackfin also has an instruction pipeline, and this of course is responsible for the high clock speeds that are possible. The one thing to notice is the pipeline is fully interlocked. What this means is that in the event of a data hazard, such as a couple of instructions going through where one instruction sets up a register that the next one is supposed to be using, if it's not ready because of where it is in the pipeline, the Sequencer automatically inserts stall cycles and holds off the second instruction until the first one is complete.

Of course the C compiler is very well aware of the pipeline and will automatically rearrange instructions in order to eliminate stalls or reduce the number of stall cycles.

Quickly going through the different stages: we've got three instruction fetch stages, where the address is put out on the bus, we wait for the instruction to come back, and the instruction comes in at the IF3 stage. Instructions are decoded. The address calculation stage is where we calculate the address for any data fetches; for instance, if we have a pre-modify, this is where the offset would be added to the pointer register. We have a couple of data fetch stages. In DF1, if we're performing a data fetch, the address of the memory location we want to fetch data from is put out on the bus. If we have an arithmetic operation, DF2 is the stage where the registers are read to get the operands.

Then we have two cycles for execute, EX1 and EX2. EX1 is where we start a two cycle operation, for instance a multiply accumulate. In EX2, any single cycle operations or the second part of a multiply accumulate would be done. Finally, the last stage is the write back stage, where any data that was read from memory or any results from calculations are going to be committed at this point in time.

Once the pipeline is filled, which is what we see in this case at cycle N+9, every clock tick results in another instruction exiting the pipeline, or leaving the writeback stage. Effectively every clock tick another instruction is executed, so this is where we get our one instruction per core clock cycle execution speed.

Subchapter 2f: Event Handling


The Blackfin also handles events. Events can be categorized as either interrupts, which are typically hardware generated, say a DMA transfer has completed or a timer has reached zero, though you can also generate software interrupts; or exceptions. Exceptions could be either the result of an error condition, let's say an overflow or a CPLB miss or something like that, or the software requesting service; there's a way for software to generate exceptions intentionally.

The handling of events is split between the core event controller and the system interrupt controller. The core event controller is in the core clock domain; the system interrupt controller is in the SCLK domain. The core event controller has 16 levels associated with it and deals with requests on a priority basis; we'll see this priority on the next slide. The nine lowest levels of the core event controller are basically available for peripheral interrupt requests to be mapped to. Every level has a 32 bit interrupt vector register associated with it. This can be initialized with the starting address of our ISR, so that when that particular level is triggered, that's the address that we're going to start fetching code from.

We also have the SIC, or System Interrupt Controller, portion of the interrupt handler. This is where we enable which peripheral interrupt requests we want to allow to go through. Also, the mapping of a particular peripheral interrupt request to a specific core interrupt level happens here. What this allows us to do is change the priority of a particular peripheral's interrupt request. What we see here is an example using the BF533 interrupt handling mechanism. The left hand side is the system interrupt controller side, and here we have the core event controller side. Again, there are 16 levels at the core, and what we're showing is that these are prioritized: level 0, or IVG0, has the highest priority in the system, and level 15 has the lowest. In case multiple interrupts came in at the same time, the highest priority one would be given the nod.

What we also see over here is that we have quite a few different peripheral interrupt requests possible, and we end up mapping multiple peripheral interrupt requests into, say, one GP level, because we have more peripheral interrupt requests than we have general purpose levels left over. We also see the default map, in other words coming up out of a reset: these particular peripheral interrupt requests are mapped to these specific core levels. What you can do, if in your particular application you've determined that, say, the programmable flag interrupt A must have a higher priority than, say, the real time clock interrupt, is remap using the interrupt assignment registers.


The Blackfin also has variable instruction lengths. There are three main op-code lengths that the Blackfin deals with. 16 bit instructions include most of the control type instructions, as well as data fetches; these are all 16 bits long in order to improve code density. We also have 32 bit instructions. These include most of the control types of instructions that have immediate values, for instance loading a register with an immediate value or things of that sort. Most arithmetic instructions are also 32 bits in length.

There’s also a multi issue 64 bit instruction, and this allows us to combine a 32 bit op-code with a pair of 16 bit instructions. Typically an arithmetic operation with one or two data fetches. What we do is when we combine them as we see in the example below, we have a dual multiple accumulate operation in parallel with a dual 32 bit data fetch. We use this parallel pipe operator to tell the assembler that we’re trying to combine these operations at the same time. What happens is as a 64 bit op-code, these three instructions go through the pipeline together so they all get executed at the same time.

Instruction packing: when code is compiled and linked, the linker places the code into the next available memory space, so instructions aren't padded out, for instance, to the largest possible instruction that's handled on the Blackfin. What this means is there's no wasted memory space. As a programmer you don't have to worry about this; we mentioned earlier the sequencer has an alignment feature, so as we read the 64 bits of instruction from memory, it performs the alignment for us. It pulls out the individual 16, 32, and 64 bit op-codes, and if they happen to straddle an octal address boundary it'll also realign that before passing it on to the rest of the pipeline.

Subchapter 2g: FIR Filter example

Blackfin is optimized for operations such as FIR filtering and FFTs. What we have here is an example of a 16-bit FIR filter. In fact, we have one input data stream and we're applying two filters to it. We're using R0 and R1 as the two input registers: R0 contains two pieces of 16-bit input data, and R1 contains our 16-bit filter coefficients. As we go through, the data sits in the delay line and the filter coefficients are fetched from the coefficient buffer. The first filter coefficient is used with the first two pieces of data, and while the next coefficient is applied to two further pieces of data, we're already fetching another sample. The code example at the bottom shows the two multiply-accumulate instructions in parallel with either a single or a dual data fetch. In addition to performing those data fetches, we're also setting up the pointers to point to the next pieces of data when we're complete.
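A hedged sketch of such an inner loop (assuming I0 points at the delay line, I1 at the coefficient buffer, and P0 holds the loop count; none of these assignments are stated in the transcript):

```asm
/* Zero-overhead loop: each iteration issues a dual MAC alongside either a
   single half-word fetch or a dual fetch, matching the pattern described.
   A0 and A1 accumulate two output samples sharing each coefficient. */
    LSETUP (fir_start, fir_end) LC0 = P0;
fir_start:
    A1 += R0.H * R1.L, A0 += R0.L * R1.L || R0.L = W[I0++];
fir_end:
    A1 += R0.H * R1.H, A0 += R0.L * R1.H || R0.H = W[I0++] || R1 = [I1++];
```

The post-increment addressing on I0 and I1 is what "sets up the pointers" for the next iteration at no extra cost.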


Chapter 3: The Blackfin Bus and Memory Architecture

Subchapter 3a: Overview

Next we're going to get into the bus and memory architecture of the Blackfin. The Blackfin uses a memory hierarchy with the main goal of achieving performance comparable to L1 memory at an overall cost close to that of the least expensive memory, which in this case is L3. What do we mean by hierarchical memory? In a hierarchical memory system you have levels of memory. The memory closest to the core is fastest, but you don't have as much of it. L1 memory has the smallest capacity, but items in L1 can be fetched in a single cycle. Some Blackfin variants have L2 memory: a larger block of memory that's further from the core, so the latency is a little greater. All Blackfins support L3 memory (SDRAM), which is where the bulk of your code or data can be stored: very large capacity, but also higher latency.

Portions of L1 can also be configured as cache, which allows the memory management unit to pre-fetch copies of code or data that live in L2 or L3 and store them in L1, so you get L1 performance. What we're looking at here is the internal bus structure, using the ADSP-BF533 processor as an example. We're going to look at the modified Harvard architecture and talk about some performance aspects of how the fetches work.

Subchapter 3b: Bus Structure

First off we have two different clock domains: the core clock (CCLK) and system clock (SCLK) domains. When we talk about a processor running at 600 MHz, for instance, we're talking about the CCLK frequency. That's the rate at which the core can fetch instructions and data from L1 memory. What we see here are the data paths from L1 to the core. In a single cycle we can fetch 64 bits of instruction, and up to two 32-bit data fetches from L1 can occur at the same time. In many applications the code and data fit into L1, and that's fine: you're always running at the highest performance possible. Of course, if you have a large application you can store parts of your code and data in external memory, in SDRAM. Because we can generate addresses for anywhere within the Blackfin's 32-bit memory space, an instruction address could very well point to something in SDRAM. If that's the case and the core is fetching, say, 64 bits from SDRAM, we're going to go across the external access bus,


through the EBIU to the external port bus, which in this example is a 16-bit-wide bus. To fetch that same 64-bit instruction we're going to require four fetches from SDRAM. Now we're in the SCLK domain, and the SCLK in the BF533 is limited to 133 MHz, so if we're running at 600 MHz the SCLK tops out at 120 MHz: a five-to-one difference in clock rates. Those four cycles we just spent fetching from SDRAM actually cost us 20 core clock cycles. And that assumes we're on the right page in SDRAM; if we have to pre-charge a page and activate a new one, that adds a few more cycles.
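The arithmetic behind those numbers is straightforward:

```latex
\underbrace{\frac{64\ \text{bits}}{16\ \text{bits per access}}}_{4\ \text{SCLK accesses}}
\;\times\;
\underbrace{\frac{600\ \text{MHz (CCLK)}}{120\ \text{MHz (SCLK)}}}_{5\ \text{CCLK cycles per SCLK cycle}}
\;=\; 20\ \text{CCLK cycles}
```

So a fetch that takes one core cycle from L1 costs at least twenty core cycles from SDRAM, before any page pre-charge overhead.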

Subchapter 3c: Configurable Memory

What we see here is that even though we have SDRAM, executing out of SDRAM can be expensive. We do have a couple of mechanisms that let us get around this. DMA is one: you can move in overlays or data, doing memory-to-memory transfers from SDRAM to L1. There's also caching, which we'll talk about momentarily.

The Blackfin has configurable memory, and as we saw from the previous diagram, the best overall performance is achieved when the code you want to fetch or the data you want to process resides in L1. There are two methods we can use to fill L1 memory, caching and dynamic downloading, and the Blackfin processor supports both. Microcontroller programmers typically use the caching method: you just set up the cache in the memory management unit, and any large program automatically benefits because copies of the code are cached in L1.

DSP programmers have typically used dynamic downloading, dealing with code overlays: you anticipate a function you'll want to execute, set up a DMA transfer in the background to move it into L1, and by the time you need it, it's there and you can execute it at L1 speed.

The Blackfin processor allows the programmer to choose one or even both methods to optimize system performance. Again, portions of L1 instruction and data memory can be configured as either SRAM or cache. SRAM means that block of memory appears in your memory map and you have full access to it just by assigning a pointer to it. If it's configured as cache you no longer have direct access to it; instead, the memory management unit decides what goes into that block.

Once you enable the 16K blocks, this does reduce the total amount of L1 memory available as SRAM. However, in the case of instruction memory, you still have code memory left over where you can keep key interrupt service routines or similar functions resident in L1 at all times.

Subchapter 3d: Cache and Memory Management

We're going to talk a bit now about cache and memory management. What is cache? Cache is the first level of the memory hierarchy the core reaches, and it allows users to take advantage of single-cycle memory without having to explicitly move instructions or data. In other words, it takes away the need to set up a DMA and move the data or instructions in the background yourself.

L2 and L3 memory can be used to hold large programs and data sets, and the paths to and from L1 memory are optimized to perform with cache enabled. Once a cache line transfer begins, nothing can interrupt the line fill; it has high priority over other transactions. Cache automatically keeps recently used code and data in L1. There's an algorithm called LRU, least recently used. We have several ways in which a cache line can be stored in internal memory, and once all four ways are filled and we need space for another cache line, the LRU algorithm says that whichever cache line was used the longest time ago is the first to give up its space to the new line.

Cache is managed by the memory management unit through a number of CPLBs: Cacheability Protection Lookaside Buffers. The idea of the CPLBs is that we divide the Blackfin's memory space into a number of regions or pages, and each page can be assigned different protection or cacheability parameters. For instance, you can provide user- or supervisor-mode protection, so certain pages can be accessed only by supervisor code and not user code; read/write access protection; and, in the case of caching, this is how we define which pages of memory should be cached in L1 and which should not.
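As a rough sketch of what installing one such page descriptor looks like (DCPLB_ADDR0 and DCPLB_DATA0 are Blackfin memory-mapped registers, but the flag constant here is a placeholder, and real code must also handle disabling the cache around the update):

```asm
/* Illustrative only: describe one data page starting at address 0 as
   cacheable.  PAGE_FLAGS is a hypothetical constant standing in for the
   real bit values (valid | page size | cacheable | access permissions). */
P0.H = HI(DCPLB_ADDR0);
P0.L = LO(DCPLB_ADDR0);
R0 = 0;                    /* page base address 0x00000000 */
[P0] = R0;
P0.H = HI(DCPLB_DATA0);
P0.L = LO(DCPLB_DATA0);
R0.H = HI(PAGE_FLAGS);     /* hypothetical flag constant */
R0.L = LO(PAGE_FLAGS);
[P0] = R0;
```

Each of the sixteen instruction and sixteen data CPLB pairs is filled the same way, one pair per page descriptor.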

Chapter 4: Additional Blackfin Core Features

Subchapter 4a: DMA


What we're going to do now is talk about some additional Blackfin core features, starting with DMA, or Direct Memory Access. All Blackfin processors have at least one DMA controller; some have several. DMA allows the Blackfin to move data between memory and a peripheral, or between memory and memory, without the core being involved. The core does get involved initially to set up the DMA registers or the descriptors in memory, but after that no core cycles are used to move the data.

The core is interrupted only when the DMA transfer completes. This improves overall efficiency because the core is interrupted once per block of data transferred rather than dealing with each and every piece of information. There are also status registers for each DMA channel, so polling of DMA status can be done without going through the interrupt controller.

The DMA controller has dedicated buses connecting it to the set of DMA-capable peripherals in the Blackfin, to external memory (L3, or SDRAM), and to core memory, so there are dedicated buses for all these different paths. The Blackfin also supports various power management options. One of them is designed to maintain low active power: any peripherals that are not used (you typically have to set a bit to enable them) are automatically powered down, thereby conserving power.

Subchapter 4b: Dynamic Power Management

There's also a dynamic power management mechanism that lets you dynamically change the operating frequency and the core voltage to optimize power for your particular application. For instance, there's an onboard PLL used to multiply the CLKIN frequency up to your desired operating frequency, by as much as 64 times. We can also optimize the core voltage for the desired operating frequency: typically, when you're operating at a lower frequency you don't need as high a core voltage.
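For example, with a 25 MHz input clock (an illustrative value; the actual CLKIN depends on the board) and a PLL multiplier of 24:

```latex
\text{CCLK} = \text{CLKIN} \times \text{multiplier} = 25\ \text{MHz} \times 24 = 600\ \text{MHz}
```

Reprogramming the multiplier at run time is how the operating frequency is stepped up or down to match the workload.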

There are also a number of standby power modes, five altogether. Full-on, as the name implies, doesn't have any real power savings: the PLL is running and you're typically operating at full speed. There's an active mode, in which you're running from the CLKIN frequency and not using the PLL, so this is lower-power operation; both the core and the peripherals are running. You have a sleep mode: in

the sleep mode the PLL is running and the SCLK is typically running at full speed, but the CCLK has stopped. The core has been put into an idle state, just sitting there waiting, for instance, for a peripheral DMA transfer to complete.

There's a deep sleep mode where both the CCLK and SCLK have been shut off. In this case there really is no activity, so we're going to wait for something like the real-time clock to wake us up.

Hibernate is the deepest sleep mode. In this mode, in addition to shutting off both clocks, we also shut down power to the core. Of course, this means we've lost whatever information was in L1, so you wouldn't go into hibernate until after you've saved off any critical information.

Most Blackfin variants have a real-time clock with alarm and wake-up features. The real-time clock is actually a separate subsystem: it has its own clock and its own power supply, and it's not affected by any of the software or hardware resets. In fact, the real-time clock is one of the devices that can wake the Blackfin out of the deepest sleep modes, including hibernate.

What we're looking at here are the types of power savings you can get as your application progresses. We have the ability to optimize power consumption in the different phases of our application. At the start we might require a fair amount of performance, so there are no power savings there, but we might then move to a stage where we can go into a much quieter state. In that case we want to go down to a lower frequency: we can adjust the PLL down, and when we're operating at a lower frequency we generally don't need as high a core voltage. What the chart shows is that just by reducing the frequency you get a reduction in power consumed, but if you can also reduce the voltage, the power reduction can be deeper.
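To make that point concrete with illustrative numbers (the voltages here are assumptions, not datasheet values): dynamic power scales with frequency and with the square of the core voltage, so halving the frequency while dropping the core voltage from 1.2 V to 0.9 V gives

```latex
\frac{P_{\text{new}}}{P_{\text{old}}}
= \frac{f/2}{f} \times \left(\frac{0.9\,\text{V}}{1.2\,\text{V}}\right)^{2}
= 0.5 \times 0.5625 \approx 0.28
```

That is roughly a 72% reduction, versus only 50% from the frequency change alone.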

Power consumption, or at least the dynamic part of it, is proportional to the operating frequency, but it's also proportional to the square of the core voltage. What we see here are the different power mode transitions the Blackfin supports. Coming out of reset, for instance, we go into the full-on state, where the PLL, CCLK, and SCLK are all running, and by controlling bits in the PLL control register we can switch from one state to another. For instance, we can go to the active state by bypassing the PLL, and switch back to the full-on state by re-enabling the PLL via the bypass bit. From either the active or full-on states we can go into sleep mode by shutting down the

core clock, and we can wake up into one of the other states depending on the state of the bypass bit. Similarly, we go into deep sleep mode by shutting down both clocks.

Of course, when the real-time clock wakes you out of deep sleep, you wake up into active mode, where you typically start up the PLL and then transition over to full-on. Also, from either of these states, active or full-on, you can go into the hibernate state by setting the onboard switching regulator to zero frequency, in other words shutting it down. This lets you achieve the deepest power savings when you don't need the Blackfin processor for a particular period of time.

There are a number of different things you can use to wake up from hibernate: the real-time clock, CAN activity, or PHY activity on some of the processors. Now, you don't have to deal with a lot of this at a very low level. There's something called System Services: a set of library functions that let you, with a single function call, control things like the operating frequency or change from one state to another, so much of this is isolated from your application.

Subchapter 4c: Blackfin Hardware Debug Support

Next we're going to talk a bit about the type of hardware debug support that's available. The Blackfin processor has a JTAG port, and one of its uses is in-circuit emulation. This allows non-intrusive debugging: your application runs at full speed on the target while we watch from the outside. Of course, once the processor stops we can go in, read memory and registers, and change things. There's also BTC, or Background Telemetry Channel, support in the VisualDSP++ tools. While the processor is running, if we want to get some information from a memory buffer and display it in VisualDSP++, we don't have to stop the processor. Instead, we define which buffers we want to share, link in the BTC library, and execute a small bit of code, and then we can exchange data for debug purposes with a running target, which is very useful during the initial stages of code development.

There's additional hardware debug support: hardware breakpoints are built into the Blackfin. Eight registers allow you to set up six instruction addresses and two data addresses, so when an instruction fetch or data access matches one of those addresses, we can generate an exception and trap it. We can also use pairs of registers to specify an address range and break when a fetch falls within, or outside, that range.


There's a performance monitor unit, actually a pair of 32-bit counters, which can be assigned to count a number of different performance parameters, like how many cycles you spend waiting for data fetches, how many interrupts occur, and so on. There's also an execution trace buffer, 16 locations deep. Any time there's a change in program flow, two pieces of information are stored: the address we were at when the branch occurred, and the address we branched to. This covers interrupts, jumps, subroutine calls, and the first iteration of a loop.

These last three features, by the way, are available even without anything connected to the JTAG port, so you can build them into your own system.

Chapter 5: Conclusion

Subchapter 5a: Summary

In summary, I'd just like to make a few points. The Blackfin core has a number of features and instructions that support both control and DSP types of operations, and it's important to note this is all within one core. The high-performance memory and bus architecture supports zero-wait-state instruction and data fetches from L1 at the core clock rate, and those three fetches (one instruction, two data) all occur at the same time. If you have large applications that reside in the larger but slower external memory, they can still benefit from high-performance L1 through the use of caching and/or dynamic downloading: even though your application doesn't totally fit in L1, you still get the benefits with those features. The Blackfin architecture enables both efficient implementation of media-related applications and efficient development of those applications.

Subchapter 5b: Resources

To wrap up, I'd just like to point you to some additional resources you might find useful. For additional information on the Blackfin architecture, please refer to the Analog Devices website, which has links to manuals, data sheets, frequently asked questions, the knowledge base, sample code, development tools, and much, much more. www.analog.com/blackfin is where you can go to start with that.

Linux内核修改与编译图文教程

Linux 内核修改与编译图文教程 1

1、实验目的 针对Ubuntu10.04中,通过下载新的内核版本,并且修改新版本内核中的系统调用看,然后,在其系统中编译,加载新内核。 2、任务概述 2.1 下载新内核 https://www.doczj.com/doc/8515314955.html,/ 2.2 修改新内核系统调用 添加新的系统调用函数,用来判断输入数据的奇偶性。 2.3 进行新内核编译 通过修改新版内核后,进行加载编译。最后通过编写测试程序进行测试 3、实验步骤 3.1 准备工作 查看系统先前内核版本: (终端下)使用命令:uname -r 2

3.2 下载最新内核 我这里使用的内核版本是 3.3 解压新版内核 将新版内核复制到“/usr/src”目录下 在终端下用命令:cd /usr/src进入到该文件目录 解压内核:linux-2.6.36.tar.bz2,在终端进入cd /usr/src目录输入一下命令: bzip2 -d linux-2.6.36.tar.bz2 tar -xvf linux-2.6.36.tar 文件将解压到/usr/src/linux目录中 3

使用命令: ln -s linux-2.6.36 linux 在终端下输入一下命令: sudo apt-get install build-essential kernel-package libncurses5-dev fakeroot sudo aptitude install libqt3-headers libqt3-mt-dev libqt3-compat-headers libqt3-mt 4

美国救市措施一览2008年11月26日105

美国救市措施一览2008年11月26日10:54 [我来说两句] [字号:大中小] 来源:人民网-国际金融报3月11日美联储宣布向银行和投行提供上限2000亿美元的援助贷款。 3月16日美联储向摩根大通公司提供290亿美元资金,以支持其收购投行贝尔斯登。 7月11日美国监管机构接管破产银行IndyMac(印第麦克),美国联邦储蓄保险公司耗资数十亿美元补偿储户损失。 7月30日美国总统布什签署一项法案,允许政府相关机构提供总额3000亿美元新贷款,以支持陷入财务困境的购房者获得条件更优惠的抵押贷款。 9月7日美国财政部接管房贷融资巨头房利美和房地美,并提供高达2000亿美元援助资金。 9月16日美联储再向金融系统注资700亿美元,以缓解信贷紧缩。同日,美联储向摇摇欲坠的保险巨头美国国际集团提供850亿美元资金。 9月19日美国财政部提供500亿美元资金,以对货币市场基金提供短期支持。 9月29日美联储向全球其他央行提供3300亿美元。至此,该机构通过货币互换协议方式向其他央行提供的美元资金达到6200亿。 10月3日美国总统布什签署7000亿美元金融救援计划。 10月6日美联储将针对银行的短期贷款额度提高到1500亿美元。该机构还表示将对银行的存款准备金支付利息。 10月8日美联储降息0.5个百分点,至1.5%。同日,美联储同意向美国国际集团再提供378亿美元资金,使政府对其注资总额超过近1200亿美元。 10月14日美国财政部说将使用7000亿美元金融救援资金中的2500亿美元购买银行股权,其中1250亿美元将按比例注入美国9家大型银行。 10月21日美联储表示其将提供上限5400亿美元的融资,以保持货币市场共同基金的流动性。 10月29日美联储降息至1%。此为1958年以来美国基准利率的最低水平。 11月10日美国财政部和美联储调整对美国国际集团的援助方案,将资金支持额度

面向服务的软件体系架构总体设计分析

面向服务的软件体系架构总体设计分析 计算机技术更新换代较为迅速,软件开发也发生较多改变,传统软件开发体系已经无法满足当前对软件生产的需求。随着计算机不断普及,软件行业必须由传统体系向面向服务架构转变。随着软件应用范围不断增大,难度逐渐上升,需要通过成本手段,提高现有资源利用率。通过面向服务体系结构可提高软件行业应对敏捷性,实现软件生产的规模化、产业化、流水线化。 1 软件危机的表现 1.1 软件成本越来越高 计算机最初主要用作军事领域,其软件开发主要由国家相关部分扶持,因此无需考虑软件开发成本。随着计算机日益普及,计算机已经深入到人们生活中,软件开发大多面向民用,因此软件开发过程中必须考虑其开发成本,且计算机硬件成本出现跳水现象,由此导致软件开发成本比例不断提升。 1.2 开发进度难以控制 软件属于一种智力虚拟产品,软件与其他产品最大不同是其存在前提为内在逻辑关系。相较于计算机硬件粗生产情况,传统工作中的加班及倒班无法应用到软件开发中,提升软件开发进度无法通过传统生产方法实现。且在软件开发过程中会出现一些意料不到的因素,影响软件开发流程,导致软件开发未按照预期计划展开。由此可见不仅软件项目开发难度不断增加,软件系统复杂复杂性也不断提升,即使增加

开发人手也未必能取得良好效果。 1.3 软件质量难以令人满意 软件开发另一常见问题就是在软件开发周期内将产品开发出来,但软件本身表现出的性能却未达到预期目标,难以满足用户多方位需求。该问题属于软件行业开发通病,当软件程序出现故障时会导致巨大损失。在此过程中软件开发缺乏有效引导,开发人员在开发过程中往往立足于自身想法展开软件开发,因此软件开发具有较强主观性,与客户想法不一致,因此导致软件产品质量难以让客户满意。 1.4 软件维护成本较高 与硬件设施一样,软件在使用过程中需要对其进行维护。软件被开发出来后首先进行公测,发现其软件存在的问题,并对其重新编辑提升软件性能,从而为客户提供更好服务。其次软件需要定时更新,若程序员在开发过程中并未按照相关标准执行会导致其缺乏技术性文档,提升软件使用过程中的维护难度。另外在新增或更新软件过程中可能导致出现新的问题,影响软件正常使用,并可能造成新的问题。由此可见软件开发成功后仍旧需要花费较高成本进行软件维护。 2 面向服务体系架构原理 2.1 面向服务体系架构定义 面向服务体系构架从本质上是一种应用体系架构,体系所有功能均是一种独立服务,所有服务均通过自己的可调用接口与程序相连,因此可通过服务理论实现相关服务的调动。面向服务体系构架从本质上来说就是为一种服务,是服务方通过一系列操作后满足被服务方需求的

面向对象中包括哪些UML图及每件图的作用

面向对象中包括哪些UML图及每件图的作用UML面向对象分析及其包括的图、建模步骤 一、叙述基于UML的面向对象分析设计过程 1)识别系统的用例和角 首先对项目进行需求调研,依据项目的业务流程图和数据流程图以及项目中涉及的各级操作人员,通过分析,识别出系统中的所有用例和角色;接着分析系统中各角色和用例间的联系,再使用UML建模工具画出系统的用例图,同时,勾画系统的概念层模型,借助UML 建模工具描述概念层类图和活动图。 2)进行系统分析,并抽象出类 系统分析的任务是找出系统中所有需求并加以描述,同时建立特定领域模型。建立域模型有助于开发人员考察用例,从中抽取出类,并描述类之间的关系。 3)设计系统和系统中的类及其行为 设计阶段由结构设计和详细设计组成。①结构设计是高层设计,其任务是定义包(子系统),包括包间的依赖关系和主要通信机制。包有利于描述系统的逻辑组成部分以及各部分之间的依赖关系。②详细设计就是要细化包的内容,清晰描述所有的类,同时使用UML的动态模型描述在特定环境下这些类的实例的行为。 二、面向对象中包括哪些UML图及每件图的作用 UML图包括九种:用例图、类图、对象图、状态图、时序图、协作图、活动图、组件图、配置图。 1)用例图(UseCaseDiagram) 它是UML中最简单也是最复杂的一种UML图。说它简单是因为它采用了面向对象的思想,又是基于用户视角的,绘制非常容易,简单的图形表示让人一看就懂。说它复杂是因为用例图往往不容易控制,要么过于复杂,要么过于简单。 用例图表示了角色和用例以及它们之间的关系。 2)类图(ClassDiagram) 是最常用的一种图,类图可以帮助我们更直观的了解一个系统的体系结构。通过关系和类表示的类图,可以图形化的方式描述一个系统的设计部分。

如何自行编译一个Linux内核的详细资料概述

如何自行编译一个Linux内核的详细资料概述 曾经有一段时间,升级Linux 内核让很多用户打心里有所畏惧。在那个时候,升级内核包含了很多步骤,也需要很多时间。现在,内核的安装可以轻易地通过像 apt 这样的包管理器来处理。通过添加特定的仓库,你能很轻易地安装实验版本的或者指定版本的内核(比如针对音频产品的实时内核)。 考虑一下,既然升级内核如此容易,为什么你不愿意自行编译一个呢?这里列举一些可能的原因: 你想要简单了解编译内核的过程 你需要启用或者禁用内核中特定的选项,因为它们没有出现在标准选项里 你想要启用标准内核中可能没有添加的硬件支持 你使用的发行版需要你编译内核 你是一个学生,而编译内核是你的任务 不管出于什么原因,懂得如何编译内核是非常有用的,而且可以被视作一个通行权。当我第一次编译一个新的Linux 内核(那是很久以前了),然后尝试从它启动,我从中(系统马上就崩溃了,然后不断地尝试和失败)感受到一种特定的兴奋。 既然这样,让我们来实验一下编译内核的过程。我将使用Ubuntu 16.04 Server 来进行演示。在运行了一次常规的 sudo apt upgrade 之后,当前安装的内核版本是 4.4.0-121。我想要升级内核版本到 4.17,让我们小心地开始吧。 有一个警告:强烈建议你在虚拟机里实验这个过程。基于虚拟机,你总能创建一个快照,然后轻松地从任何问题中回退出来。不要在产品机器上使用这种方式升级内核,除非你知道你在做什么。 下载内核 我们要做的第一件事是下载内核源码。在 Kernel 找到你要下载的所需内核的URL。找到URL 之后,使用如下命令(我以 4.17 RC2 内核为例)来下载源码文件: wget https://git.kernel/torvalds/t/linux-4.17-rc2.tar.gz

面向对象框架技术及应用

面向对象框架技术及应用 面向对象框架技术是软件重用的一种重要方式。本文以面向对象开发方法为基础,结合防空C I通信网仿真系统,介绍了开发特定领域应用框架的方法。 引言 在现代软件工程中,软件重用已经成为其中一个主要目标。代码重用通过面向对象语言的继承机制和编译技术已成为 现实。随着面向对象技术的日趋成熟,像这样低层次的复用已经不适合于特定领域大型软件生产的需求。为了提高软件生产过程的重用力度,软件领域的先驱者们开始进行一种新的尝试来提高软件生产力,他们不仅要重用旧的代码,而且要重用相似的分析设计结果和体系结构,来减少构造新软件系统的代价并提高软件的可靠性。基于框架的方式就是这样一种面向特定领域的重用技术。 框架由于提供了大力度的重用而被认为是一种最有前途的 面向对象技术。单独的类的重用,尽管有用,但由于重用力度小而不具备有意义的生产力的飞跃,只有把特定领域的体系结构作为一个整体进行重用才能取得引人注目的成就。 在仿真领域中,面向对象使得映射问题域到方案域变得很容易。方法和数据可以绑定到面向对象风格的程序中。仿真领域中的一个具体的实体都可以作为一个主动或被动对象,因

此采用面向对象技术来解决仿真问题是明智的。本文将结合建立C3I通讯子网仿真来讨论建立面向对象框架的方法和步骤。 ■面向对象框架 1.什么是面向对象框架 一个面向对象框架是指在特定领域中的应用软件的半成品。框架是对于那些试图在他们所关心的领域构造一个复杂软 件系统的用户而言的。因为它是处于特定领域中,所以应用系统的体系结构在许多不同的方面具有一定的相似性。框架利用一系列的对象和它们之间的接口来对应静态和恒定结 构的端口,并保留友好界面使用户能够很容易完成变化的、不稳定的剩余部分而得到一个新应用程序。任何框架都是特定领域的框架,一个框架可以包含一个或多个模式。 一般来说,如图1所示,框架定义了一个应用程序的骨架并提供可以放置于该骨架中的标准用户界面实现。作为一个程序员,你的工作只是在骨架中填入你的应用程序中特定的部分。目前有关面向对象框架尚未形成一个严格而精确的定义,国外著名的软件设计大师Ralph Johnson 教授对面向对象 技术进行了长期而深入的研究,在他写的许多关于面向对象的论文中对框架进行了如下定义: 图1 特定领域的框架

嵌入式Linux系统内核的配置、编译和烧写

实验二 嵌入式Linux系统内核的配置、编译和烧写 1.实验目的 1)掌握交叉编译的基本概念; 2)掌握配置和编译嵌入式Linux操作系统内核的方法; 3)掌握嵌入式系统的基本架构。 2.实验环境 1)装有Windows系统的计算机; 2)计算机上装有Linux虚拟机软件; 3)嵌入式系统实验箱及相关软硬件(各种线缆、交叉编译工具链等等)。 3.预备知识 1)嵌入式Linux内核的配置和裁剪方法; 2)交叉编译的基本概念及编译嵌入式Linux内核的方法; 3)嵌入式系统的基本架构。 4.实验内容和步骤 4.1 内核的配置和编译——配置内核的MMC支持 1)由于建立交叉编译器的过程很复杂,且涉及汇编等复杂的指令,在这里 我们提供一个制作好的编译器。建立好交叉编译器之后,我们需要完成 内核的编译,首先我们要有一个完整的Linux内核源文件包,目前流行 的源代码版本有Linux 2.4和Linux 2.6内核,我们使用的是Linux 2.6内核; 2)实验步骤: [1]以root用户登录Linux虚拟机,建立一个自己的工作路径(如用命令 “mkdir ‐p /home/user/build”建立工作路径,以下均采用工作路径 /home/user/build),然后将“cross‐3.3.2.tar.bz2、dma‐linux‐2.6.9.tar.gz、 dma‐rootfs.tar.gz”拷贝到工作路径中(利用Windows与虚拟机Linux 之间的共享目录作为中转),并进入工作目录; [2]解压cross‐3.3.2.tar.bz2到当前路径:“tar ‐jxvf cross‐3.3.2.tar.bz2”; [3]解压完成后,把刚刚解压后在当前路径下生成的“3.3.2”文件夹移 动到“/usr/local/arm/”路径下,如果在“/usr/local/”目录下没有“arm” 文件夹,用户创建即可; [4]解压“dma‐linux‐2.6.9.tar.gz”到当前路径下:

08年11月二级理财专业能力试题及答案

2008年11月理财规划师二级专业能力 (1-100题,共100道题,满分为100分) 一、单项选择题(1-70题,每题1分,共70分。每小题只有一个最恰当的答案,请在答题卡上将所选答案的相应字母涂黑) 1、小王购买了1000元的货币市场基金,下列选项不属于影响货币市场基金收益率的主要因素的是( B )。 A 利率因素 B 规模因素 C 收益率趋同趋势 D 成本因素 2、张先生想办一张信用卡,于是向理财规划师了银信用卡的有关功能。下列选项不属于信用卡功能的是( B )。 A 符合条件的免息透支 B 储蓄存款 C 预借现金 D 免息分期付款 3、孙先生两年前购买了一份人寿保险合同,保单目肖的现金价值达到10万元,由于一时资金紧张,孙先生准备向保险公司申请质押贷款。则孙先生的贷款额度大约为()万元。 A 7 B 10 C 9 D 11 4、王先生的生意最近资金紧张,所以准备通过典当自己的动产和不动产来融通资金。王先生了银到可以进行典当的动产、不动产通常有三种,则下列选项不必呈其中之一的是()。 A 汽车典当 B 房产典当 C 债券典当 D 股票典当 5、刘先生的下列负债中,不属于中长期负债的是( B )。 A 房贷余额 B 医疗欠费 C 车贷余额 D 创业贷款余额 6、个人汽车消费贷款的年限是3-5年,汽车消费贷款的首付款不得低于所购车辆价格的()。 A 20% B 25% C 30% D 35% 7、随着时代的发展,大件耐用消费品也可以通过贷款进行购买,此种贷款被称为个人耐用消费品贷款。下列关于个人耐用消费品贷款的贷款额度的说法错误的是()。 A 贷款起点不低于人民币3000元(不含3000元) B 最高贷款额度不超过人民币5万元(含5万元) C 采取抵押方式担保的,贷款额度不得超过抵押物评估价值的70% D 采邓质押方式担保的,贷款额度不得超过质押权利票面价值的90% 8、刘先生采用抵押担保方式向银行申请个人耐用消费品贷款,则刘先生能够获得的贷款额度不超过()。 A 抵押物评估价值的40% B抵押物评估价值的60% C 抵押物评估价值的70% D 抵押物评估价值的80% 资料:张先生的儿子小强今年8岁,预计18岁上大学,张先生计划将儿子送往国外进行高等教育,预计学费共需800000元,假设学费上涨率每年为5%。张先生目前打算用20000元作为儿子的教育启动资金,这笔资金的年投资收益率为10%。 根据资料回答9-11题: 9、小强上大学时所需的教育费为()元。 A 1303115.70 B 1380927.68 C 1397742.70 D 1580872.68 10、这笔教育启动资金到小强上大学时的终值为()元。 A 39187.45 B 41926.85 C 48192.45 D 51874.85 11、如果张先生采定期定投的方式积累儿子的教育金,每年末需投入()元。 A 74208.35 B 78509.60 C 78208.35 D 80509.60 12、我国学生贷款政策主要包括三种贷款形式,下列选项不属于这三种贷款形式的是()。 A 留学贷款 B 学校学生贷款 C 国家助学贷款 D 一般性商业助学贷款 13、小宋从理财规划师那里了解到传统的教育规划工具主要有()。 A 教育储蓄和学校贷款 B 教育储蓄和政府贷款 C 教育储蓄和教育保险 D 教育保险和银行贷款 14、王先生向理财规划师咨询子女教育规划问题,则理财规划师在帮助王先生选择教育理财产品时应该考虑的因素不包括()。 A 理财产品的安全性 B 理财产品的收益性 C 利率变动的风险 D 王先生的风险偏好

linux、内核源码、内核编译与配置、内核模块开发、内核启动流程

linux、内核源码、内核编译与配置、内核模块开发、内核启动流程(转) linux是如何组成的? 答:linux是由用户空间和内核空间组成的 为什么要划分用户空间和内核空间? 答:有关CPU体系结构,各处理器可以有多种模式,而LInux这样的划分是考虑到系统的 安全性,比如X86可以有4种模式RING0~RING3 RING0特权模式给LINUX内核空间RING3给用户空间 linux内核是如何组成的? 答:linux内核由SCI(System Call Interface)系统调用接口、PM(Process Management)进程管理、MM(Memory Management)内存管理、Arch、 VFS(Virtual File Systerm)虚拟文件系统、NS(Network Stack)网络协议栈、DD(Device Drivers)设备驱动 linux 内核源代码 linux内核源代码是如何组成或目录结构? 答:arc目录存放一些与CPU体系结构相关的代码其中第个CPU子目录以分解boot,mm,kerner等子目录 block目录部分块设备驱动代码 crypto目录加密、压缩、CRC校验算法 documentation 内核文档 drivers 设备驱动 fs 存放各种文件系统的实现代码 include 内核所需要的头文件。与平台无关的头文件入在include/linux子目录下,与平台相关的头文件则放在相应的子目录中 init 内核初始化代码 ipc 进程间通信的实现代码 kernel Linux大多数关键的核心功能者是在这个目录实现(程序调度,进程控制,模块化) lib 库文件代码 mm 与平台无关的内存管理,与平台相关的放在相应的arch/CPU目录net 各种网络协议的实现代码,注意而不是驱动 samples 内核编程的范例 scripts 配置内核的脚本 security SElinux的模块 sound 音频设备的驱动程序 usr cpip命令实现程序 virt 内核虚拟机 内核配置与编译 一、清除 make clean 删除编译文件但保留配置文件

(9)2008年11月北京成人学位英语真题(A)卷及答案

2008年11月北京地区成人本科学士学位 英语统考(A卷)题 2008-11-22 Part I Reading Comprehension (30%) Directions: There are three passages in this part. Each passage is followed by some questions or unfinished statements. For each of them there are four choices marked A, B, C, and D. You should decide

on the best choice and blacken the corresponding letter on the Answer Sheet. Passage 1 Questions 1 to 5 are based on the following passage: Scientists in India have invented a new way to produce electricity. Their invention does not get its power from oil, coal or other fuels. It produces electricity with the power of animals. India has about eighty million bullocks. They do all kinds of jobs. They work in the fields. They pull vehicles through the streets. They carry water containers. (76)Indian energy officials have been seeking ways to use less imported oil to provide energy. Scientists at the National Institute for Industrial Engineering in Bombay wondered whether the millions of bullocks could help. Many villages in India lack

linux内核编译和生成makefile文件实验报告

操作系统实验报告 姓名:学号: 一、实验题目 1.编译linux内核 2.使用autoconf和automake工具为project工程自动生成Makefile,并测试 3.在内核中添加一个模块 二、实验目的 1.了解一些命令提示符,也里了解一些linux系统的操作。 2.练习使用autoconf和automake工具自动生成Makefile,使同学们了解Makefile的生成原理,熟悉linux编程开发环境 三、实验要求 1使用静态库编译链接swap.c,同时使用动态库编译链接myadd.c。可运行程序生成在src/main目录下。 2要求独立完成,按时提交 四、设计思路和流程图(如:包括主要数据结构及其说明、测试数据的设计及测试结果分析) 1.Makefile的流程图: 2.内核的编译基本操作 1.在ubuntu环境下获取内核源码 2.解压内核源码用命令符:tar xvf linux- 3.18.12.tar.xz 3.配置内核特性:make allnoconfig 4.编译内核:make 5.安装内核:make install

6.测试:cat/boot/grub/grub.conf 7.重启系统:sudo reboot,看是否成功的安装上了内核 8.详情及结构见附录 3.生成makefile文件: 1.用老师给的projec里的main.c函数。 2.需要使用automake和autoconf两个工具,所以用命令符:sudo apt-get install autoconf 进行安装。 3.进入主函数所在目录执行命令:autoscan,这时会在目录下生成两个文件 autoscan.log和configure.scan,将configure.Scan改名为configure.ac,同时用gedit打开,打开后文件修改后的如下: # -*- Autoconf -*- # Process this file with autoconf to produce a configure script. AC_PREREQ([2.69]) AC_INIT([FULL-PACKAGE-NAME], [VERSION], [BUG-REPORT-ADDRESS]) AC_CONFIG_SRCDIR([main.c]) AC_CONFIG_HEADERS([config.h]) AM_INIT_AUTOMAKE(main,1.0) # Checks for programs. AC_PROG_CC # Checks for libraries. # Checks for header files. # Checks for typedefs, structures, and compiler characteristics. # Checks for library functions. AC_OUTPUT(Makefile) 4.新建Makefile文件,如下: AUTOMAKE_OPTIONS=foreign bin_PROGRAMS=main first_SOURCES=main.c 5.运行命令aclocal 命令成功之后,在目录下会产生aclocal.m4和autom4te.cache两个文件。 6.运行命令autoheader 命令成功之后,会在目录下产生config.h.in这个新文件。 7.运行命令autoconf 命令成功之后,会在目录下产生configure这个新文件。 8.运行命令automake --add-missing输出结果为: Configure.ac:11:installing./compile’ Configure.ac:8:installing ‘.install-sh’ Configure.ac:8:installing ‘./missing’ Makefile.am:installing ‘./decomp’ 9. 命令成功之后,会在目录下产生depcomp,install-sh和missing这三个新文件和执行下一步的Makefile.in文件。 10.运行命令./configure就可以自动生成Makefile。 4.添加内核模块

2008年11月CATTI三级笔译实务真题(英译汉部分、附答案)
