Computer Operating Systems Tutorial (Internals and Design Principles): Exercise Set 5 Answers
Chapter 1 Computer System Overview

1.1 List and briefly define the four main elements of a computer.
Main memory, which stores data and programs; the arithmetic logic unit (ALU), which can process binary data; the control unit, which interprets the instructions in memory and causes them to be executed; and input/output (I/O) equipment, operated by the control unit.

1.2 Define the two main categories of processor registers.
User-visible registers: these enable the machine-language or assembly-language programmer to minimize references to main memory by optimizing register use. For high-level languages, an optimizing compiler decides which variables to assign to registers and which to main memory. Some high-level languages, such as C, allow the programmer to suggest to the compiler which variables should be held in registers.
Control and status registers: these are used to control the operation of the processor, and are used primarily by privileged operating-system routines to control the execution of programs.

1.3 In general terms, what are the four distinct actions that a machine instruction can specify?
Processor-memory: data may be transferred from the processor to memory or from memory to the processor.
Processor-I/O: data may be transferred to or from a peripheral device by transferring between the processor and an I/O module.
Data processing: the processor may perform some arithmetic or logic operation on data.
Control: an instruction may specify that the sequence of execution be altered.

1.4 What is an interrupt?
An interrupt is a mechanism by which other modules (I/O, memory) may interrupt the normal sequencing of the processor.

1.5 How are multiple interrupts dealt with?
There are two approaches to handling multiple interrupts. The first is to disable further interrupts while an interrupt is being processed. The second is to define interrupt priorities and allow an interrupt of higher priority to preempt the handler of a lower-priority interrupt.

1.6 What characteristics distinguish the various elements of a memory hierarchy?
The three important characteristics of memory are cost, capacity, and access time.

1.7 What is cache memory?
Cache memory is a memory that is smaller and faster than main memory and is interposed between the processor and main memory; it acts as a buffer for recently accessed memory locations.
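The answer above only defines what a cache is. As a purely illustrative aid (not part of the textbook answer), the sketch below shows how a direct-mapped cache might split an address into tag, index, and offset; the line size and line count are assumed values.

```c
/* Illustrative sketch (not from the textbook): how a direct-mapped cache
 * could split a 32-bit address into tag, line index, and byte offset.
 * The parameters (64-byte lines, 1024 lines) are arbitrary assumptions. */
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 64u    /* bytes per cache line (assumed)  */
#define NUM_LINES 1024u  /* number of lines in the cache (assumed) */

int main(void) {
    uint32_t addr   = 0x0040A37Cu;                     /* example address */
    uint32_t offset = addr % LINE_SIZE;                /* byte within the line */
    uint32_t index  = (addr / LINE_SIZE) % NUM_LINES;  /* which cache line */
    uint32_t tag    = addr / (LINE_SIZE * NUM_LINES);  /* identifies the memory block */
    printf("addr=0x%08X -> tag=0x%X index=%u offset=%u\n",
           addr, tag, index, offset);
    return 0;
}
```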
1.8 List and briefly define three techniques for performing I/O.
Programmed I/O: when the processor encounters an I/O-related instruction while executing a program, it issues a command to the appropriate I/O module; it then busy-waits until the operation is complete before proceeding.
Interrupt-driven I/O: when the processor encounters an I/O-related instruction, it issues a command to the appropriate I/O module and continues to execute subsequent instructions; it is interrupted by the I/O module when the latter has completed its work.
Direct memory access (DMA): a DMA module controls the exchange of data between main memory and an I/O module; the processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.
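To make the contrast between the first two techniques concrete, here is a minimal sketch in C added for illustration; the "device registers" are ordinary variables standing in for hypothetical memory-mapped registers, and no real device or bit layout is implied.

```c
/* Minimal sketch contrasting programmed (busy-wait) I/O with the shape of
 * interrupt-driven I/O. The simulated registers below stand in for
 * hypothetical memory-mapped device registers. */
#include <stdint.h>
#include <stdio.h>

static volatile uint8_t dev_status = 0x01; /* bit 0 = READY (simulated) */
static volatile uint8_t dev_data   = 0;    /* data register (simulated) */
#define READY_BIT 0x01u

/* Programmed I/O: issue the command, then busy-wait until the device is
 * ready again. The processor does no useful work while it spins. */
static void programmed_write(uint8_t byte) {
    dev_data = byte;                        /* issue the "command" */
    while ((dev_status & READY_BIT) == 0) { /* busy-wait for completion */
        /* spin */
    }
}

/* Interrupt-driven I/O: the processor issues the command and continues with
 * other instructions; this handler would be invoked by the device interrupt
 * when the transfer completes. */
static volatile int transfer_done = 0;

static void device_isr(void) {  /* would be registered with the interrupt controller */
    transfer_done = 1;          /* record completion */
}

int main(void) {
    programmed_write(0x42);     /* returns at once because READY is simulated */
    device_isr();               /* stand-in for the hardware interrupt firing */
    printf("data=0x%02X done=%d\n", dev_data, transfer_done);
    return 0;
}
```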
Chapter 2 Exercise Solutions (Operating System Overview)

2.1 The answers are the same for cases (a) and (b). Assume that although processor operations cannot overlap, I/O operations can.
1 job: turnaround time = NT; processor utilization = 50%.
2 jobs: turnaround time = NT; processor utilization = 100%.
4 jobs: turnaround time = (2N - 1)NT; processor utilization = 100%.

2.2 I/O-bound programs use relatively little processor time and are therefore favored by the short-term scheduling algorithm. However, if a processor-bound program is denied processor time for a sufficiently long period, the same algorithm will grant the processor to it, because it has not used the processor at all in the recent past. Therefore, a processor-bound program is not permanently denied the processor time it needs.

2.3 With time sharing, the concern is turnaround time. Time slicing is preferred, because it gives all programs access to the processor over a short period of time. In a batch system, the concern is with throughput and with keeping context switches to a minimum, so that processes get more processing time. Therefore, policies that minimize context switching are favored.

2.4 An application program uses a system call to invoke a function provided by the operating system. Typically, the system call results in a transfer to a system program that runs in kernel mode.
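As a small illustration of the answer to 2.4 (added here, not taken from the textbook), the following C program invokes operating-system services through POSIX system calls; each call traps from user mode into kernel mode.

```c
/* Minimal sketch of an application invoking OS services through system calls
 * (POSIX, assuming a Unix-like system): write() and getpid() each enter the
 * kernel, which performs the service in kernel mode. */
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* write() is a thin wrapper around the write system call */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    /* getpid() also enters the kernel to read per-process state */
    printf("my pid is %ld\n", (long)getpid());
    return 0;
}
```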
Chapter 3 Exercise Solutions (Process Description and Control)

3.1
• Creation and deletion of both user and system processes: processes in the system can execute concurrently for information sharing, computation speedup, modularity, and convenience. Concurrent execution requires a mechanism for process creation and deletion. The resources a process needs are given to it when it is created or are allocated to it while it is running. When the process terminates, the operating system needs to reclaim any reusable resources.
• Suspension and resumption of processes: in process scheduling, when a process is waiting for some resource, the operating system needs to change its state to waiting or ready. When the required resource becomes available, the operating system changes its state to running so that it resumes execution.
• Provision of mechanisms for process synchronization: cooperating processes share data, and concurrent access to shared data may result in data inconsistency. The operating system has to provide a process synchronization mechanism to ensure the orderly execution of cooperating processes, so that data consistency is maintained.
• Provision of mechanisms for process communication: processes executing under the operating system may be either independent or cooperating. Cooperating processes must use some mechanism for inter-process communication.
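The following minimal POSIX sketch (an illustration added here, not part of the original answer) touches three of the services listed in 3.1: process creation with fork(), inter-process communication over a pipe(), and resource reclamation with wait().

```c
/* Minimal sketch: the parent creates a child with fork(), sends it a message
 * over a pipe, and then reclaims the child's resources with wait(). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                       /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();              /* process creation */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                  /* child: cooperating process */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        _exit(0);
    } else {                         /* parent */
        const char msg[] = "shared data";
        close(fd[0]);
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                  /* OS reclaims the child's resources */
    }
    return 0;
}
```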
Operating Systems: Internals and Design Principles (Fifth Edition) end-of-chapter answers

CHAPTER 2 OPERATING SYSTEM OVERVIEW

Review Questions

2.1 Convenience: An operating system makes a computer more convenient to use. Efficiency: An operating system allows the computer system resources to be used in an efficient manner. Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.

2.5 The execution context, or process state, is the internal data by which the operating system is able to supervise and control the process. This internal information is separated from the process, because the operating system has information not permitted to the process. The context includes all of the information that the operating system needs to manage the process and that the processor needs to execute the process properly. The context includes the contents of the various processor registers, such as the program counter and data registers. It also includes information of use to the operating system, such as the priority of the process and whether the process is waiting for the completion of a particular I/O event.

Problems

2.1 The answers are the same for (a) and (b). Assume that although processor operations cannot overlap, I/O operations can.
1 job: TAT = NT, processor utilization = 50%
2 jobs: TAT = NT, processor utilization = 100%
4 jobs: TAT = (2N - 1)NT, processor utilization = 100%

2.4 A system call is used by an application program to invoke a function provided by the operating system. Typically, the system call results in transfer to a system program that runs in kernel mode.

CHAPTER 3 PROCESS DESCRIPTION AND CONTROL

Review Questions

3.5 Swapping involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in the Ready state, the operating system swaps one of the blocked processes out onto disk into a suspend queue, so that another process may be brought into main memory to execute.

3.10 The user mode has restrictions on the instructions that can be executed and the memory areas that can be accessed. This is to protect the operating system from damage or alteration. In kernel mode, the operating system does not have these restrictions, so that it can perform its tasks.

Problems

3.1
• Creation and deletion of both user and system processes. The processes in the system can execute concurrently for information sharing, computation speedup, modularity, and convenience. Concurrent execution requires a mechanism for process creation and deletion. The required resources are given to the process when it is created, or allocated to it while it is running. When the process terminates, the OS needs to reclaim any reusable resources.
• Suspension and resumption of processes. In process scheduling, the OS needs to change the process's state to waiting or ready when it is waiting for some resource. When the required resources are available, the OS needs to change its state to running to resume its execution.
• Provision of mechanisms for process synchronization. Cooperating processes may share data. Concurrent access to shared data may result in data inconsistency. The OS has to provide mechanisms for process synchronization to ensure the orderly execution of cooperating processes, so that data consistency is maintained.
• Provision of mechanisms for process communication. The processes executing under the OS may be either independent processes or cooperating processes. Cooperating processes must have the means to communicate with each other.
• Provision of mechanisms for deadlock handling. In a multiprogramming environment, several processes may compete for a finite number of resources. If a deadlock occurs, all waiting processes will never change their waiting state to running again; resources are wasted and jobs will never be completed.

3.3 Figure 9.3 shows the result for a single blocked queue. The figure readily generalizes to multiple blocked queues.

CHAPTER 4 THREADS, SMP, AND MICROKERNELS

Review Questions

4.2 Less state information is involved.

4.5 Address space, file resources, and execution privileges are examples.

4.6 1. Thread switching does not require kernel mode privileges, because all of the thread management data structures are within the user address space of a single process. Therefore, the process does not switch to kernel mode to do thread management. This saves the overhead of two mode switches (user to kernel; kernel back to user). 2. Scheduling can be application specific. One application may benefit most from a simple round-robin scheduling algorithm, while another might benefit from a priority-based scheduling algorithm. The scheduling algorithm can be tailored to the application without disturbing the underlying OS scheduler. 3. ULTs can run on any operating system. No changes are required to the underlying kernel to support ULTs. The threads library is a set of application-level utilities shared by all applications.

4.7 1. In a typical operating system, many system calls are blocking. Thus, when a ULT executes a system call, not only is that thread blocked, but also all of the threads within the process are blocked. 2. In a pure ULT strategy, a multithreaded application cannot take advantage of multiprocessing. A kernel assigns one process to only one processor at a time. Therefore, only a single thread within a process can execute at a time.

Problems

4.2 Because, with ULTs, the thread structure of a process is not visible to the operating system, which only schedules on the basis of processes.

CHAPTER 5 CONCURRENCY: MUTUAL EXCLUSION AND SYNCHRONIZATION

Review Questions

5.1 Communication among processes, sharing of and competing for resources, synchronization of the activities of multiple processes, and allocation of processor time to processes.

5.9 A binary semaphore may only take on the values 0 and 1. A general semaphore may take on any integer value.

Problems

5.2 ABCDE; ABDCE; ABDEC; ADBCE; ADBEC; ADEBC; DEABC; DAEBC; DABEC; DABCE

5.5 Consider the case in which turn equals 0 and P(1) sets blocked[1] to true and then finds blocked[0] set to false. P(0) will then set blocked[0] to true, find turn = 0, and enter its critical section. P(1) will then assign 1 to turn and will also enter its critical section.
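The algorithm that Problem 5.5 analyzes is not reproduced in this document; the sketch below reconstructs it from the wording of the answer (a shared blocked[] array and a turn variable), using POSIX threads. It is illustrative only and, as the answer explains, it does not guarantee mutual exclusion.

```c
/* Sketch of the flawed mutual-exclusion attempt analyzed in Problem 5.5.
 * Two threads share blocked[] and turn; the interleaving described in the
 * answer lets both threads enter the critical section at once. */
#include <pthread.h>
#include <stdio.h>

static volatile int blocked[2] = {0, 0};
static volatile int turn = 0;

static void enter_critical(int id) {
    blocked[id] = 1;
    while (turn != id) {
        /* If the other thread has not announced itself yet, grab the turn.
         * This is the hole: both threads can pass the blocked[] test. */
        while (blocked[1 - id]) { /* spin */ }
        turn = id;
    }
}

static void leave_critical(int id) {
    blocked[id] = 0;
}

static void *worker(void *arg) {
    int id = (int)(long)arg;
    enter_critical(id);
    printf("thread %d in its critical section\n", id); /* may overlap: unsafe */
    leave_critical(id);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
```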
CHAPTER 6 CONCURRENCY: DEADLOCK AND STARVATION

Review Questions

6.2 Mutual exclusion. Only one process may use a resource at a time. Hold and wait. A process may hold allocated resources while awaiting assignment of others. No preemption. No resource can be forcibly removed from a process holding it.

6.3 The above three conditions, plus: Circular wait. A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.

Problems

6.4 a. The "still needs" matrix is:
0 0 0 0
0 7 5 0
6 6 2 2
2 0 0 2
0 3 2 0
b. to d. Running the banker's algorithm, we see that the processes can finish in the order p1, p4, p5, p2, p3.
e. Change available to (2,0,0,0) and p3's row of "still needs" to (6,5,2,2). Now p1, p4, p5 can finish, but with available now (4,6,9,8) neither p2's nor p3's "still needs" can be satisfied. So it is not safe to grant p3's request.

6.5 1. W = (2 1 0 0)
2. Mark P3; W = (2 1 0 0) + (0 1 2 0) = (2 2 2 0)
3. Mark P2; W = (2 2 2 0) + (2 0 0 1) = (4 2 2 1)
4. Mark P1; no deadlock detected

CHAPTER 7 MEMORY MANAGEMENT

Review Questions

7.1 Relocation, protection, sharing, logical organization, physical organization.

7.7 A logical address is a reference to a memory location independent of the current assignment of data to memory; a translation must be made to a physical address before the memory access can be achieved. A relative address is a particular example of a logical address, in which the address is expressed as a location relative to some known point, usually the beginning of the program. A physical address, or absolute address, is an actual location in main memory.

Problems

7.6 a. The 40M block fits into the second hole, with a starting address of 80M. The 20M block fits into the first hole, with a starting address of 20M. The 10M block is placed at location 120M.
b. The three starting addresses are 230M, 20M, and 160M, for the 40M, 20M, and 10M blocks, respectively.
c. The three starting addresses are 80M, 120M, and 160M, for the 40M, 20M, and 10M blocks, respectively.
d. The three starting addresses are 80M, 230M, and 360M, for the 40M, 20M, and 10M blocks, respectively.
(The memory-map diagrams that accompany each part in the original solution are not reproduced here.)

7.12 a. The number of bytes in the logical address space is (2^16 pages) x (2^10 bytes/page) = 2^26 bytes. Therefore, 26 bits are required for the logical address.
b. A frame is the same size as a page: 2^10 bytes.
c. The number of frames in main memory is (2^32 bytes of main memory)/(2^10 bytes/frame) = 2^22 frames. So 22 bits are needed to specify the frame.
d. There is one entry for each page in the logical address space. Therefore there are 2^16 entries.
e. In addition to the valid/invalid bit, 22 bits are needed to specify the frame location in main memory, for a total of 23 bits.

CHAPTER 8 VIRTUAL MEMORY

Review Questions

8.1 Simple paging: all the pages of a process must be in main memory for the process to run, unless overlays are used. Virtual memory paging: not all pages of a process need be in main memory frames for the process to run; pages may be read in as needed.

8.2 A phenomenon in virtual memory schemes in which the processor spends most of its time swapping pieces rather than executing instructions.

Problems

8.1 a. Split the binary address into virtual page number and offset; use the VPN as an index into the page table; extract the page frame number; concatenate the offset to get the physical memory address.
b. (i) 1052 = 1024 + 28 maps to VPN 1 in PFN 7 (7 x 1024 + 28 = 7196).
(ii) 2221 = 2 x 1024 + 173 maps to VPN 2: page fault.
(iii) 5499 = 5 x 1024 + 379 maps to VPN 5 in PFN 0 (0 x 1024 + 379 = 379).

8.4 a. PFN 3, since it was loaded longest ago, at time 20.
b. PFN 1, since it was referenced longest ago, at time 160.
c. Clear R in PFN 3 (oldest loaded), clear R in PFN 2 (next oldest loaded); the victim is PFN 0, since its R bit is 0.
d. Replace the page in PFN 3, since VPN 3 (in PFN 3) is used furthest in the future.
e. There are 6 faults, indicated by asterisks: **4 0 0 0 *2 *4 2 *1 **3 2. (The accompanying list of the VPNs of the pages in memory in LRU order after each reference is not reproduced here.)
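As an illustration of the translation procedure described in Problem 8.1a (added here, not part of the original solution), the sketch below splits a virtual address into VPN and offset and indexes a small page table. The entries for VPNs 1, 2, and 5 reproduce the mappings used in part b; the remaining entries are made up.

```c
/* Minimal sketch of virtual-to-physical address translation: split the
 * address into VPN and offset, index the page table, and combine the frame
 * number with the offset. Only the entries for VPNs 1, 2, and 5 follow the
 * answer to Problem 8.1b; the rest are illustrative. */
#include <stdio.h>

#define PAGE_SIZE   1024u
#define NUM_PAGES   8u
#define NOT_PRESENT (-1)

static int page_table[NUM_PAGES] = { 2, 7, NOT_PRESENT, 5, 6, 0, NOT_PRESENT, 3 };

int main(void) {
    unsigned vaddrs[] = { 1052u, 2221u, 5499u };   /* addresses from the problem */
    for (int i = 0; i < 3; i++) {
        unsigned vpn = vaddrs[i] / PAGE_SIZE;      /* virtual page number */
        unsigned off = vaddrs[i] % PAGE_SIZE;      /* offset within page  */
        if (vpn >= NUM_PAGES || page_table[vpn] == NOT_PRESENT) {
            printf("vaddr %u -> page fault (VPN %u)\n", vaddrs[i], vpn);
        } else {
            unsigned paddr = (unsigned)page_table[vpn] * PAGE_SIZE + off;
            printf("vaddr %u -> VPN %u, PFN %d, paddr %u\n",
                   vaddrs[i], vpn, page_table[vpn], paddr);
        }
    }
    return 0;
}
```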
CHAPTER 9 UNIPROCESSOR SCHEDULING

Review Questions

9.1 Long-term scheduling: The decision to add to the pool of processes to be executed. Medium-term scheduling: The decision to add to the number of processes that are partially or fully in main memory. Short-term scheduling: The decision as to which available process will be executed by the processor.

9.3 Turnaround time is the total time that a request spends in the system (waiting time plus service time). Response time is the elapsed time between the submission of a request and the time the response begins to appear as output.

Problems

9.1 Each square represents one time unit; the letter in the square refers to the currently running process.

FCFS              A A A B B B B B C C D D D D D E E E E E
RR, q = 1         A B A B C A B C B D B D E D E D E D E E
RR, q = 4         A A A B B B B C C B D D D D E E E E D E
SPN               A A A C C B B B B B D D D D D E E E E E
SRT               A A A C C B B B B B D D D D D E E E E E
HRRN              A A A B B B B B C C D D D D D E E E E E
Feedback, q = 1   A B A C B C A B B D B D E D E D E D E E
Feedback, q = 2^i A B A A C B B C B B D D E D D E E D E E

Process            A      B      C      D      E      Mean
Arrival time Ta    0      1      3      9      12
Service time Ts    3      5      2      5      5

FCFS        Tf     3.00   8.00  10.00  15.00  20.00
            Tr     3.00   7.00   7.00   6.00   8.00   6.20
            Tr/Ts  1.00   1.40   3.50   1.20   1.60   1.74
RR q = 1    Tf     6.00  11.00   8.00  18.00  20.00
            Tr     6.00  10.00   5.00   9.00   8.00   7.60
            Tr/Ts  2.00   2.00   2.50   1.80   1.60   1.98
RR q = 4    Tf     3.00  10.00   9.00  19.00  20.00
            Tr     3.00   9.00   6.00  10.00   8.00   7.20
            Tr/Ts  1.00   1.80   3.00   2.00   1.60   1.88
SPN         Tf     3.00  10.00   5.00  15.00  20.00
            Tr     3.00   9.00   2.00   6.00   8.00   5.60
            Tr/Ts  1.00   1.80   1.00   1.20   1.60   1.32
SRT         Tf     3.00  10.00   5.00  15.00  20.00
            Tr     3.00   9.00   2.00   6.00   8.00   5.60
            Tr/Ts  1.00   1.80   1.00   1.20   1.60   1.32
HRRN        Tf     3.00   8.00  10.00  15.00  20.00
            Tr     3.00   7.00   7.00   6.00   8.00   6.20
            Tr/Ts  1.00   1.40   3.50   1.20   1.60   1.74
FB q = 1    Tf     7.00  11.00   6.00  18.00  20.00
            Tr     7.00  10.00   3.00   9.00   8.00   7.40
            Tr/Ts  2.33   2.00   1.50   1.80   1.60   1.85
FB q = 2^i  Tf     4.00  10.00   8.00  18.00  20.00
            Tr     4.00   9.00   5.00   9.00   8.00   7.00
            Tr/Ts  1.33   1.80   2.50   1.80   1.60   1.81

9.16 a. Round robin with a time quantum of 1 minute. (The minute-by-minute allocation table from the original solution is not reproduced here.) The turnaround time for each process: A = 45 min, B = 35 min, C = 13 min, D = 26 min, E = 42 min. The average turnaround time is (45 + 35 + 13 + 26 + 42) / 5 = 32.2 min.

b. Priority scheduling:
Priority   Job   Turnaround time
3          B     9
4          E     9 + 12 = 21
6          A     21 + 15 = 36
7          C     36 + 3 = 39
9          D     39 + 6 = 45
The average turnaround time is (9 + 21 + 36 + 39 + 45) / 5 = 30 min.

c. FCFS:
Job   Turnaround time
A     15
B     15 + 9 = 24
C     24 + 3 = 27
D     27 + 6 = 33
E     33 + 12 = 45
The average turnaround time is (15 + 24 + 27 + 33 + 45) / 5 = 28.8 min.

d. Shortest job first:
Running time   Job   Turnaround time
3              C     3
6              D     3 + 6 = 9
9              B     9 + 9 = 18
12             E     18 + 12 = 30
15             A     30 + 15 = 45
The average turnaround time is (3 + 9 + 18 + 30 + 45) / 5 = 21 min.
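The FCFS row of the table above can be reproduced mechanically. The sketch below (added here as an illustration, not part of the original solution) uses the arrival and service times from Problem 9.1 and prints Tf, Tr, and Tr/Ts for each process.

```c
/* Minimal sketch reproducing the FCFS row of the Problem 9.1 table:
 * Ta and Ts are taken from the problem; the code computes finish time Tf,
 * turnaround time Tr, and the ratio Tr/Ts. */
#include <stdio.h>

int main(void) {
    const char *name[] = { "A", "B", "C", "D", "E" };
    double Ta[] = { 0, 1, 3, 9, 12 };
    double Ts[] = { 3, 5, 2, 5, 5 };
    double t = 0.0, sumTr = 0.0, sumRatio = 0.0;

    for (int i = 0; i < 5; i++) {            /* FCFS: run in arrival order */
        if (t < Ta[i]) t = Ta[i];            /* processor idles until arrival */
        double Tf = t + Ts[i];               /* finish time */
        double Tr = Tf - Ta[i];              /* turnaround time */
        sumTr += Tr;
        sumRatio += Tr / Ts[i];
        printf("%s: Tf=%.0f Tr=%.2f Tr/Ts=%.2f\n", name[i], Tf, Tr, Tr / Ts[i]);
        t = Tf;
    }
    printf("mean Tr=%.2f  mean Tr/Ts=%.2f\n", sumTr / 5, sumRatio / 5);
    return 0;
}
```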
CHAPTER 10 MULTIPROCESSOR AND REAL-TIME SCHEDULING

Review Questions

10.1 Fine: Parallelism inherent in a single instruction stream. Medium: Parallel processing or multitasking within a single application. Coarse: Multiprocessing of concurrent processes in a multiprogramming environment. Very coarse: Distributed processing across network nodes to form a single computing environment. Independent: Multiple unrelated processes.

10.4 A hard real-time task is one that must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system. A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.

Problems

10.1 For fixed priority, we do the case in which the priority is A, B, C. Each square represents five time units; the letter in the square refers to the currently running process. The first row is fixed priority; the second row is earliest-deadline scheduling using completion deadlines.

Fixed priority:     A A B B A A C C A A B B A A C C A A
Earliest deadline:  A A B B A C C A C A A B B A A C C C A A

For fixed-priority scheduling, process C always misses its deadline.

10.4 (The timing diagram from the original solution, showing normal execution and execution in critical sections for tasks T1, T2, and T3, with semaphore s locked by T3, then unlocked, then locked by T1, is not reproduced here.) Once T3 enters its critical section, it is assigned a priority higher than T1. When T3 leaves its critical section, it is preempted by T1.

CHAPTER 11 I/O MANAGEMENT AND DISK SCHEDULING

Review Questions

11.1 Programmed I/O: The processor issues an I/O command, on behalf of a process, to an I/O module; that process then busy-waits for the operation to be completed before proceeding. Interrupt-driven I/O: The processor issues an I/O command on behalf of a process, continues to execute subsequent instructions, and is interrupted by the I/O module when the latter has completed its work. The subsequent instructions may be in the same process, if it is not necessary for that process to wait for the completion of the I/O. Otherwise, the process is suspended pending the interrupt and other work is performed. Direct memory access (DMA): A DMA module controls the exchange of data between main memory and an I/O module. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred.

11.5 Seek time, rotational delay, access time.

Problems

11.1 If the calculation time exactly equals the I/O time (which is the most favorable situation), both the processor and the peripheral device running simultaneously will take half as long as if they ran separately. Formally, let C be the calculation time for the entire program and let T be the total I/O time required. Then the best possible running time with buffering is max(C, T), while the running time without buffering is C + T; and of course (C + T)/2 <= max(C, T) <= (C + T). Source: [KNUT97].

11.3 The disk head is initially moving in the direction of decreasing track number (starting at track 100):

              FIFO                SSTF                SCAN                C-SCAN
        track  traversed    track  traversed    track  traversed    track  traversed
          27      73          110     10           64     36           64     36
         129     102          120     10           41     23           41     23
         110      19          129      9           27     14           27     14
         186      76          147     18           10     17           10     17
         147      39          186     39          110    100          186    176
          41     106           64    122          120     10          147     39
          10      31           41     23          129      9          129     18
          64      54           27     14          147     18          120      9
         120      56           10     17          186     39          110     10
      Average    61.8      Average   29.1      Average   29.6      Average     38

If the disk head is initially moving in the direction of increasing track number, only the SCAN and C-SCAN results change:

              SCAN                C-SCAN
        track  traversed    track  traversed
         110      10          110     10
         120      10          120     10
         129       9          129      9
         147      18          147     18
         186      39          186     39
          64     122           10    176
          41      23           27     17
          27      14           41     14
          10      17           64     23
      Average    29.1      Average   35.1
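The SSTF column of the first table can be reproduced with a few lines of code. The sketch below (an added illustration, not part of the original solution) starts the head at track 100 and repeatedly serves the closest pending request from the queue given in Problem 11.3.

```c
/* Minimal sketch of SSTF disk scheduling for the Problem 11.3 request queue
 * (head starting at track 100): at each step the pending request closest to
 * the current head position is served next. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int req[] = { 27, 129, 110, 186, 147, 41, 10, 64, 120 };
    int n = 9, head = 100, total = 0;
    int done[9] = { 0 };

    for (int served = 0; served < n; served++) {
        int best = -1, bestDist = 0;
        for (int i = 0; i < n; i++) {            /* find nearest pending track */
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (best < 0 || d < bestDist) { best = i; bestDist = d; }
        }
        done[best] = 1;
        total += bestDist;
        printf("next track %3d, tracks traversed %3d\n", req[best], bestDist);
        head = req[best];
    }
    printf("average seek length = %.1f\n", (double)total / n);
    return 0;
}
```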
CHAPTER 12 FILE MANAGEMENT

Review Questions

12.1 A field is the basic element of data, containing a single value. A record is a collection of related fields that can be treated as a unit by some application program.

12.5 Pile: Data are collected in the order in which they arrive. Each record consists of one burst of data. Sequential file: A fixed format is used for records. All records are of the same length, consisting of the same number of fixed-length fields in a particular order. Because the length and position of each field are known, only the values of fields need to be stored; the field name and length for each field are attributes of the file structure. Indexed sequential file: The indexed sequential file maintains the key characteristic of the sequential file: records are organized in sequence based on a key field. Two features are added: an index to the file to support random access, and an overflow file. The index provides a lookup capability to reach quickly the vicinity of a desired record. The overflow file is similar to the log file used with a sequential file, but is integrated so that a record in the overflow file is located by following a pointer from its predecessor record. Indexed file: Records are accessed only through their indexes. The result is that there is now no restriction on the placement of records, as long as a pointer in at least one index refers to that record. Furthermore, variable-length records can be employed. Direct, or hashed, file: The direct file makes use of hashing on the key value.

Problems

12.1 Fixed blocking: F = floor(B/R), the largest integer less than or equal to B/R.
When records of variable length are packed into blocks, data for marking the record boundaries within the block have to be added to separate the records. When spanned records bridge block boundaries, some reference to the successor block is also needed. One possibility is a length indicator preceding each record. Another possibility is a special separator marker between records. In any case, we can assume that each record requires a marker, and we assume that the size of a marker is about equal to the size of a block pointer [WEID87]. For spanned blocking, a block pointer of size P to its successor block may be included in each block, so that the pieces of a spanned record can easily be retrieved. Then we have:
Variable-length spanned blocking: F = (B - P) / (R + P)
With unspanned variable-length blocking, an average of R/2 will be wasted because of the fitting problem, but no successor pointer is required:
Variable-length unspanned blocking: F = (B - R/2) / (R + P)

12.3 a. Indexed. b. Indexed sequential. c. Hashed or indexed.
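To show how the three blocking-factor formulas from Problem 12.1 behave, the sketch below evaluates them for one set of assumed values of B, R, and P; the numbers are illustrative and are not taken from the problem.

```c
/* Minimal sketch evaluating the three blocking factors derived in
 * Problem 12.1. The block size B, record size R, and pointer size P
 * are assumed values chosen only for illustration. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double B = 2048.0;   /* block size in bytes (assumed)        */
    double R = 300.0;    /* average record size in bytes (assumed) */
    double P = 4.0;      /* block/record pointer size (assumed)  */

    double fixed     = floor(B / R);             /* fixed blocking: F = floor(B/R) */
    double spanned   = (B - P) / (R + P);        /* variable-length spanned        */
    double unspanned = (B - R / 2.0) / (R + P);  /* variable-length unspanned      */

    printf("fixed blocking:            F = %.0f records/block\n", fixed);
    printf("variable-length spanned:   F = %.2f records/block\n", spanned);
    printf("variable-length unspanned: F = %.2f records/block\n", unspanned);
    return 0;
}
```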
Chapter 5 Concurrency: Mutual Exclusion and Synchronization

Review Questions

5.1 List four design issues for which the concept of concurrency is relevant.
Answer: communication among processes, competition for shared resources, synchronization of the activities of multiple processes, and allocation of processor time to processes.

5.2 List three contexts in which concurrency arises.
Answer: multiple applications, structured applications, and operating-system structure.

5.3 What is the basic requirement for the execution of concurrent processes?
Answer: the ability to enforce mutual exclusion.

5.4 List the three degrees of awareness between processes and briefly define each.
Answer: Processes unaware of each other: these are independent processes that are not intended to work together.
Processes indirectly aware of each other: these processes do not necessarily know each other by process ID, but they share access to some object, such as an I/O buffer.
Processes directly aware of each other: these processes can communicate with each other by process ID and are designed to work jointly on some activity.

5.5 What is the distinction between competing processes and cooperating processes?
Answer: Competing processes need access to the same resource at the same time, such as a disk, a file, or a printer. Cooperating processes either share access to a common object, such as a memory area, or communicate with each other, cooperating on some application or activity.

5.6 List the three control problems associated with competing processes and briefly define each.
Answer: Mutual exclusion: competing processes can access a critical resource only one at a time; the concurrency mechanism must enforce the rule that only one process at a time is allowed into its critical section.
Deadlock: if competing processes each need exclusive access to more than one resource, deadlock can occur when each process holds one resource while waiting for another.
Starvation: one of a group of processes may be denied access to a needed resource indefinitely, because the other members of the group monopolize that resource.

5.7 List the requirements for mutual exclusion. (A short code sketch follows the list.)
Answer:
1. Mutual exclusion must be enforced: among all processes that have critical sections for the same resource or shared object, only one process at a time is allowed into its critical section.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processors.
6. A process remains inside its critical section for a finite time only.
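As a minimal illustration of requirement 1 (added here, not from the textbook answer), the POSIX-threads sketch below uses a mutex so that only one thread at a time executes the critical section that updates a shared counter.

```c
/* Minimal sketch (POSIX threads): a mutex enforces mutual exclusion on the
 * critical section that increments a shared counter. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                         /* shared object */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);               /* enter critical section */
        counter++;                               /* protected update */
        pthread_mutex_unlock(&lock);             /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld (expected 400000)\n", counter);
    return 0;
}
```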
5.8 What operations can be performed on a semaphore?
Answer: a semaphore may be initialized to a nonnegative value; the semWait operation decrements the semaphore value and blocks the issuing process if the value becomes negative; the semSignal operation increments the semaphore value and unblocks a process waiting on the semaphore, if there is one.
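The three operations can be seen in miniature with POSIX unnamed semaphores; the sketch below is an added illustration (the thread roles are arbitrary) that initializes a semaphore to 1 and uses sem_wait and sem_post in the roles of semWait and semSignal.

```c
/* Minimal sketch of the three semaphore operations using POSIX semaphores:
 * initialize, wait (decrement), and signal (increment). Here the semaphore
 * is used as a binary semaphore guarding a shared buffer. */
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t mutex;              /* binary semaphore: 1 = buffer free */
static int shared_buffer = 0;

static void *writer(void *arg) {
    (void)arg;
    sem_wait(&mutex);            /* semWait: decrement, block if already 0 */
    shared_buffer = 42;          /* critical section */
    sem_post(&mutex);            /* semSignal: increment, wake a waiter */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);      /* initialize to 1 (binary semaphore) */

    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);

    sem_wait(&mutex);
    printf("buffer = %d\n", shared_buffer);
    sem_post(&mutex);

    sem_destroy(&mutex);
    return 0;
}
```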