Modern Operating Systems (3rd Edition): Selected Exercise Answers, Chapter 6
Modern Operating Systems, Third Edition, Chinese Answers [Part One: Operating Systems Exercise Answers]

Questions for Review and Practice
1. What is an operating system? What are its main functions?
2. What is multiprogramming? What are the main characteristics of multiprogramming?
3. What kind of operating system is a batch system? What are its characteristics?
4. What is a time-sharing system? What is a real-time system? Compare time-sharing systems and real-time systems in terms of interactivity, timeliness, independence, multiplexing, and reliability.
5. What are the two types of real-time systems?
6. What are the main characteristics of an operating system?
7. How many kinds of interfaces does an operating system offer to the user? In what situations is each of them used?
8. Is the statement "an operating system is the software that controls the hardware" accurate? Why or why not?
9. Suppose three programs a, b, and c are in memory and execute in the order a, b, c. Their "compute" and "I/O" times are as shown in Table 1-2, and all three programs use the same I/O device.
(1) Draw the timing diagram for uniprogrammed execution of the three programs and compute how long it takes to complete them.
(2) Draw the timing diagram for multiprogrammed execution of the three programs and compute how long it takes to complete them.
10. Connect words from the two columns below to form the five most appropriate pairs.
Left column: DOS, OS/2, UNIX, Linux. Right column: network operating system, free software, multitasking, single-tasking.
11. Choose a modern operating system, find and read the relevant technical material, and write an essay on how that operating system performs memory management, storage management, device management, and file management.
Answers
1. An operating system is a collection of programs that controls and manages the computer's software and hardware resources and organizes the computer's workflow rationally, so as to make the computer convenient to use.
2. Multiprogramming places several independent programs in memory at the same time and lets them share the system's resources. Its characteristics are:
1) Multiple programs: several mutually independent programs reside in memory at the same time.
2) Macroscopically parallel: all the programs that have entered the system are in the course of running.
3) Microscopically serial: in a uniprocessor environment, the programs in memory take turns occupying the CPU and execute alternately.
3. A batch operating system is one of the basic types of operating system. In such a system, users' jobs are fed into the computer in batches and then, under the control of the operating system, the jobs execute automatically. Its characteristics are high resource utilization and large system throughput.
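Question 9 above is left without a worked answer in this compilation, and Table 1-2 is not reproduced. As a rough illustration only, the Python sketch below uses placeholder compute/I/O burst times to show the shape of the comparison: run one at a time, the jobs take the sum of every burst, while overlapped (multiprogrammed) execution can finish no sooner than the busier of the CPU and the single shared I/O device.

    # Placeholder (compute_ms, io_ms) bursts; Table 1-2 is not reproduced here.
    jobs = {
        "a": [(30, 40), (10, 0)],
        "b": [(60, 30), (10, 0)],
        "c": [(20, 40), (20, 0)],
    }

    uniprogrammed = sum(c + io for bursts in jobs.values() for c, io in bursts)
    total_cpu = sum(c for bursts in jobs.values() for c, _ in bursts)
    total_io = sum(io for bursts in jobs.values() for _, io in bursts)

    print(uniprogrammed)             # sequential completion time (sum of all bursts)
    print(max(total_cpu, total_io))  # lower bound when CPU and I/O fully overlap

The exact multiprogrammed finishing time still has to be read off the timing diagram the question asks for, but it always lies between these two numbers.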
MODERN OPERATING SYSTEMS, THIRD EDITION: PROBLEM SOLUTIONS
Andrew S. Tanenbaum, Vrije Universiteit, Amsterdam, The Netherlands
Prentice Hall, Upper Saddle River, NJ 07458. Copyright Pearson Education, Inc. 2008

SOLUTIONS TO CHAPTER 1 PROBLEMS

1. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.

2. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.

3. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100% busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).

4. It is still alive. For example, Intel makes Pentium I, II, III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.

5. A 25 x 80 character monochrome text screen requires a 2000-byte buffer. The 1024 x 768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.

6. Consider fairness and real time. Fairness requires that each process be allocated its resources in a fair way, with no process getting more than its fair share. On the other hand, real time requires that resources be allocated based on the times when different processes must complete their execution. A real-time process may get a disproportionate share of the resources.

7. Choices (a), (c), and (d) should be restricted to kernel mode.

8. It may take 20, 25, or 30 msec to complete the execution of these programs depending on how the operating system schedules them. If P0 and P1 are scheduled on the same CPU and P2 is scheduled on the other CPU, it will take 20 msec. If P0 and P2 are scheduled on the same CPU and P1 is scheduled on the other CPU, it will take 25 msec. If P1 and P2 are scheduled on the same CPU and P0 is scheduled on the other CPU, it will take 30 msec. If all three are on the same CPU, it will take 35 msec.

9. Every nanosecond one instruction emerges from the pipeline. This means the machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per stage would also execute 1 billion instructions per second. All that matters is how often a finished instruction pops out the end of the pipeline.

10. Average access time
= 0.95 x 2 nsec (word is in cache)
+ 0.05 x 0.99 x 10 nsec (word is in RAM, but not in cache)
+ 0.05 x 0.01 x 10,000,000 nsec (word on disk only)
= 5002.395 nsec
= 5.002395 usec

11. The manuscript contains 80 x 50 x 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.

12. Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.

13. A trap instruction switches the execution mode of a CPU from the user mode to the kernel mode. This instruction allows a user program to invoke functions in the operating system kernel.

14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.

15. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single-process system because the single process is never suspended.

16. Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being repaired.

17. A system call allows a user process to access and execute operating system functions inside the kernel. User programs use system calls to invoke operating system services.

18. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it.

19. If the call fails, for example because fd is incorrect, it can return -1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.

20. It contains the bytes: 1, 5, 9, 2.

21. Time to retrieve the file = 1 x 50 ms (time to move the arm over track 50) + 5 ms (time for the first sector to rotate under the head) + 10/100 x 1000 ms (read 10 MB) = 155 ms.

22. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files.

23. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.

24. Yes it can, especially if the kernel is a message-passing system.

25. As far as program logic is concerned it does not matter whether a call to a library procedure results in a system call. But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.

26. Several UNIX calls have no counterpart in the Win32 API:
Link: a Win32 program cannot refer to a file by an alternative name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.
Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.
Chmod: Windows uses access control lists.
Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.

27. Every system architecture has its own set of instructions that it can execute. Thus a Pentium cannot execute SPARC programs and a SPARC cannot execute Pentium programs. Also, different architectures differ in bus architecture used (such as VME, ISA, PCI, MCA, SBus, ...) as well as the word size of the CPU (usually 32 or 64 bit). Because of these differences in hardware, it is not feasible to build an operating system that is completely portable. A highly portable operating system will consist of two high-level layers: a machine-dependent layer and a machine-independent layer. The machine-dependent layer addresses the specifics of the hardware, and must be implemented separately for every architecture. This layer provides a uniform interface on which the machine-independent layer is built. The machine-independent layer has to be implemented only once. To be highly portable, the size of the machine-dependent layer must be kept as small as possible.

28. Separation of policy and mechanism allows OS designers to implement a small number of basic primitives in the kernel. These primitives are simplified, because they are not dependent on any specific policy. They can then be used to implement more complex mechanisms and policies at the user level.

29. The conversions are straightforward:
(a) A microyear is 10^-6 x 365 x 24 x 3600 = 31.536 sec.
(b) 1000 meters or 1 km.
(c) There are 2^40 bytes, which is 1,099,511,627,776 bytes.
(d) It is 6 x 10^24 kg.

SOLUTIONS TO CHAPTER 2 PROBLEMS

1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition, from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.

2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.

3. Generally, high-level languages do not allow the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process's stack area. Also, interrupt service routines must execute as rapidly as possible.

4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program's memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.

5. If each job has 50% I/O wait, then it will take 20 minutes to complete in the absence of competition. If run sequentially, the second one will finish 40 minutes after the first one starts. With two jobs, the approximate CPU utilization is 1 - 0.5^2. Thus each one gets 0.375 CPU minute per minute of real time. To accumulate 10 minutes of CPU time, a job must run for 10/0.375 minutes, or about 26.67 minutes. Thus running sequentially the jobs finish after 40 minutes, but running in parallel they finish after 26.67 minutes.

6. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.

7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.

8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.

9. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, the entire database takes 64 megabytes, and can easily be kept in the server's memory to provide fast lookup.

10. When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Multiprogramming threads is no different than multiprogramming processes, so each thread needs its own register save area.

11. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.
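As a quick cross-check of the arithmetic in Problem 5 above, the minimal Python sketch below evaluates the 1 - p^n utilization approximation and reproduces the 40-minute sequential and roughly 26.67-minute parallel figures.

    # Probabilistic CPU-utilization model: with n jobs each waiting for I/O a
    # fraction p of the time, utilization is approximately 1 - p**n.
    def cpu_utilization(p_io_wait: float, n_jobs: int) -> float:
        return 1.0 - p_io_wait ** n_jobs

    p = 0.5                                # each job waits for I/O 50% of the time
    util_two_jobs = cpu_utilization(p, 2)  # 0.75
    per_job_rate = util_two_jobs / 2       # 0.375 CPU-minute per real minute
    cpu_needed = 10.0                      # minutes of pure CPU time per job
    print(cpu_needed / per_job_rate)       # about 26.67 minutes when run in parallel
    print(2 * cpu_needed / (1 - p))        # 40 minutes when run back to back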
12. User-level threads cannot be preempted by the clock unless the whole process's quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick a different thread from the same process to run next if it so desires.

13. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3 x 15 + 1/3 x 90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.

14. The biggest advantage is the efficiency. No traps to the kernel are needed to switch threads. The biggest disadvantage is that if one thread blocks, the entire process blocks.

15. Yes, it can be done. After each call to pthread_create, the main program could do a pthread_join to wait until the thread just created has exited before creating the next thread.

16. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create_global, which is all right, but what type should the second parameter of set_global be, and what type should the value of read_global be?

17. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag, then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.

18. Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.

19. The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread to run. There is no preemption. With kernel-level threads this problem can arise.

20. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, L never gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.

21. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.

22. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.

23. Yes, it still works, but it still is busy waiting, of course.

24. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.

25. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of them is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.

26. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of ups minus the number of downs, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If it is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is downed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.

27. If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.

28. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.

29. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the run-time system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened on a signal primitive.

30. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes.

31. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.

32. It will take nT sec.

33. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance, a program that reads all its input files into buffers at the start will probably not be I/O bound, but a program that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program, you can compare this with the total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.

34. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information the OS could, for instance, determine which process could supply output to a process blocking on a call for input.

35. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q >= T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is then T/(T + ST/Q), which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50%. Finally, for (e), as Q approaches 0 the efficiency goes to 0.

36. Shortest job first is the way to minimize average response time.
0 < X <= 3: X, 3, 5, 6, 9.
3 < X <= 5: 3, X, 5, 6, 9.
5 < X <= 6: 3, 5, X, 6, 9.
6 < X <= 9: 3, 5, 6, X, 9.
X > 9: 3, 5, 6, 9, X.

37. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.

38. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.

39. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.

40. The sequence of predictions is 40, 30, 35, and now 25.

41. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To be schedulable, this must be less than 1. Thus x must be less than 12.5 msec.
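The turnaround averages in Problem 37 above can be re-derived mechanically. The sketch below assumes run times of 10, 6, 2, 4, and 8 minutes for jobs A through E, which are the values implied by the quoted first-come first-served finishing times (10, 16, 18, 22, 30), and recomputes the in-order and shortest-job-first averages.

    from itertools import accumulate

    run_times = {"A": 10, "B": 6, "C": 2, "D": 4, "E": 8}   # inferred, see above

    def average_finish(order):
        # All jobs arrive at time 0, so finishing times are running sums.
        finishes = list(accumulate(run_times[j] for j in order))
        return sum(finishes) / len(finishes)

    print(average_finish("ABCDE"))                               # 19.2 minutes
    print(average_finish(sorted(run_times, key=run_times.get)))  # SJF: 14.0 minutes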
42. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than, say, round robin without regard to whether a process was in memory or not.

43. Each voice call runs 200 times/second and uses up 1 msec per burst, so each voice call needs 200 msec per second, or 400 msec for the two of them. The video runs 25 times a second and uses up 20 msec each time, for a total of 500 msec per second. Together they consume 900 msec per second, so there is time left over and the system is schedulable.

44. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.

45. The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similarly, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.

46. If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.

47. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.

48. A possible shell script might be

    if [ ! -f numbers ]; then echo 0 > numbers; fi
    count=0
    while (test $count != 200)
    do
        count=`expr $count + 1`
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
    done

Run the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers. It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:

    if ln numbers numbers.lock
    then
        n=`tail -1 numbers`
        expr $n + 1 >> numbers
        rm numbers.lock
    fi

This version will just skip a turn when the file is inaccessible; variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.

SOLUTIONS TO CHAPTER 3 PROBLEMS

1. It is an accident. The base register is 16,384 because the program happened to be loaded at address 16,384. It could have been loaded anywhere. The limit register is 16,384 because the program contains 16,384 bytes. It could have been any length. That the load address happens to exactly match the program length is pure coincidence.

2. Almost the entire memory has to be copied, which requires each word to be read and then rewritten at a different location. Reading 4 bytes takes 10 nsec, so reading 1 byte takes 2.5 nsec and writing it takes another 2.5 nsec, for a total of 5 nsec per byte compacted. This is a rate of 200,000,000 bytes/sec. To copy 128 MB (2^27 bytes, which is about 1.34 x 10^8 bytes), the computer needs 2^27/200,000,000 sec, which is about 671 msec. This number is slightly pessimistic because if the initial hole at the bottom of memory is k bytes, those k bytes do not need to be copied. However, if there are many holes and many data segments, the holes will be small, so k will be small and the error in the calculation will also be small.

3. The bitmap needs 1 bit per allocation unit. With 2^27/n allocation units, this is 2^24/n bytes. The linked list has 2^27/2^16 or 2^11 nodes, each of 8 bytes, for a total of 2^14 bytes. For small n, the linked list is better. For large n, the bitmap is better. The crossover point can be calculated by equating these two formulas and solving for n. The result is 1 KB. For n smaller than 1 KB, a linked list is better. For n larger than 1 KB, a bitmap is better. Of course, the assumption of segments and holes alternating every 64 KB is very unrealistic. Also, we need n <= 64 KB if the segments and holes are 64 KB.

4. First fit takes 20 KB, 10 KB, 18 KB. Best fit takes 12 KB, 10 KB, and 9 KB. Worst fit takes 20 KB, 18 KB, and 15 KB. Next fit takes 20 KB, 18 KB, and 9 KB.

5. For a 4-KB page size the (page, offset) pairs are (4, 3616), (8, 0), and (14, 2656). For an 8-KB page size they are (2, 3616), (4, 0), and (7, 2656).

6. They built an MMU and inserted it between the 8086 and the bus. Thus all 8086 physical addresses went into the MMU as virtual addresses. The MMU then mapped them onto physical addresses, which went to the bus.

7. (a) M has to be at least 4,096 to ensure a TLB miss for every access to an element of X. Since N only affects how many times X is accessed, any value of N will do.
(b) M should still be at least 4,096 to ensure a TLB miss for every access to an element of X. But now N should be greater than 64K to thrash the TLB, that is, X should exceed 256 KB.

8. The total virtual address space for all the processes combined is nv, so this much storage is needed for pages. However, an amount r can be in RAM, so the amount of disk storage required is only nv - r. This amount is far more than is ever needed in practice because rarely will there be n processes actually running and even more rarely will all of them need the maximum allowed virtual memory.

9. The page table contains 2^32/2^13 entries, which is 524,288. Loading the page table takes 52 msec. If a process gets 100 msec, this consists of 52 msec for loading the page table and 48 msec for running. Thus 52% of the time is spent loading page tables.

10. (a) We need one entry for each page, or 2^24 = 16 x 1024 x 1024 entries, since there are 36 = 48 - 12 bits in the page number field.
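The (page, offset) pairs quoted in Problem 5 of Chapter 3 above can be reproduced with a one-line split of the virtual address. The addresses 20000, 32768, and 60000 used below are the ones implied by those pairs; they are not stated explicitly in the text.

    def split_address(vaddr: int, page_size: int) -> tuple[int, int]:
        return divmod(vaddr, page_size)   # (page number, offset within page)

    for vaddr in (20000, 32768, 60000):
        # 4-KB pages: (4, 3616), (8, 0), (14, 2656); 8-KB pages: (2, 3616), (4, 0), (7, 2656)
        print(vaddr, split_address(vaddr, 4096), split_address(vaddr, 8192))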
20. By the given conditions, each process needs 2 tape drives, and as long as at least one tape drive remains free the system cannot deadlock; therefore at most 5 processes can be admitted without risking deadlock, that is, n <= 5.
21. Comparing one row of the matrix against the vector of available resources takes m operations. This step must be repeated up to n times to find a process that can finish and can be marked as completed, so marking one process as completed takes on the order of mn steps. Repeating the whole procedure for all processes therefore takes on the order of mn^2 steps.
a = 1, b = i + 1 (i = 1, 2, 3, 4, 5, ...)
22. The need matrix is as follows:
0 1 0 0 2
0 2 1 0 0
1 0 3 0 0
0 0 1 1 1
If x is 0, there is a deadlock immediately. If x is 1, process D can run to completion; when it finishes, the available vector is 1 1 2 2 1, but the remaining processes are then deadlocked. If x is 2, after D runs the available vector is 1 1 3 2 1 and C can run; after it finishes and returns its resources, the available vector is 2 2 3 3 1, which lets B run to completion, and then the last process can run to completion. Therefore the smallest value of x that avoids deadlock is 2.
The total number of orderings is 36, and only when processes A and B request the resources in the same order is deadlock impossible, which happens in 6 of them, so the probability of avoiding deadlock is 6/36 = 1/6.
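The reasoning in answer 22 above is the standard deadlock-detection scan: repeatedly find a process whose remaining need fits in the available vector, let it finish, and return its allocation. A minimal sketch follows; the need matrix is the one given above, but the allocation matrix and available vector are not reproduced in this document, so the values below are invented purely to exercise the algorithm.

    def can_all_finish(need, alloc, available):
        avail = list(available)
        done = [False] * len(need)
        progress = True
        while progress:
            progress = False
            for i, row in enumerate(need):
                if not done[i] and all(n <= a for n, a in zip(row, avail)):
                    # Process i can finish; it returns everything it holds.
                    avail = [a + h for a, h in zip(avail, alloc[i])]
                    done[i] = True
                    progress = True
        return all(done)     # False means the processes still undone are deadlocked

    need = [[0, 1, 0, 0, 2],
            [0, 2, 1, 0, 0],
            [1, 0, 3, 0, 0],
            [0, 0, 1, 1, 1]]
    alloc = [[1, 0, 2, 1, 1],   # hypothetical allocations, illustration only
             [2, 0, 1, 1, 0],
             [1, 1, 0, 1, 0],
             [1, 1, 1, 1, 0]]
    print(can_all_finish(need, alloc, available=[0, 0, 1, 1, 2]))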
26. To avoid circular wait, number the resources (the accounts) by account number. After reading an input line, a process first locks the lower-numbered account, and only after that lock has been acquired (which may require waiting) does it lock the other one. Since no process ever waits for an account numbered lower than the ones it already holds, there is no circular wait, and hence no deadlock.
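A minimal sketch of the lock-ordering rule in answer 26: every transfer locks the lower-numbered account first, so no cycle of waiting processes can form. The account numbers, balances, and transfer helper are illustrative, not taken from the text.

    import threading

    locks = {1001: threading.Lock(), 1002: threading.Lock()}   # one lock per account
    balances = {1001: 500, 1002: 300}

    def transfer(src, dst, amount):
        assert src != dst
        first, second = sorted((src, dst))   # always lock the lower number first
        with locks[first]:
            with locks[second]:
                balances[src] -= amount
                balances[dst] += amount

    transfer(1001, 1002, 50)
    print(balances)                          # {1001: 450, 1002: 350}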
Selected Exercise Reference Answers
Reference answers are given below for the harder and more important exercises in the book; the remaining exercises can be answered from the corresponding sections of the book.

Chapter 1
3. The operating system is the first layer of software above the bare machine; it runs only in kernel mode, is protected by the hardware, and is especially closely tied to the hardware. The operating system is the control and management center of the whole computer system, and all other software is built on top of it; the operating system both has authority over that software and provides the environment it needs in order to run.
4. Off-line I/O means that input/output is not under the direct control of the host: a satellite machine is dedicated to the I/O work while the host concentrates on fast computation, so the two can operate in parallel. On-line I/O means that job input, loading into memory, and output of results all take place under the direct control of the CPU.
8. Hardware: the physical devices of the computer itself; it is the physical basis of the computer system, for example the CPU, memory, and peripheral devices.
Software: in contrast to hardware, the general term for the computer programs, procedures, rules, and associated documentation concerned with the operation of a data processing system. Put simply, software is the programs the computer executes.
Multiprogramming: with this design technique, several programs reside in memory at the same time and execute alternately under the control of a supervisory program; these jobs share the CPU and the other resources of the system.
Concurrency: two or more activities take place within the same given time interval; it is a macroscopic notion.
Throughput: the total amount of work the computer can complete in a given period of time.
Time sharing: the sharing of time; in a time-sharing system it mainly refers to several concurrent programs sharing the CPU's time.
Real time: means "timely" or "immediate".
System call: one of a set of sub-functions provided by the operating system that a user program can invoke in the form of a function call; each such sub-function is called a system call command. System calls are the operating system's external interface and the only way for user-level programs to obtain operating system services.
10. Typically, readers will be familiar with the following operating systems: Windows 2000, Windows XP, UNIX, or Linux. While working at a terminal, the services the operating system provides to the user include management of command and data input/output, memory allocation, management of the user's files, CPU allocation, device management, and so on.
12. When operating system code is executing, the processor is in kernel mode. It has higher privilege and can execute all instructions, including the privileged instructions that ordinary user programs may not use, and can therefore access all registers and memory, start I/O operations, and so on.
1. These systems load the program directly into memory and begin executing it at word 0, which is the magic number. To avoid executing the header as code, the magic number is a BRANCH instruction whose target address is just past the header. In this way the binary file can be read directly into the new process's address space and run starting at address 0.
5. The rename call does not change a file's creation time or its time of last modification, whereas creating a new file sets both its creation time and its time of last modification to the current system time. In addition, if the disk is full, the copy may fail.
10. Since the wasted space lies between allocation units (files) rather than inside them, this is external fragmentation. It is similar to the external fragmentation that arises in a swapping system or a pure segmentation system.
11. The delay before each transfer is 9 ms and the transfer rate is 2^23 bytes/s; with a file size of 2^13 bytes, reading a file into memory or writing it back to disk each takes 9 ms + (2^13 bytes)/(2^23 bytes/s) = 9.977 ms, so copying one file takes 9.977 x 2 = 19.954 ms in total. Compacting an 8-GB disk, that is, 2^20 such files at 19.954 ms each, takes about 20,923 seconds. Compacting the disk after every file deletion is therefore not a good idea.
12. Because every file deleted from the system leaves fragments (holes) on the disk, and once the fragments accumulate the disk can no longer store new files and must be cleaned up externally, compacting the disk does free more usable storage space; but compacting the disk after every single file deletion is not a good idea.
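The figures in answer 11 can be recomputed directly from the stated parameters (9 ms of latency per transfer, a 2^23 bytes/s transfer rate, 2^13-byte files, and an 8-GB disk), as in this small sketch:

    latency_s = 0.009                    # 9 ms before each transfer
    rate = 2 ** 23                       # bytes per second
    file_size = 2 ** 13                  # bytes per file

    move_one_file = 2 * (latency_s + file_size / rate)   # read it, then write it back
    n_files = 2 ** 33 // file_size                       # an 8-GB disk of such files
    print(move_one_file * 1000)          # about 19.95 ms per file
    print(move_one_file * n_files)       # roughly 20,900 s, matching the figure above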
15. Since 1024 KB = 2^20 bytes, the number of disk addresses such a block can hold is 2^20/4 = 2^18, so an indirect block can store 2^18 disk addresses. Together with the 10 direct disk addresses, the largest file has 262,154 blocks; since each block is 1 MB, the largest file is 262,154 MB.
19. Each disk address requires D bits and there are F free blocks, so the free list needs DF bits, while the bitmap method needs B bits. When DF < B the free list takes less space than the bitmap. With D = 16 this gives F/B < 1/D = 6.25%, that is, the free list uses less space than the bitmap when less than 6.25% of the disk is free.
20.
a) 1111 1111 1111 0000
b) 1000 0001 1111 0000
c) 1111 1111 1111 1100
d) 1111 1110 0000 1100
27. Average access time T = 1 x h + 40 x (1 - h) = (-39h + 40) ms.
28. At 1500 rpm (1500 revolutions per minute), one revolution takes 60 s / 1500 = 0.004 s = 4 ms, so the average rotational delay is 2 ms. The time T needed to read a block of k bytes is the sum of the average seek time, the average rotational delay, and the transfer time.
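Answer 28 only sets up the timing model T = average seek time + average rotational delay + transfer time. The sketch below evaluates it; the 1500-rpm figure comes from the answer, while the seek time, block size, and track size are placeholders, since the original problem statement is not reproduced here.

    def block_read_time_ms(seek_ms, rpm, block_bytes, track_bytes):
        rotation_ms = 60_000.0 / rpm          # 4 ms per revolution at 1500 rpm
        rotational_delay = rotation_ms / 2    # 2 ms average rotational delay
        transfer = rotation_ms * block_bytes / track_bytes   # fraction of one track
        return seek_ms + rotational_delay + transfer

    # Placeholder parameters: 10-ms seek, 1-KB block, 16-KB track.
    print(block_read_time_ms(seek_ms=10, rpm=1500, block_bytes=1024, track_bytes=16384))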
CH6 Application Problems: Reference Answers
1. A tape volume holds a number of files. Assume the head is currently positioned just before the header label of file j, and file i is now to be read by name. Give the steps for reading file i.
Answer: The files on the tape volume are separated by tape marks, and three tape marks are used around each file's header label. Normally the head rests just in front of a header label, so the desired file can be located simply by counting tape marks.
1) If i >= j, the tape must be moved forward:
Step 1: build a channel program to move the tape forward, skipping 3 x (i - j) tape marks.
Step 2: build a channel program to read the header label of file i.
Step 3: using the information in file i's header label, read the file's contents.
2) If i < j, the tape must be moved backward:
Step 1: build a channel program to move the tape backward, skipping 3 x (j - i) + 1 tape marks.
Step 2: build a channel program to read the header label of file i.
Step 3: using the information in file i's header label, read the file's contents.
2. Let B = physical block length, R = logical record length, and F = blocking factor. For fixed-length records (an integral number of logical records per block), give the formula for F.
Answer: F = floor(B/R), the integer part of B/R.
3. The disk file area of an operating system consists of 500 blocks and is managed with a bitmap using 32-bit words. (1) How many words does the bitmap need? (2) Which block number corresponds to bit j of word i? (3) Give the procedures for allocating and releasing a block.
Answer: (1) The bitmap occupies 500/32 = 16 words (rounded up).
(2) Bit j of word i corresponds to block number N = 32 x i + j.
(3) To allocate, scan the bitmap from top to bottom and left to right, skipping bits that are 1; when the first 0 bit is encountered, compute the corresponding block number from its word index i and bit index j, set the bit, and hand the block out. To release, given the block number, divide it by 32 to obtain word i and bit j and clear the corresponding bit of the bitmap to 0.
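A minimal sketch of the bitmap bookkeeping described in problem 3: 16 words of 32 bits cover the 500 blocks, bit j of word i stands for block 32*i + j, allocation scans for the first 0 bit, and release clears the corresponding bit. Numbering bits from the least-significant end of each word is a convention chosen here, not something fixed by the answer.

    WORD_BITS = 32
    bitmap = [0] * 16              # 0 bit = free, 1 bit = allocated

    def allocate():
        for i, word in enumerate(bitmap):
            for j in range(WORD_BITS):
                if not (word >> j) & 1:
                    block = WORD_BITS * i + j
                    if block >= 500:
                        break               # past the last real block
                    bitmap[i] |= 1 << j
                    return block
        return None                         # no free block

    def free(block):
        i, j = divmod(block, WORD_BITS)
        bitmap[i] &= ~(1 << j)

    print(allocate())   # 0
    print(allocate())   # 1
    free(0)
    print(allocate())   # 0 again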
4. Two users share one file system. User A uses files A, B, C, D, and E; user B uses files A, D, E, and F. It is known that user A's file A and user B's file A are in fact different files, while the two users' files D and E are the same files. Design a file system organization that lets the two users share the file system without confusion.
Answer: A two-level directory or a tree-structured directory solves the problem; for example, each user can be given a separate user file directory under the master directory, with the shared files D and E referenced from both.
5. In UNIX, if a disk block is 1 KB and each disk block number occupies 4 bytes, each block can hold 256 addresses.
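Building on the fragment above (1-KB blocks, 4-byte block numbers, hence 256 addresses per indirect block), the sketch below computes a maximum file size. The 10 direct addresses and the single/double/triple indirect levels follow the classic UNIX inode layout, which is an assumption here rather than something stated in the text.

    block_size = 1024
    addrs_per_block = block_size // 4     # 256 addresses per indirect block

    direct = 10                           # assumed classic layout
    single = addrs_per_block              # 256 blocks
    double = addrs_per_block ** 2         # 65,536 blocks
    triple = addrs_per_block ** 3         # 16,777,216 blocks

    max_blocks = direct + single + double + triple
    print(max_blocks, "blocks,", max_blocks * block_size // 2**20, "MB at most")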
Chapter 2: Process Management
Part One: Textbook Exercises (p. 81)
3. Why does concurrent execution of programs give rise to intermittent (stop-and-go) execution? (p. 36)
4. Why do programs lose their closure and reproducibility when they execute concurrently? (p. 37)
[Answer] When programs execute concurrently, several programs share the various resources of the system, so the state of those resources is changed by more than one program, and the execution of each program loses its closure. Having lost closure, it also loses reproducibility: because closure is gone, the results a program computes over repeated runs depend on the relative speeds of the concurrent programs, so its execution is no longer reproducible.
5. Why is the concept of a process introduced into operating systems? (p. 37) What effects does it bring?
[Answer] The concept of a process is introduced in order to allow multiple programs to execute concurrently. A traditional program cannot execute concurrently with other programs; only after a process has been created for it can it run concurrently with other programs (processes). This is because concurrently executing programs (that is, processes) run in a stop-and-go fashion: only when a process has been created for a program can its context be saved in its PCB when it stops, so that when it is next scheduled the CPU context can be restored from the PCB and execution can continue; a traditional program cannot satisfy this requirement. The benefit of creating processes is that multiple programs can run concurrently, which greatly improves resource utilization and system throughput. Managing processes, however, has its costs, including the memory occupied by process control blocks and the coordinating machinery, and the time spent on process switching, synchronization, and communication.
6. Compare processes and programs with respect to dynamism, concurrency, and independence. (p. 37)
[Answer] (1) Dynamism: since a process is the execution of a process entity, dynamism is the most basic characteristic of a process. Dynamism also shows in that a process "comes into being through creation, runs when scheduled, pauses when it cannot obtain a resource, and disappears when it is destroyed"; a process therefore has a lifetime. A program, by contrast, is merely an ordered collection of instructions stored on some medium; in itself it involves no notion of execution, so a program is a static entity.
(2) Concurrency: process concurrency means that several process entities reside in memory at the same time and can all be running over the same period of time. Concurrency is an important characteristic of processes and likewise an important characteristic of the operating system. The very purpose of introducing processes is to allow a program to execute concurrently with the programs of other processes, which programs by themselves cannot do.