操作系统概念精要原书第二版答案
操作系统教程课后习题参考答案 习题一 1.设计操作系统的主要目的是什么?设计操作系统的目的是:(1)从系统管理人员的观点来看,设计操作系统是为了合理地组织计算机工作流程,管理和分配计算机系统的硬件及软件资源,使之能为多个用户所共享。
因此,操作系统是计算机资源的管理者。
(2)从用户的观点来看,设计操作系统是为了给用户使用计算机提供一个良好的界面,使用户无需了解许多有关硬件和系统软件的细节,就能方便灵活地使用计算机。
2.操作系统的作用可表现在哪几个方面?(1)方便用户使用:操作系统通过提供用户与计算机之间的友好界面来方便用户使用。
(2)扩展机器功能:操作系统通过扩充硬件功能和提供新的服务来扩展机器功能。
(3)管理系统资源:操作系统有效地管理系统中的所有硬件和软件资源,使之得到充分利用。
(4)提高系统效率:操作系统合理组织计算机的工作流程,以改善系统性能和提高系统效率。
(5)构筑开放环境:操作系统遵循国际标准来设计和构造一个开放环境。
其含义主要是指:遵循有关国际工业标准和开放系统标准,支持体系结构的可伸缩性和可扩展性;支持应用程序在不同平台上的可移植性和互操作性。
3.试叙述脱机批处理和联机批处理的工作过程。(1)联机批处理工作过程:用户上机前,需向机房的操作员提交程序、数据和一个作业说明书,后者提供了用户标识、用户想使用的编译程序和所需的系统资源等基本信息。
这些资料必须变成穿孔信息(例如穿成卡片的形式),操作员把各用户提交的一批作业装到输入设备上(若输入设备是读卡机,则该批作业是一叠卡片),然后由监督程序控制送到磁带上。
以后,监督程序自动输入第一个作业的说明记录,若系统资源能满足其要求,则将该作业的程序、数据调入主存,并从磁带上调入所需要的编译程序。
编译程序将用户源程序翻译成目标代码,然后由连接装配程序把编译后的目标代码及所需的子程序装配成一个可执行的程序,接着启动执行。
计算完成后输出该作业的计算结果。
MODERN OPERATING SYSTEMS SECOND EDITIONPROBLEM SOLUTIONSANDREW S. TANENBAUM Vrije Universiteit Amsterdam, The NetherlandsPRENTICE HALLUPPER SADDLE RIVER, NJ 07458SOLUTIONS TO CHAPTER 1 PROBLEMS1. An operating system must provide the users with an extended (i.e., virtual) machine, and it must manage the I/O devices and other system resources.2. Multiprogramming is the rapid switching of the CPU between multiple processes in memory. It is commonly used to keep the CPU busy while one or more processes are doing I/O.3. Input spooling is the technique of reading in jobs, for example, from cards, onto the disk, so that when the currently executing processes are finished, there will be work waiting for the CPU. Output spooling consists of first copying printable files to disk before printing them, rather than printing directly as the output is generated. Input spooling on a personal computer is not very likely, but output spooling is.4. The prime reason for multiprogramming is to give the CPU something to do while waiting for I/O to complete. If there is no DMA, the CPU is fully occupied doing I/O, so there is nothing to be gained (at least in terms of CPU utilization) by multiprogramming. No matter how much I/O a program does, the CPU will be 100 percent busy. This of course assumes the major delay is the wait while data are copied. A CPU could do other work if the I/O were slow for other reasons (arriving on a serial line, for instance).5. Second generation computers did not have the necessary hardware to protect the operating system from malicious user programs.6. It is still alive. For example, Intel makes Pentium I, II, and III, and 4 CPUs with a variety of different properties including speed and power consumption. All of these machines are architecturally compatible. They differ only in price and performance, which is the essence of the family idea.7. A 25×80 character monochrome text screen requires a 2000-byte buffer. The 1024 ×768 pixel 24-bit color bitmap requires 2,359,296 bytes. In 1980 these two options would have cost $10 and $11,520, respectively. For current prices, check on how much RAM currently costs, probably less than $1/MB.8. Choices (a), (c), and (d) should be restricted to kernel mode.9. Personal computer systems are always interactive, often with only a single user. Mainframe systems nearly always emphasize batch or timesharing with many users. Protection is much more of an issue on mainframe systems, as is efficient use of all resources.10. Every nanosecond one instruction emerges from the pipeline. This meansthe machine is executing 1 billion instructions per second. It does not matter at all how many stages the pipeline has. A 10-stage pipeline with 1 nsec per2 PROBLEM SOLUTIONS FOR CHAPTER 1stage would also execute 1 billion instructions per second. All that matters is how often a finished instructions pops out the end of the pipeline.11. The manuscript contains 80 × 50 × 700 = 2.8 million characters. This is, of course, impossible to fit into the registers of any currently available CPU and is too big for a 1-MB cache, but if such hardware were available, the manuscript could be scanned in 2.8 msec from the registers or 5.8 msec from the cache. There are approximately 2700 1024-byte blocks of data, so scanning from the disk would require about 27 seconds, and from tape 2 minutes 7 seconds. Of course, these times are just to read the data. Processing and rewriting the data would increase the time.12. 
Logically, it does not matter if the limit register uses a virtual address or a physical address. However, the performance of the former is better. If virtual addresses are used, the addition of the virtual address and the base register can start simultaneously with the comparison and then can run in parallel. If physical addresses are used, the comparison cannot start until the addition is complete, increasing the access time.13. Maybe. If the caller gets control back and immediately overwrites the data, when the write finally occurs, the wrong data will be written. However, if the driver first copies the data to a private buffer before returning, then the caller can be allowed to continue immediately. Another possibility is to allow the caller to continue and give it a signal when the buffer may be reused, but this is tricky and error prone.14. A trap is caused by the program and is synchronous with it. If the program is run again and again, the trap will always occur at exactly the same position in the instruction stream. An interrupt is caused by an external event and its timing is not reproducible.15. Base = 40,000 and limit = 10,000. An answer of limit = 50,000 is incorrect for the way the system was described in this book. It could have been implemented that way, but doing so would have required waiting until the address + base calculation was completed before starting the limit check, thus slowing down the computer.16. The process table is needed to store the state of a process that is currently suspended, either ready or blocked. It is not needed in a single process system because the single process is never suspended.17. Mounting a file system makes any files already in the mount point directory inaccessible, so mount points are normally empty. However, a system administrator might want to copy some of the most important files normally located in the mounted directory to the mount point so they could be found in their normal path in an emergency when the mounted device was being checked or repaired.PROBLEM SOLUTIONS FOR CHAPTER 1 318. Fork can fail if there are no free slots left in the process table (and possibly if there is no memory or swap space left). Exec can fail if the file name given does not exist or is not a valid executable file. Unlink can fail if the file to be unlinked does not exist or the calling process does not have the authority to unlink it. 19. If the call fails, for example because fd is incorrect, it can return −1. It can also fail because the disk is full and it is not possible to write the number of bytes requested. On a correct termination, it always returns nbytes.20. It contains the bytes: 1, 5, 9, 2.21. Block special files consist of numbered blocks, each of which can be read or written independently of all the other ones. It is possible to seek to any block and start reading or writing. This is not possible with character special files. 22. System calls do not really have names, other than in a documentation sense. When the library procedure read traps to the kernel, it puts the number of the system call in a register or on the stack. This number is used to index into a table. There is really no name used anywhere. On the other hand, the name of the library procedure is very important, since that is what appears in the program.23. Yes it can, especially if the kernel is a message-passing system.24. As far as program logic is concerned it does not matter whether a call to a library procedure results in a system call. 
But if performance is an issue, if a task can be accomplished without a system call the program will run faster. Every system call involves overhead time in switching from the user context to the kernel context. Furthermore, on a multiuser system the operating system may schedule another process to run when a system call completes, further slowing the progress in real time of a calling process.25. Several UNIX calls have no counterpart in the Win32 API:Link: a Win32 program cannot refer to a file by an alternate name or see it in more than one directory. Also, attempting to create a link is a convenient way to test for and create a lock on a file.Mount and umount: a Windows program cannot make assumptions about standard path names because on systems with multiple disk drives the drive name part of the path may be different.Chmod: Windows programmers have to assume that every user can access every file.Kill: Windows programmers cannot kill a misbehaving program that is not cooperating.4 PROBLEM SOLUTIONS FOR CHAPTER 126. The conversions are straightforward:(a) A micro year is 10−6 × 365× 24× 3600= 31.536 sec. (b) 1000 meters or 1 km.(c) There are 240 bytes, which is 1,099,511,627,776 bytes. (d) It is 6 × 1024 kg. SOLUTIONS TO CHAPTER 2 PROBLEMS1. The transition from blocked to running is conceivable. Suppose that a process is blocked on I/O and the I/O finishes. If the CPU is otherwise idle, the process could go directly from blocked to running. The other missing transition,from ready to blocked, is impossible. A ready process cannot do I/O or anything else that might block it. Only a running process can block.2. You could have a register containing a pointer to the current process table entry. When I/O completed, the CPU would store the current machine state in the current process table entry. Then it would go to the interrupt vector for the interrupting device and fetch a pointer to another process table entry (the service procedure). This process would then be started up.3. Generally, high-level languages do not allow one the kind of access to CPU hardware that is required. For instance, an interrupt handler may be required to enable and disable the interrupt servicing a particular device, or to manipulate data within a process’stack area. Also, interrupt service routines must execute as rapidly as possible.4. There are several reasons for using a separate stack for the kernel. Two of them are as follows. First, you do not want the operating system to crash because a poorly written user program does not allow for enough stack space. Second, if the kernel leaves stack data in a user program’ s memory space upon return from a system call, a malicious user might be able to use this data to find out information about other processes.5. It would be difficult, if not impossible, to keep the file system consistent. Suppose that a client process sends a request to server process 1 to update a file. This process updates the cache entry in its memory. Shortly thereafter, another client process sends a request to server 2 to read that file. Unfortunately, if the file is also cached there, server 2, in its innocence, will return obsolete data. If the first process writes the file through to the disk after caching it, and server 2 checks the disk on every read to see if its cached copy is up-to-date, the system can be made to work, but it is precisely all these disk accesses that the caching system is trying to avoid.PROBLEM SOLUTIONS FOR CHAPTER 2 56. 
When a thread is stopped, it has values in the registers. They must be saved, just as when the process is stopped the registers must be saved. Timesharing threads is no different than timesharing processes, so each thread needs its own register save area.7. No. If a single-threaded process is blocked on the keyboard, it cannot fork.8. A worker thread will block when it has to read a Web page from the disk. If user-level threads are being used, this action will block the entire process, destroying the value of multithreading. Thus it is essential that kernel threads are used to permit some threads to block without affecting the others.9. Threads in a process cooperate. They are not hostile to one another. If yielding is needed for the good of the application, then a thread will yield. After all, it is usually the same programmer who writes the code for all of them.10. User-level threads cannot be preempted by the clock uless the whole process’ quantum has been used up. Kernel-level threads can be preempted individually. In the latter case, if a thread runs too long, the clock will interrupt the current process and thus the current thread. The kernel is free to pick adifferent thread from the same process to run next if it so desires.11. In the single-threaded case, the cache hits take 15 msec and cache misses take 90 msec. The weighted average is 2/3×15+ 1/3 ×90. Thus the mean request takes 40 msec and the server can do 25 per second. For a multithreaded server, all the waiting for the disk is overlapped, so every request takes 15 msec, and the server can handle 66 2/3 requests per second.12. Yes. If the server is entirely CPU bound, there is no need to have multiple threads. It just adds unnecessary complexity. As an example, consider a telephone directory assistance number (like 555-1212) for an area with 1 million people. If each (name, telephone number) record is, say, 64 characters, th e entire database takes 64 megabytes, and can easily be kept in the server’ s memory to provide fast lookup.13. The pointers are really necessary because the size of the global variable is unknown. It could be anything from a character to an array of floating-point numbers. If the value were stored, one would have to give the size to create3global, which is all right, but what type should the second parameter of set3global be, and what type should the value of read3global be?14. It could happen that the runtime system is precisely at the point of blocking or unblocking a thread, and is busy manipulating the scheduling queues. This would be a very inopportune moment for the clock interrupt handler to begin inspecting those queues to see if it was time to do thread switching, since they might be in an inconsistent state. One solution is to set a flag when the runtime system is entered. The clock handler would see this and set its own flag,6 PROBLEM SOLUTIONS FOR CHAPTER 2then return. When the runtime system finished, it would check the clock flag, see that a clock interrupt occurred, and now run the clock handler.15. Yes it is possible, but inefficient. A thread wanting to do a system call first sets an alarm timer, then does the call. If the call blocks, the timer returns control to the threads package. Of course, most of the time the call will not block, and the timer has to be cleared. Thus each system call that might block has to be executed as three system calls. If timers go off prematurely, all kinds of problems can develop. This is not an attractive way to build a threads package.16. 
The priority inversion problem occurs when a low-priority process is in its critical region and suddenly a high-priority process becomes ready and is scheduled. If it uses busy waiting, it will run forever. With user-level threads, it cannot happen that a low-priority thread is suddenly preempted to allow a high-priority thread run. There is no preemption. With kernel-level threads this problem can arise.17. Each thread calls procedures on its own, so it must have its own stack for the local variables, return addresses, and so on. This is equally true for user-level threads as for kernel-level threads.18. A race condition is a situation in which two (or more) processes are about to perform some action. Depending on the exact timing, one or the other goesfirst. If one of the processes goes first, everything works, but if another one goes first, a fatal error occurs.19. Yes. The simulated computer could be multiprogrammed. For example, while process A is running, it reads out some shared variable. Then a simulated clock tick happens and process B runs. It also reads out the same variable. Then it adds 1 to the variable. When process A runs, if it also adds one to the variable, we have a race condition.20. Yes, it still works, but it still is busy waiting, of course.21. It certainly works with preemptive scheduling. In fact, it was designed for that case. When scheduling is nonpreemptive, it might fail. Consider the case in which turn is initially 0 but process 1 runs first. It will just loop forever and never release the CPU.22. Yes it can. The memory word is used as a flag, with 0 meaning that no one is using the critical variables and 1 meaning that someone is using them. Put a 1 in the register, and swap the memory word and the register. If the register contains a 0 after the swap, access has been granted. If it contains a 1, access has been denied. When a process is done, it stores a 0 in the flag in memory. PROBLEM SOLUTIONS FOR CHAPTER 2 723. To do a semaphore operation, the operating system first disables interrupts. Then it reads the value of the semaphore. If it is doing a down and the semaphore is equal to zero, it puts the calling process on a list of blocked processes associated with the semaphore. If it is doing an up, it must check to see if any processes are blocked on the semaphore. If one or more processes are blocked, one of then is removed from the list of blocked processes and made runnable. When all these operations have been completed, interrupts can be enabled again.24. Associated with each counting semaphore are two binary semaphores, M, used for mutual exclusion, and B, used for blocking. Also associated with each counting semaphore is a counter that holds the number of up s minus the number of down s, and a list of processes blocked on that semaphore. To implement down, a process first gains exclusive access to the semaphores, counter, and list by doing a down on M. It then decrements the counter. If it is zero or more, it just does an up on M and exits. If M is negative, the process is put on the list of blocked processes. Then an up is done on M and a down is done on B to block the process. To implement up, first M is down ed to get mutual exclusion, and then the counter is incremented. If it is more than zero, no one was blocked, so all that needs to be done is to up M. If, however, the counter is now negative or zero, some process must be removed from the list. Finally, an up is done on B and M in that order.25. 
If the program operates in phases and neither process may enter the next phase until both are finished with the current phase, it makes perfect sense to use a barrier.26. With round-robin scheduling it works. Sooner or later L will run, and eventually it will leave its critical region. The point is, with priority scheduling, Lnever gets to run at all; with round robin, it gets a normal time slice periodically, so it has the chance to leave its critical region.27. With kernel threads, a thread can block on a semaphore and the kernel can run some other thread in the same process. Consequently, there is no problem using semaphores. With user-level threads, when one thread blocks on a semaphore, the kernel thinks the entire process is blocked and does not run it ever again. Consequently, the process fails.28. It is very expensive to implement. Each time any variable that appears in a predicate on which some process is waiting changes, the runtime system must re-evaluate the predicate to see if the process can be unblocked. With the Hoare and Brinch Hansen monitors, processes can only be awakened ona signal primitive.8 PROBLEM SOLUTIONS FOR CHAPTER 229. The employees communicate by passing messages: orders, food, and bags in this case. In UNIX terms, the four processes are connected by pipes. 30. It does not lead to race conditions (nothing is ever lost), but it is effectively busy waiting.31. If a philosopher blocks, neighbors can later see that he is hungry by checking his state, in test, so he can be awakened when the forks are available.32. The change would mean that after a philosopher stopped eating, neither of his neighbors could be chosen next. In fact, they would never be chosen. Suppose that philosopher 2 finished eating. He would run test for philosophers 1 and 3, and neither would be started, even though both were hungry and both forks were available. Similary, if philosopher 4 finished eating, philosopher 3 would not be started. Nothing would start him.33. Variation 1: readers have priority. No writer may start when a reader is active. When a new reader appears, it may start immediately unless a writer is currently active. When a writer finishes, if readers are waiting, they are all started, regardless of the presence of waiting writers. Variation 2: Writers have priority. No reader may start when a writer is waiting. When the last active process finishes, a writer is started, if there is one; otherwise, all the readers (if any) are started. Variation 3: symmetric version. When a reader is active, new readers may start immediately. When a writer finishes, a new writer has priority, if one is waiting. In other words, once we have started reading, we keep reading until there are no readers left. Similarly, once we have started writing, all pending writers are allowed to run.34. It will need nT sec.35. If a process occurs multiple times in the list, it will get multiple quanta per cycle. This approach could be used to give more important processes a larger share of the CPU. But when the process blocks, all entries had better be removed from the list of runnable processes.36. In simple cases it may be possible to determine whether I/O will be limiting by looking at source code. For instance a program that reads all its input filesinto buffers at the start will probably not be I/O bound, but a problem that reads and writes incrementally to a number of different files (such as a compiler) is likely to be I/O bound. 
If the operating system provides a facility such as the UNIX ps command that can tell you the amount of CPU time used by a program , you can compare this with total time to complete execution of the program. This is, of course, most meaningful on a system where you are the only user.37. For multiple processes in a pipeline, the common parent could pass to the operating system information about the flow of data. With this information PROBLEM SOLUTIONS FOR CHAPTER 2 9the OS could, for instance, determine which process could supply output to a process blocking on a call for input.38. The CPU efficiency is the useful CPU time divided by the total CPU time. When Q ≥ T, the basic cycle is for the process to run for T and undergo a process switch for S. Thus (a) and (b) have an efficiency of T/(S + T). When the quantum is shorter than T, each run of T will require T/Q process switches, wasting a time ST/Q. The efficiency here is thenT + ST/Q T 333333333which reduces to Q/(Q + S), which is the answer to (c). For (d), we just substitute Q for S and find that the efficiency is 50 percent. Finally, for (e), as Q → 0 the efficiency goes to 0.39. Shortest job first is the way to minimize average response time. 0 < X≤ 3: X , 3, 5, 6, 9.3 < X≤ 5: 3, X , 5, 6, 9.5 < X≤ 6: 3, 5, X , 6, 9.6 < X≤ 9: 3, 5, 6, X , 9.X >9: 3, 5, 6, 9, X.40. For round robin, during the first 10 minutes each job gets 1/5 of the CPU. At the end of 10 minutes, C finishes. During the next 8 minutes, each job gets 1/4 of the CPU, after which time D finishes. Then each of the three remaining jobs gets 1/3 of the CPU for 6 minutes, until B finishes, and so on. The finishing times for the five jobs are 10, 18, 24, 28, and 30, for an average of 22 minutes. For priority scheduling, B is run first. After 6 minutes it is finished. The other jobs finish at 14, 24, 26, and 30, for an average of 18.8 minutes. If the jobs run in the order A through E, they finish at 10, 16, 18, 22, and 30, for an average of 19.2 minutes. Finally, shortest job first yields finishing times of 2, 6, 12, 20, and 30, for an average of 14 minutes.41. The first time it gets 1 quantum. On succeeding runs it gets 2, 4, 8, and 15, so it must be swapped in 5 times.42. A check could be made to see if the program was expecting input and did anything with it. A program that was not expecting input and did not process it would not get any special priority boost.43. The sequence of predictions is 40, 30, 35, and now 25.44. The fraction of the CPU used is 35/50 + 20/100 + 10/200 + x/250. To beschedulable, this must be less than 1. Thus x must be less than 12.5 msec. 45. Two-level scheduling is needed when memory is too small to hold all the ready processes. Some set of them is put into memory, and a choice is made 10 PROBLEM SOLUTIONS FOR CHAPTER 2from that set. From time to time, the set of in-core processes is adjusted. This algorithm is easy to implement and reasonably efficient, certainly a lot better than say, round robin without regard to whether a process was in memory or not.46. The kernel could schedule processes by any means it wishes, but within each process it runs threads strictly in priority order. By letting the user process set the priority of its own threads, the user controls the policy but the kernel handles the mechanism.47. A possible shell script might beif [ ! 
–f numbers ]; then echo 0 > numbers; fi count=0 while (test $count != 200 ) docount=‘expr $count + 1 ‘ n=‘tail –1 numbers‘ expr $n + 1 >>numbers doneRun the script twice simultaneously, by starting it once in the background (using &) and again in the foreground. Then examine the file numbers . It will probably start out looking like an orderly list of numbers, but at some point it will lose its orderliness, due to the race condition created by running two copies of the script. The race can be avoided by having each copy of the script test for and set a lock on the file before entering the critical area, and unlocking it upon leaving the critical area. This can be done like this:if ln numbers numbers.lock then n=‘tail –1 numbers‘ expr $n + 1 >>numbersrm numbers.lock fiThis version will just skip a turn when the file is inaccessible, variant solutions could put the process to sleep, do busy waiting, or count only loops in which the operation is successful.SOLUTIONS TO CHAPTER 3 PROBLEMS1. In the U.S., consider a presidential election in which three or more candidates are trying for the nomination of some party. After all the primary electionsPROBLEM SOLUTIONS FOR CHAPTER 3 11are finished, when the delegates arrive at the party convention, it could happen that no candidate has a majority and that no delegate is willing to change his or her vote. This is a deadlock. Each candidate has some resources (votes) but needs more to get the job done. In countries with multiple political parties in the parliament, it could happen that each party supports a different version of the annual budget and that it is impossible to assemble a majority to pass the budget. This is also a deadlock.2. If the printer starts to print a file before the entire file has been received (this is often allowed to speed response), the disk may fill with other requests that can’ t be printed until the first file is done, but which use up disk space needed to receive the file currently being printed. If the spooler does not start to print a file until the entire file has been spooled it can reject a request that is too big.Starting to print a file is equivalent to reserving the printer; if the reservation is deferred until it is known that the entire file can be received, a deadlock of the entire system can be avoided. The user with the fil e that won’ t fit is still deadlocked of course, and must go to another facility that permits printing bigger files.3. The printer is nonpreemptable; the system cannot start printing another job until the previous one is complete. The spool disk is preemptable; you can delete an incomplete file that is growing too large and have the user send it later, assuming the protocol allows that4. Yes. It does not make any difference whatsoever.5. Yes, illegal graphs exist. We stated that a resource may only be held by a single process. An arc from a resource square to a process circle indicates that the process owns the resource. Thus a square with arcs going from it to two or more processes means that all those processes hold the resource, which violates the rules. Consequently, any graph in which multiple arcs leave a square and end in different circles violates the rules. Arcs from squares to squares or from circles to circles also violate the rules.6. 
A portion of all such resources could be reserved for use only by processes owned by the administrator, so he or she could always run a shell and programs needed to evaluate a deadlock and make decisions about which processes to kill to make the system usable again.7. Neither change leads to deadlock. There is no circular wait in either case.8. Voluntary relinquishment of a resource is most similar to recovery through preemption. The essential difference is that computer processes are not expected to solve such problems on their own. Preemption is analogous to the operator or the operating system acting as a policeman, overriding the normal rules individual processes obey.12 PROBLEM SOLUTIONS FOR CHAPTER 39. The process is asking for more resources than the system has. There is no conceivable way it can get these resources, so it can never finish, even if no other processes want any resources at all.10. If the system had two or more CPUs, two or more processes could run in parallel, leading to diagonal trajectories.11. Yes. Do the whole thing in three dimensions. The z-axis measures the number of instructions executed by the third process.12. The method can only be used to guide the scheduling if the exact instant at which a resource is going to be claimed is known in advance. In practice, this is rarely the case.13. A request from D is unsafe, but one from C is safe.14. There are states that are neither safe nor deadlocked, but which lead to deadlocked states. As an example, suppose we have four resources: tapes, plotters, scanners, and CD-ROMs, as in the text, and three processes competing for them. We could have the following situation:Has Needs Available。
第二章进程和线程作业答案 1,2,4,6,7,10,11,12,14,21 1.在操作系统中为什么要引入进程概念?它与程序的差别和关系是怎样的?答:由于多道程序在并发执行时共享系统资源,共同决定这些资源的状态,因此系统中各程序在执行过程中就出现了相互制约的新关系,程序的执行出现"走走停停"的新状态。
用程序这个静态概念已经不能如实反映程序并发执行过程中的这些特征。
为此,人们引入“进程(Process)”这一概念来描述程序动态执行过程的性质。
进程和程序是两个完全不同的概念。
进程与程序的主要区别:进程和程序之间存在密切的关系:进程的功能是通过程序的运行得以实现的,进程活动的主体是程序,进程不能脱离开具体程序而独立存在。
2.PCB的作用是什么它是怎样描述进程的动态性质的答:PCB是进程组成中最关键的部分。
每个进程有惟一的进程控制块;操作系统根据PCB对进程实施控制和管理,进程的动态、并发特征是利用PCB表现出来的;PCB是进程存在的唯一标志。
PCB中有表明进程状态的信息,该进程的状态包括运行态、就绪态和阻塞态,它利用状态信息来描述进程的动态性质。
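下面用一个极简的 C 结构体示意 PCB 中通常包含的信息,帮助理解"PCB 描述进程动态性质"这一点。字段命名均为假设性的教学示例,并非某一具体系统的真实定义:

/* 进程状态:与上文所述的运行、就绪、阻塞三种基本状态对应 */
typedef enum { READY, RUNNING, BLOCKED } proc_state;

/* 一个示意性的进程控制块(字段为教学示例,非真实系统定义) */
typedef struct pcb {
    int         pid;          /* 进程标识符:进程存在的唯一标志 */
    proc_state  state;        /* 进程当前状态,体现进程的动态、并发特征 */
    int         priority;     /* 调度优先级 */
    void       *context;      /* 现场信息(寄存器、程序计数器等)的保存区 */
    struct pcb *next;         /* 链入就绪队列或阻塞队列 */
} pcb;

操作系统对进程的控制与管理,就是通过读写这类结构中的状态、优先级和现场信息来完成的。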
4. 用如图2-26所示的进程状态转换图能够说明有关处理机的大量内容。
试回答:①什么事件引起每次显著的状态变迁?②下述状态变迁因果关系能否发生?为什么?(A)2→1 (B)3→2 (C)4→1 答:(1)就绪→运行:CPU空闲,就绪态进程被调度程序选中。运行→阻塞:运行态进程因某种条件未满足而放弃CPU的占用。
阻塞→就绪:阻塞态进程所等待的事件发生了。
运行→就绪:正在运行的进程用完了本次分配给它的时间片(2)下述状态变迁(A)2→1,可以。
运行进程用完了本次分配给它的时间片,让出CPU,从就绪队列中选一个进程投入运行。
(B)3→2,不可以。
任何时候一个进程只能处于一种状态,它既然由运行态变为阻塞态,就不能再变为就绪态。
(C)4→1,可以。
某一阻塞态进程等到的事件出现了,而且此时就绪队列为空,该进程进入就绪队列后马上又被调度运行。
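为直观说明经典三态模型中哪些状态变迁是允许的(例如阻塞态不能直接变为运行态,须先回到就绪态),下面给出一个示意性的 C 片段,仅为教学示例,不对应任何具体系统:

#include <stdbool.h>

typedef enum { READY, RUNNING, BLOCKED } proc_state;

/* 三态模型中允许的直接变迁:就绪→运行、运行→就绪、运行→阻塞、阻塞→就绪 */
static bool transition_allowed(proc_state from, proc_state to) {
    switch (from) {
    case READY:   return to == RUNNING;                 /* 被调度程序选中 */
    case RUNNING: return to == READY || to == BLOCKED;  /* 时间片用完 / 等待事件 */
    case BLOCKED: return to == READY;                   /* 等待的事件发生 */
    }
    return false;
}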
第1章操作系统概论(1) 试说明什么是操作系统,它具有什么特征?其最基本特征是什么?解:操作系统就是一组管理与控制计算机软硬件资源并对各项任务进行合理化调度,且附加了各种便于用户操作的工具的软件层次。
现代操作系统都具有并发、共享、虚拟和异步特性,其中并发性是操作系统的最基本特征,也是最重要的特征,其它三个特性均基于并发性而存在。
(2) 设计现代操作系统的主要目标是什么?解:现代操作系统的设计目标是有效性、方便性、开放性、可扩展性等特性。
其中有效性指的是OS应能有效地提高系统资源利用率和系统吞吐量。
方便性指的是配置了OS后的计算机应该更容易使用。
这两个性质是操作系统最重要的设计目标。
开放性指的是OS应遵循世界标准规范,如开放系统互连OSI国际标准。
可扩展性指的是OS应提供良好的系统结构,使得新设备、新功能和新模块能方便地加载到当前系统中,同时也要提供修改老模块的可能,这种对系统软硬件组成以及功能的扩充保证称为可扩展性。
(3) 操作系统的作用体现在哪些方面?解:现代操作系统的主要任务就是维护一个优良的运行环境,以便多道程序能够有序地、高效地获得执行,而在运行的同时,还要尽可能地提高资源利用率和系统响应速度,并保证用户操作的方便性。
因此操作系统的基本功能应包括处理器管理、存储器管理、设备管理和文件管理。
此外,为了给用户提供一个统一、方便、有效的使用系统能力的手段,现代操作系统还需要提供一个友好的人机接口。
在互联网不断发展的今天,操作系统中通常还具备基本的网络服务功能和信息安全防护等方面的支持。
(4) 试说明实时操作系统和分时操作系统在交互性、及时性和可靠性方面的异同。
解:交互性:分时系统能够使用户和系统进行人-机对话。
实时系统也具有交互性,但人与系统的交互仅限于访问系统中某些特定的专用服务程序。
及时性:分时系统的响应时间是以人能够接受的等待时间为标准,而实时控制系统对响应时间要求比较严格,它是以控制过程或信息处理中所能接受的延迟为标准。可靠性:分时系统要求系统可靠,而实时系统对可靠性的要求更高,因为任何差错都可能带来巨大的经济损失甚至灾难性的后果。
第一章操作系统引论1.设计现代OS的主要目标是什么?答:(1)有效性(2)方便性(3)可扩充性(4)开放性2.OS的作用可表现在哪几个方面?答:(1)OS作为用户与计算机硬件系统之间的接口(2)OS作为计算机系统资源的管理者(3)OS实现了对计算机资源的抽象3.为什么说OS实现了对计算机资源的抽象?答:OS首先在裸机上覆盖一层I/O设备管理软件,实现了对计算机硬件操作的第一层次抽象;在第一层软件上再覆盖文件管理软件,实现了对硬件资源操作的第二层次抽象。
OS 通过在计算机硬件上安装多层系统软件,增强了系统功能,隐藏了对硬件操作的细节,由它们共同实现了对计算机资源的抽象。
4.试说明推动多道批处理系统形成和发展的主要动力是什么?答:主要动力来源于四个方面的社会需求与技术发展:(1)不断提高计算机资源的利用率;(2)方便用户;(3)器件的不断更新换代;(4)计算机体系结构的不断发展。
5.何谓脱机I/O和联机I/O?答:脱机I/O 是指事先将装有用户程序和数据的纸带或卡片装入纸带输入机或卡片机,在外围机的控制下,把纸带或卡片上的数据或程序输入到磁带上。
该方式下的输入输出由外围机控制完成,是在脱离主机的情况下进行的。
而联机I/O方式是指程序和数据的输入输出都是在主机的直接控制下进行的。
6.试说明推动分时系统形成和发展的主要动力是什么?答:推动分时系统形成和发展的主要动力是更好地满足用户的需要。
主要表现在:CPU 的分时使用缩短了作业的平均周转时间;人机交互能力使用户能直接控制自己的作业;主机的共享使多用户能同时使用同一台计算机,独立地处理自己的作业。
7.实现分时系统的关键问题是什么?应如何解决?答:关键问题是当用户在自己的终端上键入命令时,系统应能及时接收并及时处理该命令,在用户能接受的时延内将结果返回给用户。
解决方法:针对及时接收问题,可以在系统中设置多路卡,使主机能同时接收用户从各个终端上输入的数据;为每个终端配置缓冲区,暂存用户键入的命令或数据。
1、管理对象是内存及作为内存的扩展和延伸的后援存储器(外存)。
基本任务:a.按某种算法分配和回收存储空间(见本列表后的分配算法示意)。
b.实现逻辑地址到物理地址的转换。
c.由软硬件共同实现程序间的相互保护。
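针对上面 a 条"按某种算法分配和回收存储空间",下面给出一个极简的首次适应(first-fit)空闲分区分配的 C 示意。数据结构、函数名和 1MB 的总空间都是假设性的教学示例,且未给出回收与相邻空闲区合并的部分:

#include <stddef.h>

/* 空闲分区表的一个表项:起始地址、长度、是否空闲 */
typedef struct { size_t start; size_t len; int free; } region;

#define NREGION 8
static region table[NREGION] = { { 0, 1 << 20, 1 } };  /* 假设共 1MB 可分配空间 */
static int nregion = 1;

/* 首次适应:顺序查找第一个足够大的空闲分区,从其头部划出 size 字节 */
long ff_alloc(size_t size) {
    for (int i = 0; i < nregion; i++) {
        if (table[i].free && table[i].len >= size) {
            long addr = (long)table[i].start;
            table[i].start += size;
            table[i].len   -= size;
            return addr;              /* 返回分得区域的起始地址 */
        }
    }
    return -1;                        /* 没有足够大的空闲分区 */
}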
2、程序中通过符号名称来调用、访问子程序和数据,这些符号名的集合被称为“名字空间”,简称名空间。
当程序经过编译或者汇编以后,形成了一种由机器指令组成的集合,被称为目标程序,或者相对目标程序。
这个目标程序指令的顺序都以0为一个参考地址,这些地址被称为相对地址,或者逻辑地址,有的系统也称为虚拟地址。
相对地址的集合称为相对地址空间,也称虚拟地址空间。
目标程序最后要被装入系统内存才能运行。
目标程序被装入的用户存储区的起始地址是一个变动值,与系统对存储器的使用有关,也与分配给用户使用的实际大小有关。
要把以0作为参考地址的目标程序装入一个以某个地址为起点的用户存储区,需要进行一个地址的对应转换,这种转换在操作系统中称为地址重定位。
也就是说将目标地址中以0作为参考点的指令序列,转换为以一个实际的存储器单元地址为基准的指令序列,从而才成为一个可以由CPU调用执行的程序,它被称为绝对目标程序或者执行程序。
这个绝对的地址集合也被称为绝对地址空间,或物理地址空间。
用户程序的装入,是一个从外存空间将用户已经编译好的目标程序,装入内存的过程。
在这个过程中,要进行将相对地址空间的目标程序转换为绝对地址空间的可执行程序,这个地址变换的过程称为地址重定位,也称地址映射,或者地址映象。
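上文所述的地址重定位,本质上就是"物理地址 = 装入起始地址(基址) + 相对地址",并可结合限长检查实现程序间的保护。下面是一个示意性的 C 片段,寄存器的表示方式和具体数值均为假设:

#include <stdio.h>

/* 动态重定位示意:基址寄存器 + 限长寄存器 */
typedef struct { unsigned long base; unsigned long limit; } reloc_regs;

/* 把相对(逻辑)地址转换为绝对(物理)地址;越界时返回 -1 表示地址非法 */
long translate(reloc_regs r, unsigned long rel_addr) {
    if (rel_addr >= r.limit)
        return -1;                        /* 超出本程序地址空间,触发保护 */
    return (long)(r.base + rel_addr);     /* 物理地址 = 基址 + 相对地址 */
}

int main(void) {
    reloc_regs r = { 40000, 10000 };      /* 假设程序装入 40000 处、长 10000 */
    printf("%ld\n", translate(r, 1234));  /* 输出 41234 */
    printf("%ld\n", translate(r, 20000)); /* 越界,输出 -1 */
    return 0;
}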
覆盖:是利用程序内部结构的特征,以较小的内存空间运行较大程序的技术。
交换:是指内外存之间交换信息。
3、一旦一个区域分配给一个作业后,其剩余空间不能再用(内零头或内碎片),另外当一区域小于当前所有作业的大小时,便整个弃置不用(外零头或外碎片)。
4、(1)2.4us (2)1.5us 5、为了给大作业(其地址空间超过主存可用空间)用户提供方便,使他们摆脱对主存和外存的分配和管理。
1.1 在多道程序和分时环境中,多个用户同时共享一个系统,这种情况会导致多种安全问题。
a. 列出此类的问题 b. 在一个分时机器中,能否确保像在专用机器上一样的安全程度?并解释之。
Answer:a. 窃取或者复制用户的程序或数据;没有合理的预算来使用资源(CPU、内存、磁盘空间、外围设备)。b. 应该不行,因为人类设计的任何保护机制都不可避免地会被另外的人破译,而且要很自信地认为程序本身的实现是正确的,也是一件困难的事。
1.2资源的利用问题在各种各样的操作系统中出现。
试例举在以下的环境中哪种资源必须被严格地管理。
(a)大型电脑或迷你电脑系统 (b)与服务器相连的工作站 (c)手持电脑 Answer:(a)大型电脑或迷你电脑系统:内存和CPU资源、外存、网络带宽。(b)与服务器相连的工作站:内存和CPU资源。(c)手持电脑:功率消耗、内存资源。1.3 在什么情况下,一个用户使用一个分时系统比使用一台个人计算机或单用户工作站更好?Answer:当另外使用分时系统的用户较少、任务十分巨大、硬件速度很快时,使用分时系统有意义。
此时可以充分利用该系统的能力来解决用户的问题。
比起个人电脑,问题可以被更快地解决。
还有一种可能发生的情况是,同一时间有许多其他用户也在使用资源。
当作业足够小、能在个人计算机上合理地运行,并且个人计算机的性能足以令用户满意地运行程序时,个人计算机是最好的。
1.4 在下面列出的三个功能中,哪个功能在以下两种环境下需要操作系统的支持:(a)手持装置 (b)实时系统?功能:(a)批处理程序 (b)虚拟存储器 (c)分时 Answer:对于实时系统来说,操作系统需要以一种公平的方式支持虚拟存储器和分时。
对于手持系统,操作系统需要提供虚拟存储器,但是不需要提供分时。
批处理程序在两种环境中都是非必需的。
1.5 描述对称多处理(SMP)和非对称多处理之间的区别。
多处理系统的三个优点和一个缺点是什么?Answer:SMP意味着所有处理器都对等,而且I/O可以在任何处理器上运行。
1、高级调度也叫作业调度(或宏观调度),是将已进入系统并处于后备状态的作业按某种算法选择一个或一批,为其建立进程,让其进入主机。
中级调度负责进程在内存和辅存对换区之间的对换。
低级调度也叫进程调度(或微观调度),我们所说的CPU调度,主要就是指的这一级调度。
2、(1)不一定。
(2)可能。
(3)不一定。
3、CPU调度使得多个进程能有条不紊地共享一个CPU。
而且,由于调度的速度很快,使每个用户产生一个错觉,就好象他们每人都有一个专用CPU。
这就把“物理上的一个变成了逻辑上的多个”——为每个用户提供了一个虚拟处理机。
功能:保留原运行进程的现场信息;分配CPU;为新选中进程恢复现场。
4、(1)FCFS:P1---P2---P3---P4---P5 SJF:P2---P4---P3---P5---P1 非剥夺优先级:P4---P1---P3---P5---P2 (2)FCFS:(10+11+13+14+19)/5=13.4 SJF:(1+2+4+9+19)/5=7 非剥夺优先级:(1+11+13+18+19)/5=12.4(本节末尾给出一个按此思路计算平均周转时间的示意程序) 5、剥夺方式是在现运行进程正在执行的CPU周期尚未结束之前,系统有权按某种原则剥夺它的CPU并把CPU分给另一进程。
剥夺CPU的原则有很多,视不同的调度算法而异。
其中最主要的是优先权原则和时间片原则。
在优先权原则下,只要在就绪队列中出现了比现运行进程优先权更高的进程,便立即剥夺现行进程的CPU并分给优先权最高的进程。
时间片原则是,当时间片到时后,便立即重新进行CPU调度。
非剥夺方式是,一旦CPU分给某进程的一个CPU周期,除非该周期到期并主动放弃,否则系统不得以任何方式剥夺现行进程的CPU。
6、引起进程调度的原因:a.进程自动放弃CPUi)进程运行结束ii)执行P、V操作等原语将自己封锁iii)进程提出I/O请求而等待完成b.CPU被抢占i)时间片用完ii)有更高优先级进程进入就绪状态7、不相同。
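结合本节第4题:若所有进程在时刻 0 同时到达,则周转时间等于完成时刻,按调度顺序累加运行时间即可。由答案中 FCFS 的完成时刻 10、11、13、14、19 可反推各进程运行时间约为 10、1、2、1、5,下面用这一假设数据给出一个计算平均周转时间的 C 示意(仅为演示计算方法,并非教材程序):

#include <stdio.h>

/* 按给定的执行顺序计算平均周转时间(假设所有作业在时刻 0 同时到达) */
double avg_turnaround(const int burst[], const int order[], int n) {
    double finish = 0, sum = 0;
    for (int i = 0; i < n; i++) {
        finish += burst[order[i]];   /* 第 i 个被调度作业的完成时刻 */
        sum    += finish;            /* 同时到达时,周转时间 = 完成时刻 */
    }
    return sum / n;
}

int main(void) {
    int burst[5] = { 10, 1, 2, 1, 5 };        /* P1..P5 的运行时间(推测值) */
    int fcfs[5]  = { 0, 1, 2, 3, 4 };          /* FCFS:按到达次序 */
    int sjf[5]   = { 1, 3, 2, 4, 0 };          /* SJF:按运行时间从短到长 */
    printf("FCFS 平均周转时间: %.2f\n", avg_turnaround(burst, fcfs, 5));  /* 13.40 */
    printf("SJF  平均周转时间: %.2f\n", avg_turnaround(burst, sjf, 5));   /* 7.00 */
    return 0;
}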
操作系统概念课后习题答案操作系统是计算机系统中的一个关键组成部分,负责管理和协调计算机硬件和软件资源的分配与调度。
在学习操作系统的过程中,解决课后习题是提高对操作系统概念理解的重要方法之一。
本篇文章将为您提供一些常见操作系统概念课后习题的答案,并对相应的知识点进行解析。
一、选择题1. 操作系统的主要功能是()。
a) 调度进程b) 管理内存c) 控制设备d) 以上都是答案:d) 以上都是解析:操作系统的主要功能包括调度进程、管理内存以及控制设备等。
它扮演着协调和管理计算机系统中各种资源的角色。
2. 在多道程序环境下,()是操作系统的核心功能。
a) 进程管理b) 文件管理c) 内存管理d) 网络管理答案:a) 进程管理解析:在多道程序环境下,操作系统需要管理多个进程的创建、调度、同步和通信等操作。
进程管理是操作系统的核心功能之一。
3. 操作系统中的分时系统是指()。
a) 多个任务同时执行b) 多个任务按时间片轮流执行c) 多个任务按优先级执行d) 多个任务按照先来先服务原则执行答案:b) 多个任务按时间片轮流执行解析:分时系统是一种多道程序设计方式,多个任务按照时间片的方式轮流执行。
每个任务都可以获得操作系统的部分处理时间,以实现并发执行的效果。
二、填空题1. 进程是程序的()。
答案:执行实例或执行过程解析:进程是程序在计算机上执行的实例或执行过程,它包括正在运行的程序的相关信息以及所需的资源。
2. 死锁是指两个或多个进程因为争夺资源而无法继续运行的状态,具有()、不可剥夺和循环等特性。
答案:互斥、占有并等待、不可剥夺和循环等特性解析:死锁是指两个或多个进程因为互相争夺资源而陷入的无法继续运行的状态。
其特性包括互斥、占有并等待、不可剥夺和循环等。
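上面提到死锁的互斥、占有并等待、不可剥夺和循环等待等特性。下面给出一个刻意制造循环等待的 C/pthread 示意:两个线程以相反的顺序申请两把锁,运行时很可能相互等待而死锁。它只用于演示,实际程序应按统一顺序加锁来避免:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg) {            /* 线程 1:先 a 后 b */
    pthread_mutex_lock(&a);
    sleep(1);                            /* 增大与线程 2 交错的概率 */
    pthread_mutex_lock(&b);              /* 等待线程 2 释放 b,形成循环等待 */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg) {            /* 线程 2:先 b 后 a,顺序相反 */
    pthread_mutex_lock(&b);
    sleep(1);
    pthread_mutex_lock(&a);
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);               /* 若发生死锁,程序将停在这里 */
    pthread_join(y, NULL);
    puts("本次未发生死锁(交错时机不同)");
    return 0;
}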
三、简答题1. 请解释进程和线程之间的区别。
答案:进程是程序在计算机上执行的实例或执行过程,拥有自己的独立地址空间和系统资源。
而线程是在进程内部运行的较小的执行单位,共享相同的地址空间和系统资源。
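为说明"同一进程内的线程共享地址空间"这一区别,下面给出一个简短的 C/pthread 示意:两个线程对同一个全局变量累加,主线程最后能看到累加结果;若换成两个进程,各自拥有独立地址空间,就不会如此。为避免引入竞争条件,这里用互斥锁保护计数器:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                         /* 进程内所有线程共享的全局变量 */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);                  /* 互斥访问共享变量 */
        counter++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);          /* 输出 200000:两个线程共享同一地址空间 */
    return 0;
}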
1、基本任务①缓冲区管理②地址转换和设备驱动③I/O调度:为I/O请求分配外设、通道、控制器等④中断管理2、在循环测试方式中,因为外设完全是一个被动的控制对象,CPU必须对之进行连续的监视。
为改变这种局面,首先是增加外设的主动性——每当外设传输结束时,能主动向CPU 报告,此即引入中断的概念。
为了把CPU从繁忙的杂务中解放出来,I/O设备的管理不再依赖于CPU,而应建立起自己的一套管理机构,这就产生了“通道”。
根据信息交换方式,通道可分为以下三种类型:字节多路通道、选择通道、成组多路通道。3、简单地说,缓冲技术主要解决在系统某些位置上信息的到达率与离去率不匹配的问题。
缓冲技术是在这些位置上设置能存贮信息的缓冲区,在速率不匹配的二者之间起平滑作用。
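缓冲区在到达率与离去率不匹配的两端之间起平滑作用,最常见的实现是环形缓冲区。下面是一个单生产者/单消费者环形缓冲区的 C 示意(不含同步原语,仅演示数据结构;多线程使用时还需配合锁或信号量):

#include <stdbool.h>

#define BUF_SIZE 8

typedef struct {
    int data[BUF_SIZE];
    int in;       /* 下一个写入位置 */
    int out;      /* 下一个读出位置 */
    int count;    /* 当前缓冲的数据个数 */
} ringbuf;

bool rb_put(ringbuf *rb, int x) {               /* 到达:生产者放入数据 */
    if (rb->count == BUF_SIZE) return false;    /* 缓冲区满 */
    rb->data[rb->in] = x;
    rb->in = (rb->in + 1) % BUF_SIZE;
    rb->count++;
    return true;
}

bool rb_get(ringbuf *rb, int *x) {              /* 离去:消费者取走数据 */
    if (rb->count == 0) return false;           /* 缓冲区空 */
    *x = rb->data[rb->out];
    rb->out = (rb->out + 1) % BUF_SIZE;
    rb->count--;
    return true;
}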
4、常用的设备分配技术有:独占:固定地将设备分给一个用户。
共享:将设备分给若干用户共享使用。
虚拟:用共享设备去模拟独占设备,以达到共享、快速的效果。
5、引入Spooling系统后,就把一个可共享的磁盘装置,改造成为若干台I/O设备(虚拟输入、输出设备)。
当需输入时,输入程序就把输入设备上的作业传输到输入井中,并由作业控制块进行排队等候,再由作业调度程序将输入井中作业调入内存运行。
运行完毕由文件系统将结果组成文件放入输出井中,以后就由Spooling输出程序将结果从相应设备输出。
6、I/O启动与结束当某一进程在CPU上运行而提出I/O请求时,则通过系统调用进入操作系统,操作系统首先为之分配通道和设备,然后按照I/O请求编制通道程序,并存入内存。
然后将通道程序的起始地址送入CAW(通道地址寄存器),接着启动I/O。
CPU发出启动I/O指令之后,通道工作过程为:首先根据通道地址寄存器(CAW),从内存取出通道命令送入通道控制字寄存器(CCW),同时,修改CAW。
根据CCW中命令进行实际I/O操作。
执行完毕后,如还有命令则转回去继续进行,否则接着往下进行。
最后,发I/O结束中断向CPU汇报工作完成。
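上述"从CAW取CCW、执行、取下一条,结束后发I/O中断"的通道工作流程,可以用下面这个高度简化的 C 模拟来帮助理解。结构体与命令码均为假设性的教学示例,与任何真实通道的指令格式无关:

#include <stdio.h>

typedef enum { CCW_READ, CCW_WRITE, CCW_STOP } ccw_op;
typedef struct { ccw_op op; int dev; } ccw;      /* 简化的通道命令字 */

/* 模拟通道:从 caw 指向的通道程序逐条取命令执行,遇 CCW_STOP 结束并"发中断" */
void channel_run(const ccw *caw) {
    for (const ccw *p = caw; ; p++) {            /* 每执行一条,相当于修改 CAW */
        if (p->op == CCW_STOP) break;
        printf("通道对设备 %d 执行 %s\n", p->dev,
               p->op == CCW_READ ? "读" : "写");
    }
    printf("通道程序执行完毕,发 I/O 结束中断\n");
}

int main(void) {
    ccw prog[] = { { CCW_READ, 1 }, { CCW_WRITE, 2 }, { CCW_STOP, 0 } };
    channel_run(prog);
    return 0;
}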
操作系统教程第二版课后答案【篇一:《操作系统教程》(第四版)课后答案】目录:第一章 操作系统概述;第二章 处理器管理;第三章 并发进程;第四章 存储管理;第五章 设备管理;第六章 文件管理;第七章 操作系统的安全与保护;第八章 网络和分布式操作系统。【篇二:操作系统教程(第四版)课后习题答案】1、有一台计算机,具有1MB内存,操作系统占用200KB,每个用户进程各占200KB。
如果用户进程等待i/o 的时间为80 % ,若增加1mb 内存,则cpu 的利用率提高多少?答:设每个进程等待i/o 的百分比为p ,则n 个进程同时等待刀o的概率是pn ,当n 个进程同时等待i/o 期间cpu 是空闲的,故cpu 的利用率为1-pn。
由题意可知,除去操作系统,内存还能容纳4 个用户进程,由于每个用户进程等待i/o的时间为80 % , 故:cpu利用率=l-(80%)4 = 0.59若再增加1mb 内存,系统中可同时运行9 个用户进程,此时:cpu 利用率=l-(1-80%)9 = 0.87故增加imb 内存使cpu 的利用率提高了47 % :87 %/59 %=147 %147 %-100 % = 47 %2 一个计算机系统,有一台输入机和一台打印机,现有两道程序投入运行,且程序a 先开始做,程序b 后开始运行。
程序a 的运行轨迹为:计算50ms 、打印100ms 、再计算50ms 、打印100ms ,结束。
程序b 的运行轨迹为:计算50ms 、输入80ms 、再计算100ms ,结束。
试说明(1 )两道程序运行时,cpu有无空闲等待?若有,在哪段时间内等待?为什么会等待?( 2 )程序a 、b 有无等待cpu 的情况?若有,指出发生等待的时刻。
答:画出两道程序并发执行图(图略)。(1)两道程序运行期间,CPU存在空闲等待,时间为100ms至150ms之间。(2)程序a无等待现象,但程序b有等待。
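上面【篇二】第1题用"利用率 = 1 - p^n"(p 为每个进程等待 I/O 的时间比例,n 为进程数)进行估算,下面用一个简短的 C 程序复算题中数据(编译时需链接数学库 -lm):

#include <stdio.h>
#include <math.h>

/* n 个进程、每个进程 I/O 等待比例为 p 时的 CPU 利用率估计:1 - p^n */
double cpu_util(double p, int n) {
    return 1.0 - pow(p, n);
}

int main(void) {
    printf("4 个进程: %.2f\n", cpu_util(0.8, 4));   /* 约 0.59 */
    printf("9 个进程: %.2f\n", cpu_util(0.8, 9));   /* 约 0.87 */
    printf("提高比例: %.0f%%\n",
           (cpu_util(0.8, 9) / cpu_util(0.8, 4) - 1) * 100);  /* 约 47% */
    return 0;
}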
计算机操作系统第二版答案习题一1. 什么是操作系统?它的主要功能是什么?答:操作系统是用来管理计算机系统的软、硬件资源,合理地组织计算机的工作流程,以方便用户使用的程序集合;其主要功能有进程管理、存储器管理、设备管理和文件管理功能。
2. 什么是多道程序设计技术?多道程序设计技术的主要特点是什么?答:多道程序设计技术是把多个程序同时放入内存,使它们共享系统中的资源;特点:多道,即计算机内存中同时存放多道相互独立的程序;宏观上并行,是指同时进入系统的多道程序都处于运行过程中;微观上串行,是指在单处理机环境下,内存中的多道程序轮流占有CPU,交替执行。
3. 批处理系统是怎样的一种操作系统?它的特点是什么?答:批处理操作系统是一种基本的操作系统类型。
在该系统中,用户的作业被成批的输入到计算机中,然后在操作系统的控制下,用户的作业自动地执行;特点是:资源利用率高、系统吞吐量大、平均周转时间长、无交互能力。
4. 什么是分时系统?什么是实时系统?试从交互性、及时性、独立性、多路性和可靠性几个方面比较分时系统和实时系统。
答:分时系统:一个计算机和许多终端设备连接,每个用户可以通过终端向计算机发出指令,请求完成某项工作,在这样的系统中,用户感觉不到其他用户的存在,好像独占计算机一样。
实时系统:对外部输入的信息,实时系统能够在规定的时间内处理完毕并作出反应。
比较:交互性:实时系统具有交互性,但人与系统的交互,仅限于访问系统中某些特定的专用服务程序。
它不像分时系统那样向终端用户提供数据处理、资源共享等服务。
分时系统的交互性则要求系统具有连续人机对话的能力,也就是说,在交互的过程中要对用户的输入有一定的记忆和进一步推断的能力。
及时性:实时信息处理系统对及时性的要求与分时系统类似,都以人们能够接受的等待时间来确定。
而实时控制系统则对及时性要求更高。
独立性:实时系统与分时系统一样具有独立性。
每个终端用户提出请求时,是彼此独立的工作、互不干扰。
计算机系统结构清华第2版习题解答
目录
1.1 第一章(P33):1.7-1.9(透明性概念),1.12-1.18(Amdahl定律),1.19、1.21、1.24(CPI/MIPS)
1.2 第二章(P124):2.3、2.5、2.6(浮点数性能),2.13、2.15(指令编码)
1.3 第三章(P202):3.3(存储层次性能),3.5(并行主存系统),3.15-3.15加1题(堆栈模拟),3.19中(3)(4)(6)(8)问(地址映象/替换算法--实存状况图)
1.4 第四章(P250):4.5(中断屏蔽字表/中断过程示意图),4.8(通道流量计算/通道时间图)
1.5 第五章(P343):5.9(流水线性能/时空图),5.15(2种调度算法)
1.6 第六章(P391):6.6(向量流水时间计算),6.10(Amdahl定律/MFLOPS)
1.7 第七章(P446):7.3、7.29(互连函数计算),7.6-7.14(互连网性质),7.4、7.5、7.26(多级网寻径算法),7.27(寻径/选播算法)
1.8 第八章(P498):8.12(SISD/SIMD算法)
1.9 第九章(P562):9.18(SISD/多功能部件/SIMD/MIMD算法)
(注:每章可选1-2个主要知识点,每个知识点可只选1题。有下划线者为推荐的主要知识点。)
例、习题
2.1 第一章(P33) 例1.1,p10:假设将某系统的某一部件的处理速度加快到10倍,但该部件的原处理时间仅为整个运行时间的40%,则采用加快措施后能使整个系统的性能提高多少?
解:由题意可知:Fe=0.4,Se=10,根据Amdahl定律:Sn = T0/Tn = 1/((1-Fe)+Fe/Se) = 1/(0.6+0.4/10) = 1/0.64 ≈ 1.56
例1.2,p10:采用哪种实现技术来求浮点数平方根FPSQR的操作对系统的性能影响较大。
操作系统第二版罗宇_课后答案 操作系统部分课后习题答案 1.2 操作系统以什么方式组织用户使用计算机?答:操作系统以进程的方式组织用户使用计算机。
用户所需要完成的各种任务必须由相应的程序来表达。
为了实现用户的任务,必须使相应功能的程序执行。
而进程就是指程序的运行,操作系统的进程调度程序决定CPU在各进程间的切换。
操作系统为用户提供进程创建和结束等系统调用功能,使用户能创建新进程。
操作系统在初始化后,可以为每个可能的系统用户创建第一个用户进程,用户的其他进程则可以由父进程通过"进程创建"系统调用进行创建。
1.4 早期监督程序(monitor)的功能是什么?答:早期监督程序的功能是替代系统操作员的部分工作,自动控制作业的运行。
监督程序首先把第一道作业调入主存,并启动该作业。
运行结束后,再把下一道作业调入主存启动运行。
它如同一个系统操作员,负责批作业的I/O,并根据作业控制说明书自动地以单道串行的方式控制作业运行,同时在程序运行过程中通过提供各种系统调用,控制计算机资源的使用。
1.7试述多道程序设计技术的基本思想。
为什么采用多道程序设计技术可以提高资源利用率?答:多道程序设计技术的基本思想是,在主存中同时保持多道程序,主机以交替的方式同时处理多道程序。
从宏观上看,主机内同时保持和处理着若干道已开始运行但尚未结束的程序。
从微观上看,某一时刻处理机只运行某一道程序。
可以提高资源利用率的原因:由于任何一道作业的运行总是交替地串行使用cpu、外设等资源,即使用一段时间的cpu,然后使用一段时间的i/o设备,由于采用多道程序设计技术,加之对多道程序实施合理的运行调度,则可以实现cpu和i/o设备的高度并行,可以大大提高cpu与外设的利用率。
1.8 什么是分时系统?其主要特征是什么?适用于哪些应用领域?答:分时系统是以多道程序设计技术为基础的交互式系统,在此系统中,一台计算机与多台终端相连接,用户通过各自的终端和终端命令以交互的方式使用计算机系统。
操作系统概念精要原书第二版答案 1.什么是操作系统?其主要功能是什么?操作系统是控制和管理计算机系统内各种硬件和软件资源、有效组织多道程序运行的系统软件(或程序集合),是用户和计算机之间的接口。
A的运行轨迹为:计算50ms、打印100ms、再计算50ms、打印100ms,结束。
B的运行轨迹为:计算50ms、输入80ms、再计算100ms,结束。
试说明:(1)两道程序运行时,CPU是否空闲等待?若是,在那段时间段等待?(2)程序A、B是否有等待CPU的情况?若有,指出发生等待的时刻。
(1) CPU有空闲等待,在100ms~150ms期间。(2)程序A没有等待CPU,程序B发生等待的时间是180ms~200ms。1.设公共汽车上,司机和售票员的活动如下:司机的活动:启动车辆;正常行车;到站停车。
售票员的活动:关车门;售票;开车门。
在汽车不断的到站、停车、行驶过程中,用信号量和P、V操作实现这两个活动的同步.关系。
semaphore sl,s2;sl=0;s2=0;cobegin司机0;售票员0;coend process司机){while(true) while(true)P(s1);启动车辆;正常行车;到站停车;V(s2);}}process售票员0 while(true)关车门;V(s1);售票;P(s2); .开车门;上下乘客,}2.设有三个进程P、Q、R共享一个缓冲区,该缓冲区一次只能存放一个数据,P进程负责循环地从磁带机读入数据并放入缓冲区,Q进程负责循环地从缓冲区取出P进程放入的数据进行加工处理,并把结果放入缓冲区,R进程负贵循环地从缓冲区读出Q进程放入的数据并.在打印机.上打印。
请用信号量和P、V操作,写出能够正确执行的程序。
操作系统概念精要原书第二版答案
1一个操作系统的三个主要目的是什么?
答:三个主要目的是:为计算机用户提供一个方便、高效地在计算机硬件上执行程序的环境。
•根据需要分配计算机的单独资源来执行所需的任务。
分配过程应尽可能的公平和合理。
•作为一种控制程序,它主要具有两个功能: (1)监督用户程序的执行,防止计算机出现错误和不当使用;(2)管理I/O设备的操作和控制。
2我们强调了需要一个操作系统来充分使用计算硬件。
什么时候操作系统适合放弃这一原则并“浪费”资源?为什么这样的系统并不是真正的浪费呢?
答:单用户系统应当最大限度地满足用户对系统的使用。
GUI可能会“浪费”CPU周期,但它却更优化了用户与系统的交互。
3程序员在为实时环境编写操作系统时必须克服的主要缺点是什么?
答:主要的缺点是保持操作系统在实时系统的时间限制内。
如果系统在某个时间段内没有完成一个任务,则可能会导致整个系统的崩溃。
因此,在为实时系统编写操作系统时,作者必须确保他的调度方案不允许响应时间超过时间限制。
4 结合操作系统的各种定义,考虑操作系统是否应该包括诸如浏览器和邮件程序等应用程序。
分别论证它应该包括和不应该包括,并解释你的答案。
答:支持在操作系统中包括流行应用程序的一个论点是:如果应用程序嵌入在操作系统中,它可能更好地利用内核的特性,因此比运行在内核之外的应用程序具有性能优势。
然而,反对在操作系统中嵌入应用程序的理由通常占上风:(1)应用程序只是应用程序,而不是操作系统的一部分;(2)在内核中运行所获得的任何性能收益,都会被由此带来的安全漏洞所抵消;(3)包含应用程序会导致操作系统臃肿。
5内核模式和用户模式之间的区别如何作为一种基本形式的保护(安全)?
答:内核模式和用户模式之间的区别以以下方式提供了一种基本的保护形式。
某些指令只能在CPU处于内核模式时才能执行。
类似地,只有在程序处于内核模式时才能访问硬件设备,而只有在CPU处于内核模式时才能启用或禁用中断。
因此,CPU在用户模式下执行时的能力非常有限,从而加强了对关键资源的保护。
6下列哪一项指令应享有特权? a.设置计时器的值。
b.读时钟。
c.清除内存。
d.发出一个陷阱指令。
e.关闭中断。
f.修改设备状态表中的条目。
g.从用户模式切换到内核模式。
h.访问I/O设备。
答:以下操作需要具有特权:设置定时器值,清除内存,关闭中断,修改设备状态表中的条目,访问I/O设备。
其余的都可以在用户模式下执行。
7一些早期的计算机通过将操作系统放置在一个不能被用户作业或操作系统本身所修改的内存分区中来保护操作系统。
描述一下你认为这种方案可能会出现的两个困难。
答:操作系统所需的数据(密码、访问控制、会计信息等)必须存储在其中或通过不受保护的内存进行传递,因此未经授权的用户可以访问。