Miller's Anesthesia, 7th Edition (7)
4 – Medical Informatics
C. William Hanson

Key Points

1. A computer's hardware serves many of the same functions as those of the human nervous system, with a processor acting as the brain and buses acting as conducting pathways, as well as memory and communication devices.
2. The computer's operating system serves as the interface, or translator, between its hardware and the software programs that run on it, such as the browser, word processor, and e-mail programs.
3. The hospital information system is the network of interfaced subsystems, both hardware and software, that coexist to serve the multiple computing requirements of a hospital or health system, including services such as admissions, discharge, transfer, billing, laboratory, radiology, and others.
4. An electronic health record is a computerized record of patient care.
5. Computerized provider order entry systems are designed to minimize errors, increase patient care efficiency, and provide decision support at the point of entry.
6. Decision support systems can provide providers with best-practice protocols and up-to-date information on diseases or act to automatically intervene in patient care when appropriate.
7. The Health Insurance Portability and Accountability Act is a comprehensive piece of legislation designed in part to enhance the privacy and security of computerized patient information.
8. Providers are increasingly able to care for patients at a distance via the Internet, and telemedicine will continue to grow as the technology improves, reimbursement becomes available, and legislation evolves.

Computer Hardware

Central Processing Unit

The central processing unit (CPU) is the "brain" of a modern computer. It sits on the motherboard, which is the computer's skeleton and nervous system, and communicates with the rest of the computer and the world through a variety of "peripherals." Information travels through the computer on "buses," which are the computer's information highways or "nerves," in the form of "bits." Bits are aggregated into meaningful information in much the same way that dots and dashes are in Morse code. Bits are the building blocks for both the instructions, or programs, and the data, or files, with which the computer works.

Today's CPU is a remarkable piece of engineering, comparable in scope and scale to our great bridges and buildings, but so ubiquitous and hidden that most of us are unaware of its miniature magnificence. Chip designers essentially create what can be thought of as a city, complete with transportation, utilities, housing, and a government, every time they create a new CPU. With each new generation of chips, the "cities" grow substantially and yet remain miniaturized to the size of a fingernail. For the purposes of this text, the CPU can be treated as a black box into which flow two highways: one for data, the other for instructions. Inside that black box, the CPU ( Fig. 4-1 ) uses the instructions to determine what to do with the data—for example, how to create this sentence from my interaction with the computer's keyboard. The CPU's internal clock is like a metronome pacing the speed with which the instructions are executed.

Figure 4-1 Programs and data are stored side by side in memory in the form of single data bits—the program tells the central processing unit (CPU) what to do with the data. RAM, random-access memory.
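To make the idea of bit aggregation concrete, the following sketch (in Python; the example is ours, not the chapter's) shows how the same eight bits can be read either as a number or, under the ASCII convention, as a text character:

```python
# Eight bits, written here as a string for readability.
bits = "01000001"

# Read as a binary integer, these bits denote the number 65 ...
value = int(bits, 2)

# ... and under the ASCII convention, 65 encodes the letter "A".
print(value, chr(value))  # prints: 65 A
```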
Most people think that the clock speed of the CPU, which is measured in megahertz (millions of clock cycles per second), determines the performance speed of the unit. In reality, the performance of a CPU is a function of several factors that should be intuitive to anesthesiologists when an operating room (OR) analogy is used. Let us compare the clock speed to surgical speed, where a fast clock is comparable to a fast surgeon and vice versa. CPUs also have what are called caches, which are holding areas for data and instructions, quite comparable to preoperative holding areas. Information is moved around in the CPU on buses, the number of which can be likened to the number of ORs. In other words, it is possible to have a CPU that is limited because it has a slow or small cache, in the same way that OR turnover is limited by the lack of preoperative preparation beds or too few ORs for the desired caseload.

The speed of a processor is a function of the width of its internal buses, its clock speed, the size and speed of its internal caches, and the effectiveness with which it anticipates the future. Although this last concept may seem obscure, an OR analogy would be an algorithm that predicts a procedure's length based on previous operations of the same type by the same surgeon. Without going into detail, modern processors use techniques called speculation, prediction, and explicit parallelism to maximize the efficiency of the CPU.

True general-purpose computers are distinct from their predecessor calculating machines in that, regardless of whether they are relatively slow and small, as in dedicated devices such as smart phones, or highly streamlined and fast, as in supercomputers, they can perform the same tasks given enough time. This definition was formalized by Alan Turing, one of the fathers of computing.

Each type of CPU has its own instruction set, which is essentially its language. Families of CPUs, such as Intel's processors, tend to use one common language, albeit with several dialects, depending on the specific chip. Other CPU families use a very different language. A complex–instruction set computer (CISC) has a much more lush vocabulary than a reduced–instruction set computer (RISC), but the latter may have certain efficiencies relative to the former. It is the fact that both types of computer architecture can run exactly the same program (e.g., any windowed operating system) that makes them general-purpose computers.

Memory

Computers have a variety of different kinds of memory, ranging from very small, very fast memory in the CPU to much slower, typically much larger storage sites that may be fixed (hard disk) or removable (compact disk, flash drive).

Ideally, we would like to have an infinite amount of extremely fast memory immediately available to the CPU, just as we would like to have all of the OR patients for a given day waiting in the holding area, ready to roll into the OR as soon as the previous case is completed. Unfortunately, this would be infinitely expensive. The issue of ready data availability is particularly important now, as opposed to a decade ago, because improvements in central processing speed have outpaced gains in memory speed, such that the CPU can sit idle for extended periods while it waits for a desired chunk of data from memory.

Computer designers have come up with an approach that ensures a high likelihood that the desired data will be close by. This necessitates the storage of redundant copies of the same data in multiple locations at the same time. For example, the sentence I am currently editing in a document might be stored in very fast memory next to the CPU, whereas a version of the complete document, including an older copy of the same sentence, could be stored in slower, larger-capacity memory ( Fig. 4-2 ). At the conclusion of an editing session, the two versions are reconciled and the newer sentence is inserted into the document.

Figure 4-2 Processing of text editing using several "memory" caches in which duplicate copies of the same text may be kept nearby for ready access.

The very fast memory adjacent to the CPU is referred to as cache memory, and it comes in different sizes and speeds. Cache memory is analogous to the preoperative and postoperative holding areas in an OR in that both represent rapidly accessible buffer space. Modern computer architectures have primary and secondary caches that can either be built into the CPU chip or be situated adjacent to it on the motherboard. Cache memory is typically implemented in static random-access memory (SRAM), whereas the larger and slower "main memory" consists of dynamic random-access memory (DRAM) modules. RAM has several characteristics: it can be read or written (in contrast with read-only memory), its contents disappear when the electricity is turned off, and it is much faster than the memory on a disk drive.

To understand the impact of the mismatch between memory access times and CPU speed, consider the following. Today's fastest hard disks have access times of about 10 milliseconds (to fetch a random chunk of information). If a 200-MHz CPU had to wait 10 milliseconds between each action requiring new data from a hard disk, it would sit idle for 2 million clock cycles between each clock cycle used for actual work, as the sketch below illustrates. Furthermore, it takes 10 times longer for the computer to get data from a compact disk or digital video disk than from a hard disk.
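As a quick check of that idle-cycle figure, here is the arithmetic spelled out (a Python sketch added for illustration; it is not part of the original chapter):

```python
clock_hz = 200e6        # a 200-MHz CPU completes 200 million cycles per second
disk_access_s = 10e-3   # ~10 ms to fetch a random chunk from a fast hard disk

# Cycles that tick by while the CPU waits for a single disk access.
idle_cycles = clock_hz * disk_access_s
print(f"{idle_cycles:,.0f} idle cycles per disk access")  # 2,000,000
```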
Communications

There are many functionally independent parts of a computer that need to communicate seamlessly and on a timely basis. The keyboard and mouse have to be able to signal their actions, the monitor must be refreshed continuously, and the memory stores have to be read and written correctly. The CPU orchestrates all of this by using various system buses as communication and data pathways. Whereas some of the buses on newer computers are dedicated to specific tasks, such as communication with the video processor over a dedicated video bus, others are general-purpose buses.

Buses are analogous to highways traveling between locations in the computer ( Fig. 4-3 ). In most computers, the buses vary in width, with the main bus typically being the widest and other buses narrower and therefore of lower capacity. Data (bits) travel along a bus in parallel, like a rank of soldiers, at regular intervals determined by the clock speed of the computer. Older computers had main buses 8 bits wide, whereas newer Pentium-class computers use buses as wide as 64 bits.

Figure 4-3 Buses are like highways, where the number of available "lanes" relates to bus capacity.

Input-output buses link "peripherals" such as the mouse, keyboard, removable disk drives, and game controllers to the rest of the computer. These buses have become faster and increasingly standardized. The universal serial bus (USB) is a widely accepted current standard, as is Apple's proprietary FireWire bus.
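The highway analogy can be made quantitative: an idealized peak transfer rate is the bus width times the rate at which ranks of bits march across it. A small sketch (added here for illustration; the clock rates are arbitrary round numbers, not figures from the chapter):

```python
def peak_throughput_mb_s(width_bits: int, clock_mhz: float) -> float:
    """Idealized peak rate, assuming one full-width transfer per clock cycle."""
    bytes_per_transfer = width_bits / 8
    return bytes_per_transfer * clock_mhz  # MB/s, because MHz = 10**6 cycles/s

print(peak_throughput_mb_s(8, 33))    # old 8-bit bus at 33 MHz  ->  33.0 MB/s
print(peak_throughput_mb_s(64, 100))  # 64-bit bus at 100 MHz    -> 800.0 MB/s
```

Real buses rarely sustain this peak, but the proportionality to width and clock rate is the point of the analogy.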
These buses allow "on-the-fly" attachment and removal of peripherals via a standardized plug, and a user can expect that a device plugged into one of these ports will identify itself to the operating system and function without the need for specific configuration. This is a distinct improvement over the previous paradigm, in which a user typically needed to open the housing of the computer to attach a new peripheral and then configure a specific software driver to permit communication between the device and the computer.

In addition to their local computing function, modern personal computers have become our conduits to networks and must therefore act as terminal points on the Internet. As with houses or phones, each computer must have an individual identifier (address, phone number) to receive communications uniquely intended for it. Examples of these kinds of specific addresses are the IP (Internet protocol) address and the MAC (media access control) address. The IP address is temporarily or permanently assigned to a device on the Internet (typically a computer) to uniquely identify it among all of the other devices on the Internet. The MAC address permanently identifies the computer's network interface card and is used, for example, by the servers that assign IP addresses.

The computer must also have the right kind of hardware to receive and interpret Internet-based communications. Wired and wireless network interface cards are built into all new computers and have largely replaced modems as the hardware typically used for network communications. Whereas a modem communicates over existing phone lines also used for voice communication, network cards communicate over channels specifically intended for computer-to-computer communications and are almost invariably faster than modems.

Although we commonly think of the Internet as being one big network, it is instructive to understand a little bit about the history of computer networking. In the beginning there were office networks and the progenitor of the Internet. The first office network was designed at the Palo Alto Research Center, the Xerox research laboratory where a number of major computer innovations were developed. That office network was called Ethernet and was designed as part of the "office of the future," in which word-processing devices and printers were cabled together. Separately, ARPAnet was the creation of the Defense Advanced Research Projects Agency and linked mainframe computers at major universities. Over time, the two networks grew toward one another almost organically, and today we have what seems to be a seamless network that links computers all over—and above—the world.

Networking technology has evolved almost as rapidly as computer technology. As with buses in a computer, the networks that serve the world can be likened to highways. Backbone networks ( Fig. 4-4 ) are strung across the world and have tremendous capacity, like interstate highways. Lower-capacity systems feed into the backbones, and traffic is directed by router computers. To facilitate traffic management, messages are cut up before transmission into discrete packets, each of which travels autonomously to its destination, where the message is reassembled (a simple sketch of the idea follows at the end of this section). Internet packets may travel over hard-wired, optical, or wireless networks en route to their destination.

Figure 4-4 Lower-speed networks are attached to high-speed "core" networks that span the globe. LAN, local area network.
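A minimal sketch of the packetization and reassembly just described (Python, added for illustration; real Internet protocols such as TCP/IP carry far richer headers than the bare sequence number used here):

```python
import random

def packetize(message: bytes, size: int = 8):
    """Split a message into (sequence_number, chunk) packets of <= size bytes."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

packets = packetize(b"Lower-capacity systems feed into the backbones.")
random.shuffle(packets)  # packets may take different routes and arrive out of order

# The receiver sorts by sequence number to reassemble the original message.
reassembled = b"".join(chunk for _, chunk in sorted(packets))
print(reassembled.decode())
```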
Computer Software

Operating System and Programming

The operating system (OS) is the government of the computer. As with a municipal government, the OS is responsible for coordinating the actions of the disparate components of a computer, including its hardware and various software programs, to ensure that the computer runs smoothly. Specifically, it controls the CPU, memory, interface devices, and all the programs running on the machine at any given time. The OS also provides a consistent set of rules and regulations to which new programs must adhere in order to participate.

Although most of us think of Apple and Windows as synonymous with OSs, other OSs deserve mention. Linux is an open-source (meaning nonproprietary) OS for personal computers that is distributed by several different vendors but continuously maintained by a huge community of passionate programmer devotees who contribute updates and new programs. In addition, every cell phone and smart device has its own OS that performs exactly the same role as a personal computer's OS.

OSs can be categorized into four broad categories ( Fig. 4-5 ). A real-time OS is typically used to run a specific piece of machinery, such as a scientific instrument, and is dedicated solely to that task. A single-user, single-task OS is like that found on a cell phone, where a single user does one job at a time, such as dialing, browsing, or e-mail. Most of today's laptop and desktop computers are equipped with single-user, multitasking OSs, whereby a single user can run several "jobs" simultaneously, such as word processing, e-mail, and a browser. Finally, multi-user, multitasking OSs are usually found on mainframe computers and run many jobs for many users concurrently.

Figure 4-5 Several operating system configurations.

All OSs have a similar core set of jobs: CPU management, memory management, storage management, device management, application interfacing, and user interfacing. Without going into detail beyond the scope of this chapter, the OS breaks a given software job down into manageable chunks and orders them for sequential assignment to the CPU. The OS also coordinates the flow of data among the various internal memory stores, determines where those data will be stored for the long term, and keeps track of them from session to session. The OS provides a consistent interface for applications so that a third-party program bought at a store will work properly on a given OS. Finally, and of most importance for many of us, the OS manages its interface to you, the user; today that typically takes the form of a graphic user interface (GUI).

E-mail

E-mail communication over the Internet antedated the browser-based World Wide Web by decades. In fact, the earliest e-mail was designed for communication among multiple users in a "time-sharing," multi-user environment on a mainframe computer. E-mail was used for informal and academic communications among the largely university-based user community. Without going into great detail, an e-mail communication protocol was designed so that each message included information about the sender, the addressee, and the body of the message. The protocol is called the Simple Mail Transfer Protocol (SMTP), and message transmission proceeds as follows. The sender composes a message in a software-based messaging program (such as Outlook or Gmail), applies the recipient's address, and dispatches the message. The message travels through a series of mailboxes, much as a regular letter does, and eventually arrives at the addressee's mailbox, where it sits awaiting "pickup."
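For illustration, the same handoff can be scripted with Python's standard library (a sketch; the server name, addresses, and credentials are placeholders, and a real clinical deployment would use an institution's authenticated, encrypted mail gateway):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "provider@example-clinic.org"   # placeholder addresses
msg["To"] = "patient@example.com"
msg["Subject"] = "Follow-up on today's visit"
msg.set_content("Your preoperative instructions are posted to the portal.")

# SMTP relays the message toward the recipient's mailbox, where it
# waits for "pickup" by the recipient's mail client.
with smtplib.SMTP("smtp.example-clinic.org", 587) as server:
    server.starttls()                          # encrypt the connection
    server.login("provider", "app-password")   # placeholder credentials
    server.send_message(msg)
```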
Although e-mail has had dramatic and largely positive implications for the connectedness of organizations and people, it has also created hitherto unimagined problems, including spam, privacy issues, and the need for new forms of etiquette.

The term spam is said to have come from a Monty Python skit. Spam, which is essentially bulk e-mail, was never envisioned by the creators of SMTP and is now so ubiquitous a problem that most e-mail crossing the Internet is spam. Spam is a generic problem with e-mail communications, but the issues of privacy and etiquette are of much greater relevance for medically oriented e-mail.

The American Medical Informatics Association has taken a lead role in defining the issues associated with e-mail in the medical setting. The organization defined patient-provider e-mail as "computer based communication between clinicians and patients within a contractual relationship in which the health care provider has taken on an explicit measure of responsibility for the client's care."[1] A parallel set of issues relates to medically oriented communications between providers. [2] [3] [4] [5] Another category of medically oriented communication is that in which a provider offers medical advice in the absence of a "contractual relationship." An egregious example of the latter is the prescription of erectile dysfunction remedies by physicians who review a Web-based form submitted by the "patient" and then prescribe a treatment for a fee.

In theory, e-mail is a perfect way to communicate with patients. [6] [7] [8] [9] Because of its asynchronous nature, it allows two parties who may not be available at the same time to communicate efficiently ( Fig. 4-6 ), and it represents a middle road between two other types of asynchronous communication: voice mail and traditional mail. E-mail can also be tailored to brief exchanges, more structured communications, and information broadcasts (such as announcements). A patient could send interval updates (blood pressure, blood sugar) to the physician; alternatively, the physician could follow up on an office visit by providing educational material about a newly diagnosed condition or planned procedure.

Figure 4-6 E-mail is an effective form of communication between a patient and a physician because it does not require both parties to be present at the same time.

Even though e-mail has many advantages in medicine, a variety of risks are associated with its use, which has slowed adoption.[10] Some of the problems are generic to any e-mail exchange. Specifically, e-mail is a more informal and often unfiltered form of communication than a letter; it often has the immediacy of a conversation but lacks its visual and verbal cues. Emoticons (such as the use of ":)" to indicate that a comment was sent with a "smile") evolved as a remedy for this problem.

E-mail is also permanent in the sense that copies of it remain in mailbox backups even after deletion from local files ( Fig. 4-7 ). Every e-mail should therefore be thought of as discoverable from both a liability and a recoverability standpoint.
Before dispatch, e-mail should be scrutinized for information or content that might be regretted at a later date.

Figure 4-7 E-mail leaves copies of itself as it travels across the Internet.

E-mail is also vulnerable to inadvertent or malicious breaches of privacy or disclosure through improper handling of data at any point along the "chain of custody" between the sender and the recipient. Alternatively, a hacker could potentially acquire sensitive medical information from unsecured e-mail or possibly even alter medical advice and test results in an e-mail from physician to patient.

The Health Insurance Portability and Accountability Act (HIPAA) mandates secure electronic communication in correspondence regarding patient care. Three prerequisites for secure communication are authentication (the message sender and recipient are who they say they are), encryption (the message arrives unread and untampered with), and time/date stamping (the message was sent at a verifiable time and date), although these techniques are not yet widely deployed in the medical community.

It is beyond the scope of this chapter to go into great detail about the methods used to authenticate, encrypt, and time-stamp e-mail. However, it is possible to ensure each of these elements by using mathematically linked pairs of numbers (keys), in which an individual's public key is published and freely available through a central registry (like a phone book), whereas a linked private key is kept secret ( Fig. 4-8 ). Public key encryption combined with traditional encryption is used to transmit messages securely across public networks, ensure that messages can be read only by a specific individual, and digitally sign the message.

Figure 4-8 Public/private key encryption in which Joe sends a message intended only for Bob by using Bob's public key—the message remains encrypted until Bob decrypts it with his private key.
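A sketch of that combination of public key and traditional (symmetric) encryption, written in Python with the widely available third-party cryptography package (the message text is invented, and production e-mail security would rely on vetted standards such as S/MIME rather than hand-rolled code):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The recipient's key pair: the public half is published, the private half kept secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A fast symmetric "session" key encrypts the message body itself ...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"Your test results are normal.")

# ... and the session key travels wrapped under the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Only the holder of the private key can unwrap the session key and read the message.
unwrapped = private_key.decrypt(wrapped_key, oaep)
assert Fernet(unwrapped).decrypt(ciphertext) == b"Your test results are normal."
```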
Although the use of e-mail for medical patient-provider and provider-provider communications is growing, it is not yet universally adopted, for several reasons: physician distrust of the medium, unfamiliarity with software, lack of standards, and lack of clear methods for reimbursement for time spent in e-mail communications. Nevertheless, several professional societies have published consensus recommendations about the management of e-mail in medical practice. Common consensus-based elements are enumerated in Box 4-1.

Box 4-1 Suggested Rules for E-mail Correspondence in a Medical Setting

All patient-provider e-mail should be encrypted.
Correspondents should be authenticated (guaranteeing that you are who you say you are).
Patient confidentiality should be protected.
Unauthorized access to e-mail (electronic or paper) should be prevented.
The patient should provide informed consent regarding the scope and nature of electronic communications.
Electronic communications should (ideally) occur in the context of a preexisting physician-patient relationship.
On-line communications are to be considered a part of the patient's medical record and should be included with it.

Browser

Many people think of the Internet and the World Wide Web as one and the same. The Internet is the worldwide network, whereas the Web is one of its applications, characterized by the browsers with which its users interact. The browser was invented by Tim Berners-Lee at the European Organization for Nuclear Research, commonly known as CERN, in 1990. Marc Andreessen wrote the Mosaic browser and subsequently the Netscape browser, which, like all subsequent browsers, has a GUI and uses a specific "language" called hypertext markup language (HTML). Microsoft eventually developed its own version of the browser, Internet Explorer, after recognizing the inevitability of the Web.

The browser is a computer program, just like a word-processing or e-mail program, with a GUI. It can be thought of as a radio or television insofar as it serves as an interface to media that do not originate within it. The address of a webpage is analogous to the channel or frequency of a television or radio, and the browser "tunes in" to that address. In actuality, the local browser on your machine communicates with a server somewhere on the Internet (at the address specified in the address line) and receives pages described in HTML. The webpage displayed on your local browser was first constructed on the server and then sent to you.

The original HTML was extremely spare and permitted the construction of only very simple webpages. A variety of new "languages" and protocols, such as Java, JavaScript, ActiveX, and Flash, have subsequently come into existence to enhance HTML. New browsers support interactivity, security, display of audio and video content, and other functions. Even though the scope of topics that could be covered in discussing browser communications far exceeds that of this chapter, certain issues deserve mention.

"Cookies" is the term for short lines of text that act like laundry tickets and are used by an Internet server (such as a Google search engine server) to "remember" things about the client computers with which it interacts. Cookies allow the server to keep track, for example, of the items that you have put in your virtual shopping cart as you shop ( Fig. 4-9 ). Although cookies are not inherently risky, there are other risks to the use of a browser.

Figure 4-9 Cookies are used by a website, for example, to keep track of the items that a user has put in the "shopping cart."
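The "laundry ticket" is literally a short line of text in the server's response. A sketch using Python's standard http.cookies module (the cookie name and value are invented for illustration):

```python
from http.cookies import SimpleCookie

# The server mints a ticket and sends it with its response ...
ticket = SimpleCookie()
ticket["cart_id"] = "abc123"          # invented identifier
ticket["cart_id"]["path"] = "/"
print(ticket.output())                # Set-Cookie: cart_id=abc123; Path=/

# ... and the browser returns "Cookie: cart_id=abc123" with later requests,
# letting the server recognize the same shopper and look up the cart.
returned = SimpleCookie("cart_id=abc123")
print(returned["cart_id"].value)      # abc123
```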
Like a television, the browser acts as a window on the Internet, and for a long time it was safe to think of that window as being made of one-way glass. Unfortunately, many of the new innovations that allow us to function interactively with websites also have built-in flaws that permit malicious programmers to gain access to your computer. The best way to protect a computer is the timely application of all updates and patches issued by software manufacturers and the use of antivirus software with up-to-date definitions.

Computers and Computing in Medicine

Hospital Information Systems

Modern hospital information systems invariably fall somewhere on a spectrum between a monolithic, single comprehensive system design, wherein a single vendor provides all of the components of the system, and a "best-in-breed" model consisting of multiple vendor-specific systems interacting through interfaces or, more typically, an interface "engine." [11] [12] [13] [14] The monolithic system has the advantage of smooth interoperability, but some of its component elements may be substantially inferior to those offered by best-in-breed vendors.

Component elements of a hospital information system include administrative, clinical, documentation, billing, and business systems. [15] [16] [17] Medical information technology is increasingly subject to governmental regulation, security concerns, and standards. Standards are essential for interoperability among systems and for ensuring that systems use uniform terminology.[18]

Health Level 7 (HL7) is an accepted set of rules and protocols for communication among medical systems and devices (an illustrative message appears at the end of this section). The Clinical Context Management Specification (also known as CCOW) is a method that enables end users to seamlessly view results from disparate "back-end" clinical systems as though they were integrated. Common medical terminologies or vocabularies include the Systematized Nomenclature of Medicine (SNOMED) and the International Classification of Diseases (the ICD family of classifications).[19]

Modern, complicated medical information systems often weave a host of disparate systems at geographically dispersed locations into an extended "intranet." A core hospital, for example, may share an intranet with a geographically remote outpatient practice, or several hospitals in the same health system may coexist within the same intranet. Some of the elements may be physically connected ( Fig. 4-10 ) along a network "backbone," whereas others may use virtual private network (VPN) connections that allow the user to appear to be part of the network while at a remote location.[16]

Figure 4-10 Modern health care information systems consist of elements attached to a backbone. ADT, admission, discharge, and transfer.
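To give a feel for the HL7 standard mentioned earlier, the following sketch assembles a minimal, hypothetical HL7 version 2 admission (ADT^A01) message in Python; every system name, identifier, and location in it is invented for illustration:

```python
# Segments are separated by carriage returns; fields within a segment by "|".
hl7_adt = "\r".join([
    # Message header: sending/receiving systems, timestamp, type, version.
    "MSH|^~\\&|ADT_SYS|GENERAL_HOSP|LAB_SYS|GENERAL_HOSP|20240101120000||"
    "ADT^A01|MSG00001|P|2.3",
    # Patient identification: medical record number, name, birth date, sex.
    "PID|1||123456^^^GENERAL_HOSP^MR||DOE^JANE||19800101|F",
    # Patient visit: inpatient, assigned location (unit^room^bed).
    "PV1|1|I|ICU^101^A",
])

for segment in hl7_adt.split("\r"):
    print(segment)
```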
Surgical Nursing, 7th Edition: Nursing Care of the Anesthetized Patient, Pre-class Introduction Answers

I. Type A1 Questions (total: 20 questions; 40.00 points)

1. The most dangerous accident or complication in a patient under general anesthesia before awakening is (2.00 points)
A. Asphyxiation from vomitus √
B. Hypothermia
C. Falling from the bed
D. Dislodgement of a drainage tube
E. Accidental injury

2. To prevent vomiting during general anesthesia and postoperative abdominal distention, the required preoperative fasting times for food and for water are, respectively (2.00 points)
A. Food 4 h, water 2 h
B. Food 6 h, water 4 h
C. Food 8 h, water 6 h
D. Food 10 h, water 4 h
E. Food 12 h, water 4-6 h √

3. The purpose of surgical anesthesia is (2.00 points)
A. To maintain airway patency
B. To maintain circulatory stability
C. To prevent postoperative infection
D. To relax the muscles, eliminate pain, and ensure safety during the operation
E. To prevent postoperative complications

4. Regarding the basic requirements of general anesthesia, the INCORRECT statement is (2.00 points)
B. Complete suppression of the stress response √
C. Complete analgesia
D. Muscle relaxation
E. Relatively stable physiologic indices such as respiration and circulation

5. The preoperative medication that can prevent local anesthetic toxicity is (2.00 points)
A. Chlorpromazine
B. Promethazine
C. Atropine
D. Pethidine
E. Phenobarbital sodium √

6. Compared with intravenous anesthesia, an advantage of inhalation anesthesia is (2.00 points)
A. Rapid induction
B. Convenient administration
C. Safe, nonexplosive drugs
D. No airway irritation
E. Easy adjustment of anesthetic depth √
Explanation: Inhalation anesthesia is a method in which a volatile or gaseous anesthetic is inhaled into the lungs via the respiratory tract, absorbed into the circulation through the alveolar capillaries, and carried to the central nervous system to produce a general anesthetic effect. The advantages of inhalation anesthesia are that it can produce a safe and effective state of unconsciousness along with muscle relaxation and loss of sensation. Because the anesthetic enters and leaves the body through pulmonary ventilation, the depth of anesthesia is easier to adjust than with other methods of anesthesia.
7. The surgical site for which epidural anesthesia CANNOT be used is (2.00 points)
A. The head √
B. The upper limb
C. The abdomen
Explanation: Epidural anesthesia is a method in which a local anesthetic is injected into the epidural space to block the spinal nerve roots, producing temporary anesthesia in the regions they innervate. It is suitable for any operation except those on the head.

8. The most serious complication of epidural anesthesia is (2.00 points)
A. Decreased blood pressure
B. Vasodilation
C. Urinary retention
D. Slowed respiration
E. Total spinal anesthesia √
Explanation: Blockade of all the spinal nerves is called total spinal anesthesia; it is the most dangerous complication of epidural anesthesia.
Clinical Anesthesia Procedures of the Massachusetts General Hospital, 7th Edition (Chinese translation)
[Peter F. Dunn (USA), editor; Yu Yonghao, chief translator; reviewed and annotated by Li Wenshuo and Deng Naifeng]

This handbook systematically introduces basic clinical anesthesia skills, supplemented with material from textbooks and journals and with frontier knowledge in anesthesiology and critical care medicine. In addition, every chapter includes recommended reading, and the end of the book adds information on the functions and usage of common drugs, with the aim of enriching clinical teaching and encouraging further in-depth study. The handbook is a practical reference for residents in clinical anesthesia and critical care, nurse anesthetists, medical students, and residents in related specialties. As an effective supplement to textbooks, it has been updated every few years since its first publication and has become a standard "pocket book" for clinical anesthesiologists in the United States and other countries. Its practicality and portability have made it popular among young anesthesiologists. Detailed recommended reading material is appended to each chapter for the reader's convenience.
Yu Yonghao
Department of Anesthesiology, Tianjin Medical University General Hospital

Preface

The seventh edition of Clinical Anesthesia Procedures of the Massachusetts General Hospital was written by the physicians of the Department of Anesthesia and Critical Care of the Massachusetts General Hospital, together with their colleagues and associated medical staff. The handbook has always emphasized basic clinical skills, chiefly covering the safe administration of anesthesia, perioperative care, and pain management. The book reflects current clinical practice at the Massachusetts General Hospital and represents the basic training model for residents in anesthesia, critical care, pain medicine, and cardiovascular anesthesia. The handbook supplements material from textbooks and journals as well as frontier knowledge in anesthesiology and critical care medicine. Written in an accessible yet rigorous style, it is suitable for senior anesthesiologists, anesthesia residents, nurse anesthetists, medical students, medical and surgical residents, respiratory therapists, and other personnel involved in perioperative patient care. The aim of the handbook is to enrich clinical teaching and to encourage further in-depth study; accordingly, each chapter includes recommended reading material. As with previous editions, every chapter has been thoroughly reviewed and updated, with appropriate retention of content from earlier editions. The chapters are closely linked, giving readers an accessible platform for understanding the related material.
7 – Patient Simulation
Marcus Rall, David M. Gaba, Peter Dieckmann, Christoph Eich

Key Points

1. Simulators and the use of simulation have become an integral part of medical education, training, and research. The pace of developments and applications is very fast, and the results are promising.
2. Different types of simulators can be distinguished: computer-based or screen-based microsimulators versus mannequin-based simulators. The latter can be divided into script-based and model-based simulators.
3. The development of mobile and less expensive simulator models allows for substantial expansion of simulator training to areas where this training could not be applied or afforded previously. The biggest obstacles to providing simulation training are not the simulator hardware but are (1) obtaining access to the learner population for the requisite time and (2) providing appropriately trained and skilled instructors to prepare, conduct, and evaluate the simulation sessions.
4. Realistic simulations are a useful method to show mechanisms of error development (human factors) and to provide their countermeasures. The anesthesia crisis resource management (ACRM) course model with its ACRM key points (see Chapter 6 on Crisis Resource Management) is the de facto world standard for human factor–based simulator training. Curricula should use scenarios that are tailored to the stated teaching goals, rather than focusing solely on achieving maximum "realism."
5. Simulator training is being adopted by many other fields outside anesthesia (e.g., emergency medicine, neonatal care, intensive care, medical and nursing school).
6. Simulators have proved to be very valuable in research for studying human behavior and failure modes under conditions of critical incidents, in the development of new treatment concepts (telemedicine), and in support of the biomedical industry (e.g., device beta-testing).
7. Simulators can be used as effective research tools for studying methods of performance assessment.
8. Assessment of nontechnical skills (or behavioral markers) has evolved considerably and can be accomplished with a reliability that likely matches that of many other subjective judgments in patient care. Systems for rating nontechnical skills have been introduced and tested in anesthesia; one in particular (Anaesthetists' Non-Technical Skills [ANTS]) has been studied extensively and has been modified for other fields.
9. The most important part of simulator training that goes beyond specific technical skills is the self-reflective (often video-assisted) debriefing session after the scenario. The debriefing is influenced most strongly by the quality of the instructor, not the fidelity of the simulator.
10. Simulators are just the tools for an effective learning experience. The education and training, commitment, and overall ability of the instructors are of utmost importance.

How can clinicians experience the difficulties of patient care without putting patients at undue risk? How can we assess the abilities of clinicians as individuals and teams when each patient is unique? These are questions that have challenged medicine for years. In recent years, these and related questions have begun to be answered in health care by the application of approaches new to medicine but borrowed from years of successful service in other industries facing similar problems. These approaches focus on simulation, a technique well known in the military, aviation, space flight, and nuclear power industries.
Simulation refers to the artificial replication of sufficient elements of a real-world domain to achieve a stated goal. The goals can include understanding the domain better, training personnel to deal with the domain, or testing the capacity of personnel to work in the domain. The fidelity of a simulation refers to how closely it replicates the domain and is determined by the number of elements that are replicated and the discrepancy between each element and the real world. The fidelity required depends on the stated goals. Some goals can be achieved with minimal fidelity, whereas others require very high fidelity.

Simulation has probably been a part of human activity since prehistoric times. Rehearsal for hunting and warfare was most likely an occasion for simulating the behavior of prey or enemy warriors. Technologic simulation probably dates back to the dawn of technology itself. Good and Gravenstein[1] pointed to the medieval quintain as a technologic device that crudely simulated the behavior of an opponent during sword fighting. If the swordsman did not duck at the appropriate time after striking a blow, he would be hit by a component of the quintain. In modern times, preparation for warfare has been an equally powerful spur to the development of simulation technologies, especially for aviation, shipping, and the operation of armored vehicles. These technologies have been adopted by their civilian counterparts, but they have attained their most extensive use in commercial aviation.

Simulation in Aviation

Although some aircraft simulators were built between 1910 and 1927, none of them could provide the proper feel of the aircraft because they could not dynamically reproduce its behavior. In 1930, Link filed a patent for a pneumatically driven aircraft simulator. The Link Trainer was a standard for flight training before World War II, but the war accelerated its use and the further development of flight simulators. In the 1950s, electronic controls replaced pneumatic ones, implemented with analog, digital, and hybrid computers. The aircraft simulator achieved its modern form in the late 1960s, but it has been continuously refined. Aviation simulators are now so realistic that pilots with experience flying one aircraft are routinely certified to fly totally new or different aircraft entirely on the basis of simulator training; their first flight of the actual aircraft may take place with passengers on board. Similar stories of the development of simulators can be told for numerous other industries.

Uses of Simulators

Although simulators originally were used to provide basic instruction on the operation of aircraft controls, the variety of uses of simulators in general has expanded greatly. Table 7-1 lists possible uses of simulators in all types of complex work situations. Simulation is a powerful generic tool for dealing with human performance issues (e.g., training, testing, and research) (see Chapter 6 ), for investigating human-machine interactions, and for the design and validation of equipment. As described later in this chapter, each of these uses is potentially relevant to anesthesiology. A few books are devoted solely to the topic of simulation and its use in and outside of anesthesia. [2] [3] [4]
Table 7-1 -- Uses of Simulators in Complex Work Environments

Team training, as human factor or CRM training
Training in dynamic plant control
Training in diagnostic skills
Dynamic mockup for design evaluation
Test bed for checking operating instructions
Environment in which task analysis can be conducted (e.g., on diagnostic strategies)
Test bed for new applications (e.g., telemedicine tools such as the "Guardian-Angel-System")
Source of data on human errors relevant to risk and reliability assessment
Vehicle for (compulsory) testing/assessment and recertification of operators

Adapted from Singleton WT: The Mind at Work. Cambridge, Cambridge University Press, 1989.
CRM, crisis resource management.

Twelve Dimensions of Simulation

Current and future applications of simulation can be categorized by 12 dimensions, each of which represents a different attribute of simulation ( Fig. 7-1 ).[5] Some dimensions have a clear gradient and direction, whereas others have only categorical differences. The total number of unique combinations across all the dimensions is very large (on the order of 4^12 to 5^12, that is, roughly 17 million to 244 million). Some combinations overlap strongly with others, and some are inappropriate or irrelevant, so the actual number of meaningful combinations is much lower. Nonetheless, although the demonstrated applications of simulation in health care have been quite diverse, the space of possible applications (a large number, although quite a bit smaller than millions) has by no means been fully examined.

Figure 7-1 The 12 dimensions of simulation applications. Any particular application can be represented as a point or range on each spectrum (shown by diamonds). This figure illustrates a specific application: multidisciplinary CRM-oriented decision making and teamwork training for adult intensive care unit personnel. CRM, crisis resource management; ED, emergency department; ICU, intensive care unit; OR, operating room. *These terms are used according to Miller's pyramid of learning.

Dimension 1: Purpose and Aims of the Simulation Activity

The most obvious application of simulation is to improve the education and training of clinicians, but other purposes also are important. As used in this chapter, education emphasizes conceptual knowledge, basic skills, and an introduction to work practices. Training emphasizes the actual tasks and work to be performed. Simulation can be used to assess the performance and competency of individual clinicians and teams, for low-stakes or formative testing and (to a lesser degree as yet) for high-stakes certification testing. [6] [7] Simulation rehearsals are now being explored as adjuncts to actual clinical practice; for example, surgeons or an entire operative team can rehearse an unusually complex operation in advance using a simulation of the specific patient. [8] [9] [10] Simulators can be powerful tools for research and evaluation concerning organizational practices (patient care protocols) and for the investigation of human factors (e.g., of performance-shaping factors such as fatigue,[11] or of the user interface and operation of medical equipment in high-hazard clinical settings[12]). Simulation-based empirical tests of the usability of clinical equipment already have been used in designing equipment that is currently for sale; ultimately, such practices may be required by regulatory agencies before approval of new devices.

Simulation can be a "bottom up" tool for changing the culture of health care concerning patient safety.
First, it allows hands-on training of junior and senior clinicians in practices that enact the desired "culture of safety."[13] Simulation also can be a rallying point for culture change and patient safety, bringing together experienced clinicians from various disciplines and domains (who may be "captured" because the simulations are clinically challenging) along with health care administrators, risk managers, and experts on human factors, organizational behavior, or institutional change.

Dimension 2: Unit of Participation in the Simulation

Many simulation applications are targeted at individuals. These may be especially useful for teaching knowledge and basic skills or for practice on specific psychomotor tasks. As in other high-hazard industries, individual skill is a fundamental building block, but considerable emphasis is applied at higher organizational levels, in various forms of teamwork and interpersonal relations, often summarized under the rubric of crisis resource management (CRM), adapted from aviation cockpit resource management (for more on human factors and CRM concepts, see Chapter 6 ). [14] [18] CRM is based on empirical findings that individual performance is not sufficient to achieve optimal safety.[15] Team training may be addressed first to crews (also known as single-discipline teams), consisting of multiple individuals from a single discipline, and then to teams (or multidisciplinary teams).[16] There are advantages and disadvantages to addressing teamwork in the single-discipline approach that "trains crews to work in teams" versus the "combined team training" of multiple disciplines together.[21] For maximal benefit, these approaches are used in a complementary fashion.

Teams exist in actual "work units" in an organization (e.g., a specific intensive care unit [ICU]), each of which is its own target for training. There also is growing interest and experience in applying simulation to nonclinical personnel and work units in health care organizations (e.g., to managers or executives)[22] and to organizations as a whole (e.g., entire hospitals or networks).

Dimension 3: Experience Level of Simulation Participants

Simulation can be applied along the entire continuum of education of clinical personnel and the public at large. It can be used with early learners, such as schoolchildren, or with lay adults to facilitate bioscience instruction, to interest individuals in biomedical careers, or to explain health care issues and practices. The major role of simulation has been, and will continue to be, to educate, train, and provide rehearsal for individuals actually involved in the delivery of health care. Simulation is relevant from the earliest level of vocational or professional education (students), during apprenticeship training (interns and residents), and increasingly for experienced personnel undergoing periodic refresher training. Simulation can be applied regularly to practicing clinicians (as individuals, teams, or organizations) regardless of their seniority, [17] [18] providing an integrated accumulation of experiences that should have a long-term synergism.

Dimension 4: Health Care Domain in Which the Simulation Is Applied

Simulation techniques can be applied across nearly all health care domains.
Although much of the attention on simulation has focused on technical and procedural skills applicable in surgery, [11] [20] [21] [22] obstetrics, [23] [24] invasive cardiology, [25] [26] and other related fields, another bastion of simulation has been recreating whole patients for dynamic domains involving high hazard and invasive intervention, such as anesthesia, [27] [28] critical care, [29] [30] and emergency medicine. [31] [32] [33] [34] Immersive techniques can be used in imaging-intensive domains, such as radiology and pathology, and interactive simulations are relevant on the interventional sides of such arenas.[35] In many domains, simulation techniques have been useful for addressing nontechnical skills and professionalism issues, such as communicating with patients and coworkers, or for addressing issues such as ethics or end-of-life care.

Dimension 5: Health Care Disciplines of Personnel Participating in the Simulation

Simulation is applicable to all disciplines of health care, not only to physicians. In anesthesiology, simulation has been applied to anesthesiologists, Certified Registered Nurse Anesthetists, and anesthesia technicians. Simulation is not limited to clinical personnel. It also may be directed at managers, executives, hospital trustees, regulators, and legislators. For these groups, simulation can convey the complexities of clinical work, and it can be used to exercise and probe the organizational practices of clinical institutions at multiple levels.

Dimension 6: Type of Knowledge, Skill, Attitudes, or Behavior Addressed in Simulation

Simulations can be used to help learners acquire new knowledge and to understand conceptual relations and dynamics better. Today, physiologic simulations allow students to watch cardiovascular and respiratory functions unfold over time and respond to interventions, in essence making textbooks, diagrams, and graphs "come alive." The next step on the spectrum is acquisition of skills to accompany knowledge. Some skills follow immediately from conceptual knowledge (e.g., cardiac auscultation), whereas others involve intricate and complex psychomotor activities (e.g., catheter placement or basic surgical skills). Isolated skills must then be assembled into a new layer of clinical practices. An understanding of the concepts of general surgery combined only with the basic techniques of dissecting, suturing, and manipulating instruments does not create a capable laparoscopic surgeon. Basic skills must be integrated into actual clinical techniques, a process for which simulation may have considerable power, especially because it can readily provide experience with even uncommon anatomic or clinical presentations. In the current health care system, for most invasive procedures, novices at a task typically first perform the task on a real patient, albeit under some degree of supervision. They climb the learning curve working on patients, with varying levels of guidance. Simulation offers the possibility of having novices practice extensively before they begin to work on real patients as supervised "apprentices."

In this way and others, simulation is applicable to clinicians throughout their careers to support lifelong learning. It can be used to refresh skills for procedures that are not performed often. Knowledge, skills, and practices honed as individuals must be linked into effective teamwork in diverse clinical teams, which must operate safely in work units and larger organizations.
[36] [37] [38] Perpetual rehearsal of responses to challenging events is needed because the team or organization must be practiced in handling them as a coherent unit.

Dimension 7: Age of the Patient Being Simulated

Simulation is applicable to nearly every type and age of patient, literally from "cradle to grave." Simulation may be particularly useful for pediatric patients and clinical activities because neonates and infants have smaller physiologic reserves than most adults. [40] [41] Fully interactive neonatal and pediatric patient simulators are now available. Simulation also addresses issues of the elderly and end-of-life issues for every age.

Dimension 8: Technology Applicable or Required for Simulations

To accomplish these goals, various technologies (including no technology) are relevant for simulation. Verbal simulations ("what if" discussions), paper-and-pencil exercises, and experiences with standardized patient actors [41] [42] [43] require no technology but can effectively evoke or recreate challenging clinical situations. Similarly, very low technology, even pieces of fruit or simple dolls, can be used for training in some manual tasks. Certain aspects of complex tasks and experiences can be recreated successfully with little technology. Some education and training on teamwork can be accomplished with role playing, analysis of videos, or drills with simple mannequins.[44]

Ultimately, learning and practicing complex manual skills (e.g., surgery, cardiac catheterization) or practicing the dynamic management of life-threatening clinical situations that include risky or noxious interventions (e.g., intubation or defibrillation) can be fully accomplished only by using either animals (which, for reasons of cost and issues of animal rights, is becoming very difficult) or a technologic means of recreating the patient and the clinical environment. The different types of simulation technologies relevant to anesthesiology are discussed later in this chapter.

Dimension 9: Site of Simulation Participation

Some types of simulation, those that use videos, computer programs, or the Web, can be conducted in the privacy of the learner's home or office using his or her own equipment. More advanced screen-based simulators might need the more powerful computer facilities available in a medical library or learning center. Part-task trainers and virtual reality simulators are usually fielded in a dedicated skills laboratory. Mannequin-based simulation also can be used in a skills laboratory, although the more complex recreations of actual clinical tasks require either a dedicated patient simulation center with fully equipped replicas of clinical spaces or the ability to bring the simulator into an actual work setting (in situ simulation). There are advantages and disadvantages to doing clinical simulations in situ versus in a dedicated center. Using the actual site allows training of the entire unit with all its personnel, procedures, and equipment; however, availability of actual clinical sites is at best limited, and the simulation activity may distract from real patient care work. The dedicated simulation center is a more controlled and more available environment, allowing more comprehensive recording of sessions and imposing no distraction on real activities. For large-scale simulations (e.g., disaster drills), the entire organization becomes the site of training.

Video conferencing and advanced networking may allow even advanced types of simulation to be conducted remotely (see dimension 10 ).
The collaborative use of virtual reality surgical simulators in real time already has been demonstrated, even with locations separated by thousands of miles (see later for more on the site of simulation).

Dimension 10: Extent of Direct Participation in Simulation

Most simulations, even screen-based simulators or part-task trainers, were initially envisioned as highly interactive activities with significant direct "on-site" hands-on participation. Not all learning requires direct participation, however. Some learning can occur merely by viewing a simulation involving others, as one can readily imagine being in the shoes of the participants. A further step is to involve the remote viewers either in the simulation itself or in debriefings about what transpired. Several centers have been using videoconferencing to conduct simulation-based exercises, including morbidity and mortality conferences.[45] Because the simulator can be paused, restarted, or otherwise controlled, the remote audience can readily obtain more information from the on-site participants, debate the proper course of action, and discuss with those in the simulator how best to proceed.

Dimension 11: Feedback Method Accompanying Simulation

As in real life, one can learn a great deal just from simulation experiences themselves, without any additional feedback. For many simulations, specific feedback is provided to maximize learning. With screen-based simulators or virtual reality systems, the simulator itself can provide feedback about the participant's actions or decisions,[46] particularly for manual tasks where clear metrics of performance are readily delineated. [47] [48] More commonly, human instructors provide feedback. This can be as simple as having the instructor review records of previous sessions that the learner has completed alone. For many target populations and applications, an instructor provides real-time guidance and feedback to participants while the simulation is going on. The ability to start, pause, and restart the simulation can be valuable. For the most complex uses of simulation, especially when training experienced personnel, the typical form of feedback is a detailed postsimulation debriefing session, often using audio-video recordings of the scenario. Waiting until the scenario is finished allows experienced personnel to apply their collective skills without interruption, and then allows them to see and discuss the advantages and disadvantages of their behaviors, decisions, and actions.

Dimension 12: Organizational, Professional, and Societal Embedding of Simulation

Another important dimension is the degree to which the simulation application is embedded in an organization or industry.[19] Being highly embedded may mean that the simulation is a formal requirement of the institution or is mandated by a governmental regulator. Another aspect of embedding would be that, for early learners, the initial (steep) part of the learning curve would be required to occur in a simulation setting before the learners are allowed to work on real patients under supervision. Also, complete embedding of simulation into the workplace would mean that simulation training is a normal part of the work schedule rather than an "add-on" activity attended in the "spare time" of clinicians.

Conceptual Issues about Patient Simulation

"The key is the programme, not the hardware," was a truth about simulation learned early in aviation.
Using simulators in a goal-oriented way is equally or more about the conceptual aspects of the technique than it is about the technology of the simulation devices. An understanding of the conceptual and theoretical aspects of the use of simulation techniques can be helpful for determining the right applications of the technique and the important matchups that must be made in the design and conduct of simulation exercises to get the best results. When used most effectively, simulation can be, to borrow a line from the band U2, "even better than the real thing."[20] The concepts discussed in this section concern the nature of "realism" and "reality" as they apply to simulation and the way these issues relate to the goals of simulation endeavors, which together make simulation a complex social undertaking. The ideas and concepts presented here were largely contributed by Peter Dieckmann and his adaptation of broader psychological concepts to simulation in medicine.

Reality and Realism of Simulation

Unless we are dreaming about it or are unrepentant solipsists, a simulation exercise is always "real" (it is actually happening), but it may or may not be a "realistic" replication of the reality that is the target of the exercise. Realism addresses the question of how closely a replication of a situation resembles the target. A key distinction is between a simulator (a device) and a simulation (the exercise in which the device is used). A simulator could be indistinguishable from a real human being (e.g., a standardized patient actor) and yet be used in an implausible and useless fashion. Conversely, certain kinds of realism (see later) can be evoked by simulation exercises that use very simple simulators or even no simulator at all (as in role-playing, when the participants in a sense "become the simulator"). Merely creating a realistic simulation does not guarantee that it will have any meaning or utility (e.g., learning). [21] [22] A closely related aspect concerns the question of relevance and the social character of simulation.

Three Distinct Dimensions for Simulation Realism

A lot has been written about simulator and simulation realism using a variety of terms and concepts that are subtly different and often overlapping. These include physical fidelity (the device replicates physical aspects of the human body), environmental fidelity (the simulation room looks like an operating room), equipment fidelity (the clinical equipment works like, or is, the real thing), and psychological fidelity[23] (the simulation evokes behaviors similar to those in the real situation), as well as a variety of forms of validity, such as face validity ("looks and feels" real to participants), content validity (the exercise covers content relevant to the target situation), construct validity (the simulation can replicate performance or behavior according to predefined constructs about work in real situations), and predictive validity (performance during a simulation exercise predicts performance in an analogous real situation). [24] [25]

The results of studies investigating the roots and effects of simulation "realism" are not conclusive, partly because each may concentrate on a different aspect of this complex whole. It is simply not true that maximum "realism" is either needed or desired for every type of simulation endeavor.
For some applications with some target populations, it can be highly advantageous to reduce the realism to heighten the learning experience.[26]

In 2007, we published an article attempting to clarify some issues about realism, reality, relevance, and the purposes of conducting simulations.[21] We applied the model of thinking about reality developed by the German psychologist Laucken to the realism of simulation.[21] Laucken described three modes of thinking: physical, semantical, and phenomenal, which have been renamed the physical, conceptual, and emotional-and-experiential modes by colleagues in Boston.[27]

Physical Mode

The physical mode concerns aspects of the simulation that can be measured in fundamental physical and chemical terms and dimensions (e.g., centimeters, grams, and seconds). The weight of the mannequin, the force generated during chest compressions, and the duration of a scenario all are physical aspects of the simulation reality. Existing simulator mannequins have many "unrealistic" physical elements despite their roughly human shape: they are made of plastic, not flesh and bone; they may have unusual mechanical noises detectable during auscultation of the chest; the "skin" does not change color. Some physical properties are not readily detectable and can be manipulated. Some clinical equipment used in mannequin-based simulation is fully functional and physically identical to the "real thing," although in some cases functional physical limitations may have been introduced for convenience or for safety.[24] Labeled syringes may contain only water instead of opioids, or a real defibrillator may have been modified so that it does not actually deliver a shock (one manufacturer sells a "Hollywood defibrillator"). That certain physical properties and functions have been altered is not usually apparent to participants, at least without special briefings or labels.

Semantical Mode

The semantical mode of thinking concerns concepts and their relationships. Within the semantical mode, a simulation of hemorrhage might be described in conceptual terms as "bleeding" at flow rate X, beginning at time Y, occurring at site Z, and associated with a blood pressure of B that is a decrease from the prior value of A. In this mode of thinking, it is irrelevant how the information is transmitted or represented. The same pieces of information could be represented using a vital signs monitor, a verbal description, or the tactile perception of decreasingly palpable pulses. The semantical recoding of physical objects is the cornerstone of simulation. It allows the simulation exercise to represent a real situation, and it allows water-filled syringes to be treated as if they contain a drug.

Phenomenal Mode

The phenomenal mode deals with the "experience," including the emotions and beliefs triggered by the situation. For many purposes, providing high phenomenal realism is the key goal, and physical realism and semantical realism are merely means to this end.

Relevance versus Reality

A naive view of simulation would suggest that greater realism in all modes leads to better achievement of the goals of simulation; this view has been criticized repeatedly. [23] [28] [29] Simulation is a complex social endeavor, conducted with different target populations for different purposes. The relevance of a simulation exercise concerns the match between the characteristics of the exercise and the reasons for which the exercise is conducted.
Different elements of realism are emphasized or sacrificed to maximize the relevance of a simulation exercise. When training on invasive procedures, it is typical to forgo phenomenal realism and emphasize physical and semantical realism so that psychomotor skills can be the focus. Semantical realism may be sacrificed especially to help early learners. Situations that might become lethal very quickly (e.g., trigger a cardiac arrest) may be slowed down so that inexperienced clinicians can try to think their way out of the problem. If such a situation were allowed to evolve at its normal speed, it would