A Minimal Trusted Computing Base for Dynamically Ensuring Secure Information Flow
Trusted Computing

Outline
♦ The TCG Guidelines
♦ What is TCG?
♦ The Core Component – TPM
♦ TPM provides:
– Secure Input & Output
– Memory curtaining / Protected execution
– Sealed storage
– Remote attestation
♦ System Layout based on TCG
♦ Controversy

Why Are Systems Insecure?
♦ Commodity OSes are too complex to build secure applications upon
♦ Commodity OSes poorly isolate applications
♦ Only weak mechanisms for authentication, making secure distributed applications difficult
♦ No trusted path between users and programs

Idea: Trusted Computing
♦ Minimal trusted computing base
– Implemented in a tamper-resistant hardware chip
♦ Provides basic security capabilities
– Sealed storage
– Remote attestation of the machine’s state
– Curtained memory
– Secure input and output
♦ “Bootstrap” security from the kernel to applications
– Prevent malicious code from running in the kernel
– Remotely “attest” that you are running a particular software stack (from OS to applications)

Business Objectives
♦ Prevent use of unlicensed software
♦ Digital rights management (DRM)
– Prevent execution of unlicensed applications
– Idea: before a streaming service releases music for your computer, you must prove that there is no ripping software running in your execution environment
♦ Law enforcement and intelligence
♦ “The mother(board) of all Big Brothers” – Lucky Green

Trusted Computing Group (TCG)
♦ Formed in Spring 2003; adopted the specifications of the TCPA (Trusted Computing Platform Alliance), which was founded in 1999
♦ Core members
– AMD, Infineon, HP, IBM, Intel, Microsoft, Sun
♦ Mission
– To develop, define, and promote open standards for hardware-enabled trusted computing and security technologies
♦ Working groups of the TCG
– Infrastructure, Mobile, PC Client, Server, Software Stack, Storage, Trusted Network Connect, Trusted Platform Module (TPM)

TCG Architecture Overview
♦ Trusted computing security ecosystem
♦ Reference PC platform containing a TCG Trusted Platform Module

Idea: Use Hardware
♦ Trusted Platform Module (TPM)
– “Smartcard soldered to the motherboard”
– Cheap, fixed-function, tamper-proof hardware device
• Contains at least an AES key and an RSA key pair
• “Platform configuration registers” (PCRs) store the hash of the currently running OS and possibly applications
♦ Must be close to the chipset
– Involved in OS initialization; can’t be a real smartcard
♦ Contains other security capabilities
♦ Requires changes to the BIOS, OS, and applications

TPM in the Real World
♦ $7 chip
– Many manufacturers: Atmel, Infineon, National, STMicro
♦ Installed in many desktops and notebooks
– IBM/Lenovo, HP, Fujitsu
♦ Used in some secure systems software
– File encryption: Vista, IBM, HP, Softex
– Attestation for enterprise login: Cognizance, Wave
– Single sign-on: IBM, Utimaco, Wave

The TPM: A Reality
♦ Infineon, National Semiconductor, Atmel, and STMicroelectronics already offer TCG-compatible components
– Infineon SLD9630TT TPM, Atmel AT97SC3201, National SafeKeeper PC21100
♦ Other manufacturers will follow soon, e.g., STMicroelectronics with the ST19WP18-TPM

Core Features
♦ Separate protected execution environment for applications that need higher security
– Strong process isolation
♦ Privileged cryptographic services for these apps
♦ Secure path to and from the user
♦ Big idea: “project trust” into the main OS

TPM Components
♦ Generate and use RSA keys
♦ Provide long-term protected storage of the RSA root key
♦ Store measurements in PCRs
♦ Use anonymous identities to report PCR status
♦ On-chip building blocks: RNG, RSA engine, non-volatile storage, key generation, PCRs, anonymous identities, opt-in

Non-Volatile TPM Memory
♦ Endorsement key (EK)
– Unique RSA key, created once for the life of the TPM at the time of manufacture
• Proves that the TPM is genuine
– Certified by the TPM manufacturer
– Root of the attestation chain
♦ Storage root key (SRK) and owner password
– Generated when the user takes ownership
♦ Persistent flags
– For example: has ownership been taken?
Code “Identity”
♦ In the trusted computing model, the host always knows what code is running on it
– Can assign access rights to code identities
♦ Booting the kernel causes its hash to be computed and stored in a read-only, tamper-proof register
– The “platform configuration register” (PCR)
♦ The kernel recursively provides similar features for applications executing on the system
– Think of the hash of the code as the code’s identity

Platform Configuration Registers
♦ At least 16 PCRs on chip; each stores a SHA-1 hash
♦ Initialized to a default value (e.g., 0) at boot time
♦ PCR values can be read and updated at runtime
– TPM_Extend(n, D) stores SHA-1(PCR[n] || D) in PCR[n]
– TPM_PcrRead(n) reads the value of PCR[n]
♦ The TPM can save PCR values on shutdown and restore them on restart
– TPM_SaveState and TPM_Startup(ST_STATE)

Bootstrapping the Trust Chain (Transitive Trust)
♦ A secret key is embedded in hardware and signed (certified) by the hardware vendor
♦ Hardware certifies firmware
♦ Firmware certifies the boot loader
♦ The boot loader certifies the OS
♦ The OS certifies applications, virtual machines, etc.

Using PCRs
♦ PCR[n] is initialized to 0 at startup
♦ BIOS boot block:
– Calls TPM_Extend(n, <BIOS code>)
– Loads and runs the BIOS post-boot code
♦ BIOS:
– Calls TPM_Extend(n, <MBR code>)
– Loads and runs the MBR
♦ Master boot record (MBR):
– Calls TPM_Extend(n, <OS loader code, config>)
– Loads and runs the OS loader, and so on
♦ What does this operation do? It accumulates a hash chain over every component loaded during boot

Component Certification
A component wanting to be certified:
♦ Generates a public/private key pair
♦ Makes an ENDORSE call to the lower-level component
♦ The lower-level component generates and signs a certificate containing:
– The SHA-1 hash of the attestable parts of the higher component
– The higher component’s public key and application data

Secure Input and Output
♦ Isolation, sealed storage, and attestation aren’t enough to keep secrets safe
♦ Users can be fooled into thinking they’re talking to a trusted system when they’re not
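The TPM_Extend measurement chain from the “Platform Configuration Registers” and “Using PCRs” slides can be sketched in a few lines of Python. This is a hypothetical simulation for illustration only, not TPM driver code; note that a real TPM 1.2 extend takes a caller-supplied 20-byte digest, whereas here the raw data is hashed first for convenience:

```python
import hashlib

PCR_SIZE = 20  # SHA-1 digest length in bytes

def extend(pcr: bytes, data: bytes) -> bytes:
    """TPM_Extend semantics: PCR[n] <- SHA-1(PCR[n] || SHA-1(data))."""
    measurement = hashlib.sha1(data).digest()
    return hashlib.sha1(pcr + measurement).digest()

# Simulated boot chain: each stage measures the next before running it.
pcr = bytes(PCR_SIZE)                    # initialized to 0 at startup
pcr = extend(pcr, b"<BIOS code>")        # BIOS boot block measures the BIOS
pcr = extend(pcr, b"<MBR code>")         # BIOS measures the MBR
pcr = extend(pcr, b"<OS loader code>")   # MBR measures the OS loader

# The final value depends on every stage and on their order, so changing
# any measured component (or the order of loading) yields a different PCR.
print(pcr.hex())
```

Because the register can only be extended, never set directly, software loaded later cannot forge the measurement of software loaded earlier.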
♦ I/O channels must be protected from sniffing
– Keyboard, frame buffer, etc.
♦ A protected path between the user and the application is needed

Memory Curtaining
♦ Memory curtaining has the hardware keep programs from reading or writing each other’s memory
♦ Even OS access is denied
♦ Information is secure even from an intruder with control over the OS

Sealed Storage
♦ Protects private information by encrypting it with a key derived from the corresponding hardware and software
♦ Data can only be read by the same combination of software and hardware
– Example: a web server’s SSL private key that can only be read by an unmodified copy of the server’s code
♦ Prevents reverse-engineering of software
– If the MBR or OS is changed, the software won’t load
♦ Not a perfect solution
– Updating the OS, an application, or its config requires re-sealing

Sealing Process
♦ TPM_TakeOwnership(OwnerPassword, …)
– Creates a 2048-bit RSA storage root key (SRK)
– Can only be done once (by the IT department or the computer owner)
♦ Optional: TPM_CreateWrapKey
– Creates more RSA keys certified by the SRK
– Each key is identified by a 32-bit keyhandle
♦ TPM_Seal – encrypt data using an RSA key
– Arguments: keyhandle (which TPM key to use), password for using that keyhandle, PCR values to embed, symmetric key
– Returns an encrypted “blob” (under the symmetric key)

Key Features of Sealed Storage
♦ TPM_Unseal decrypts the “blob” only if the current PCR values match those embedded in the blob
– Only certain applications can decrypt the data
– Changing the MBR or OS kernel changes the PCR values
♦ Why can’t an attacker disable the TPM until after boot, then extend the PCRs with whatever he wants?
– Root of trust: the BIOS boot block
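The seal/unseal binding described in the slides above can be modeled with a small Python sketch. Everything here is illustrative: the hash-based stream cipher and the `root_key` argument stand in for the SRK-protected RSA keys a real TPM uses, and a real blob would carry an authenticated PCR digest rather than the raw values:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher built from repeated hashing -- illustration only.
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def seal(root_key: bytes, pcr: bytes, data: bytes):
    """Toy TPM_Seal: bind data to the current PCR value."""
    k = hashlib.sha256(root_key + pcr).digest()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(k, len(data))))
    return pcr, ct  # the blob records the PCR values it was sealed under

def unseal(root_key: bytes, current_pcr: bytes, blob) -> bytes:
    """Toy TPM_Unseal: decrypt only if the current PCRs match the sealed ones."""
    sealed_pcr, ct = blob
    if current_pcr != sealed_pcr:
        raise PermissionError("PCR mismatch: platform configuration changed")
    k = hashlib.sha256(root_key + current_pcr).digest()
    return bytes(a ^ b for a, b in zip(ct, _keystream(k, len(ct))))

# Example: a server key sealed under the boot-time PCR value can only be
# recovered while the platform measures to that same value.
blob = seal(b"storage-root-key", bytes(20), b"SSL private key bytes")
assert unseal(b"storage-root-key", bytes(20), blob) == b"SSL private key bytes"
```

A changed MBR or kernel produces a different PCR value, so `unseal` refuses, which is exactly the re-sealing problem the slides note for OS and application updates.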
♦ Rollback attacks are possible
– For example, “undo” security patches by opening the blob with an old version of the application

TPM Counters
♦ The TPM must support at least four hardware counters
– Increment rate: every 5 seconds for 7 years
♦ Provide time stamps on encrypted blobs
♦ Support DRM applications
– Example: “music will play for 30 days only”

Remote Attestation: Are You a Dog?
♦ On the Internet, no one knows you are a dog
♦ On the Internet, no one knows whether you have a proper configuration

Attestation: Definition
♦ Remote attestation allows changes to the user’s computer to be detected
♦ Hardware generates a certificate stating what software is currently running
♦ Combined with public-key encryption to present the certificate to a remote party
♦ Information that could be attested to includes:
– Hardware on the platform
– BIOS
– Configuration options
– And much more

Attestation: Promise
♦ TCG never lies about the state of measured information
♦ This requires:
– Accurate measurement
– Protected storage
– Provable reporting of measurements

Remote Attestation
♦ Goal: prove to remote entities what software (OS, applications) you are running
♦ A remote entity (e.g., a digital content provider) can request attestation of state via the Internet
♦ What can be proved?
– The platform is in an approved configuration
• The owner of the machine doesn’t have privileged access to the CPU
– The OS and applications have not been modified
• Or even that they are licensed, with maintenance fees paid
– Only approved applications are loaded

Attestation Examples
♦ A financial institution allows data download only if the computer’s OS has all current security patches
♦ A laptop can connect to the corporate network only if it runs authorized software
♦ Multiplayer game players can join the game only if their game clients have not been modified
♦ A music store allows music download only if there are no unauthorized players installed
Attestation Process
♦ Create an attestation identity key (AIK)
– Known only to the TPM; a public-key certificate is issued only if the certificate for the EK (endorsement key) is valid
• Recall that the EK is unique to the TPM and stored in hardware
♦ Sign PCR values using TPM_Quote
– Arguments: keyhandle (which AIK to use), password for this keyhandle, list of PCRs to sign, a 20-byte challenge from the remote server, and additional user data
– Why is the challenge needed? It prevents replay of a quote recorded during an earlier (clean) boot
♦ Return the PCR values + signature

How Attestation Should Work
♦ The app generates a public/private key pair, calls TPM_Quote(AIK, PcrList, chal, pub-key), and obtains a certificate
♦ The remote server sends an attestation request (20-byte challenge)
♦ (SSL) key exchange using the certificate; the server validates:
1. The certificate issuer
2. The PCR values in the certificate
♦ The server then communicates with the app over the SSL tunnel
♦ Attestation should include key exchange
♦ The application must be isolated from the rest of the system

Nexus OS [Shieh et al. at Cornell]
♦ Attesting to hashed kernel and application code is not always feasible
– Too many possible software configurations
♦ Better approach: attest to code properties
– For example, “the application never writes to disk”
♦ Nexus OS supports general attestation statements
– “The TPM says that it booted Nexus; Nexus says that it ran a checker with hash X; the checker says that application A has property P”

Attestation Issues
♦ Attestation only certifies what code was loaded
– It does not attest the current state of a running system
– Code could have been compromised after loading, e.g., by exploiting a vulnerability
♦ May interfere with security software
– A malicious music file exploits a bug in a music player
– TCG prevents anyone from getting the music file in the clear, so how does an anti-virus company develop a defense?
♦ Exposure of a single endorsement key is deadly
– Using an exposed key in a TPM emulator, one can attest to anything without actually running it

Privacy Issues in Attestation
♦ Each trusted machine has sets of unique AES and RSA hardware keys
– Unique identifiers may be used to track user behavior
– Compare the Intel CPUID fiasco
♦ Basic approach: opt-in
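The quote-and-verify exchange in the “Attestation Process” slide can be sketched as follows. This is a toy model: an HMAC under a shared secret stands in for the AIK’s RSA signature, and the function names are illustrative, not the real TPM command interface:

```python
import hashlib
import hmac

def tpm_quote(aik_secret: bytes, pcrs: dict, pcr_list: list, challenge: bytes):
    """Toy TPM_Quote: sign selected PCR values together with a fresh challenge.
    An HMAC stands in for the AIK's RSA signature."""
    quoted = b"".join(pcrs[n] for n in pcr_list)
    sig = hmac.new(aik_secret, quoted + challenge, hashlib.sha1).digest()
    return quoted, sig

def verify_quote(aik_secret: bytes, quoted: bytes, sig: bytes,
                 challenge: bytes) -> bool:
    """Remote server side: recompute and compare in constant time."""
    expected = hmac.new(aik_secret, quoted + challenge, hashlib.sha1).digest()
    return hmac.compare_digest(sig, expected)

# The 20-byte challenge is a fresh nonce from the remote server. Because the
# signature covers it, a quote recorded during an earlier (clean) boot cannot
# be replayed in answer to a later request.
```

A verifier that reuses challenges loses exactly this freshness guarantee, which is why the slide insists the challenge come from the remote server per request.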
– The user designates what software can access the sealed storage and authentication functions that use the keys
♦ Authentication key disclosure is strictly controlled
– Access to the RSA public-key components is restricted
– Only one export of the RSA public key per power cycle

Pseudo-Identities
♦ If every party I communicate with needs my hardware RSA public key to encrypt information for me, the key becomes a platform ID
♦ Solution: pseudo-identity
– Generate a temporary RSA key pair
– Use the hardware key once to certify the pseudo-identity key, then use only the pseudo-identity keys
♦ Needs a third-party certification authority (“Privacy CA”) for certifying temporary keys

Illustration: TCG-Based Security Can Eliminate Security Attacks
♦ A worm spreads from a single PC across the network
– With TCG standards, network access is denied to the infected PC, preventing worm propagation
♦ A rogue access point provides an avenue for a war driver to sniff the network
– With TCG, the rogue access point is immediately recognized as an untrusted device and denied access to the network
♦ A thief steals a PC with cleartext confidential data
– With TCG, the thief steals a PC with encrypted confidential data

Functional Layout
♦ TPS – Trusted Platform Subsystem
– BIOS and drivers
– ALL operations come through the TPS
♦ TPM – Trusted Platform Module
– Hardware and microcode
– Protected functionality
– Shielded locations
♦ Requests flow from the TPS to the TPM

System Architecture
♦ OS-present side: application (ring 3), library, OS/driver (ring 0), library, TCG security driver, middleware, OS-present TPS security API
♦ OS-absent side: OS-absent library, OS-absent TPS security API
♦ Hardware: BIOS, TPM hardware and microcode
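The pseudo-identity flow from the slides above can be sketched as a three-party exchange. This is a toy model with illustrative names: HMACs stand in for RSA signatures, so in this symmetric sketch the Privacy CA must share the hardware key, whereas a real Privacy CA would instead verify an RSA signature against the EK’s manufacturer certificate:

```python
import hashlib
import hmac
import secrets

hw_key = secrets.token_bytes(32)  # platform-unique hardware key (stays private)
ca_key = secrets.token_bytes(32)  # Privacy CA's signing key

def request_certification(pseudo_pub: bytes) -> bytes:
    # Platform side: prove (once) that the temporary key comes from
    # genuine hardware, using the hardware key.
    return hmac.new(hw_key, pseudo_pub, hashlib.sha256).digest()

def privacy_ca_issue(pseudo_pub: bytes, proof: bytes) -> bytes:
    # Privacy CA: check the hardware proof, then certify the temporary key.
    expected = hmac.new(hw_key, pseudo_pub, hashlib.sha256).digest()
    if not hmac.compare_digest(proof, expected):
        raise PermissionError("not a genuine platform")
    return hmac.new(ca_key, pseudo_pub, hashlib.sha256).digest()

def remote_party_verify(pseudo_pub: bytes, ca_cert: bytes) -> bool:
    # Remote parties trust only the Privacy CA's key; they never see the
    # platform-identifying hardware key, which preserves unlinkability.
    expected = hmac.new(ca_key, pseudo_pub, hashlib.sha256).digest()
    return hmac.compare_digest(ca_cert, expected)
```

The privacy property rests on the hardware key being used exactly once, toward the CA: every later interaction uses only the CA-certified temporary key.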
Trusted Computing
Lisa Thalheim[1]

The widespread use of the term “intellectual property” in the discussion on access to knowledge and information is a fortuitous circumstance for those who profit from the sale of nonphysical goods. The term suggests that texts, music, and know-how are exactly the same as cars, houses, or televisions. There are clear rules for defining ownership of material goods, the rights of the owner, and what we understand as theft of such a good.

The mantra of intellectual property cannot, however, hide the fact that there are important differences between a digital piece of music and a vehicle. One such difference is that the musical piece – in contrast to the vehicle – can be shared simultaneously among any number of users without anyone suffering a loss. Another difference is that a vehicle cannot be copied any number of times and the copies distributed – which is possible in the case of a digitally available piece of music. We have already witnessed how those wishing to sell music respond to the possibility of reproducing musical pieces at virtually no cost. One need only look, for example, at the criminalization and vigorous prosecution of file-sharing network users by the music industry. The term “pirated copy” itself serves as an example of how public perception is influenced. It relates the unauthorized reproduction of music and text to the criminal offense of robbery, which, by definition, is linked to the use of force. There are also efforts at the technological level to quash unlimited reproduction by preventing the copying of musical pieces using software and hardware.
This method has the advantage that the interested parties – mostly international corporations – do not have to rely on policy makers and the legal system.

Trusted computing is a technology that attempts to broadly implement this kind of artificial restriction on the possibilities of digital products, even though its creators vigorously dispute having had this intent in its development.

Trusted computing in itself is difficult to grasp. Not only is it complicated from a technological point of view, but it also combines various features, some of which are desirable and useful, others of which are problematic and dangerous – depending on who is deploying the technology and for what purpose. Advocates praise trusted computing as a solution for protecting against computer viruses and other attacks. Opponents vocally and energetically criticize its damage potential – because industry associations, manufacturers, and possibly also governments are usurping the user’s control over his own computer. What is it about this technology that causes such a stir? And who is right – the advocates or the opponents of trusted computing?

In 1998, some of the major computer industry corporations founded the Trusted Computing Platform Alliance (TCPA). This industry alliance was then renamed the Trusted Computing Group (TCG) in 2003. The founding members were chipmakers AMD, Infineon, and Intel; hardware makers Hewlett-Packard, IBM, and Sun Microsystems; and software maker Microsoft. The Trusted Computing Group’s website meanwhile lists over 140 member companies. Trusted computing can be viewed as an approach to solving problems that we have with our globally linked and ubiquitous computer systems: computer viruses, attacks on servers and private PCs, and, consequently, the loss of confidential information.

[1] The author is a student of computer science and philosophy at Humboldt University in Berlin.

Some of the founding companies developed their own projects.
Microsoft initially called its project Palladium and then NGSCB, which stands for Next Generation Secure Computing Base. This project covers both hardware and software: NGSCB attempts to develop fully trusted computer systems, including software and hardware. It thus differs from Intel’s Safer Computing Initiative, which is mainly concentrated on the hardware aspects of trusted computing.

In parallel with this effort, the TCG members are developing the TCPA specifications: a series of documents detailing how trusted computer systems are to be implemented. With these specifications, the TCG has proposed a de facto standard for how the basic security problems of computer systems are to be solved in the future.

The centerpiece of the TCPA is the Trusted Platform Module (TPM), a small chip that is cheap to build and is supplied as an integral component of computers, printers, network hardware, and entertainment electronics. That means that anyone who purchases hardware is simultaneously buying the TPM – whether consciously or unconsciously. Most current notebooks already contain such a TPM. Both the U.S. Army and the U.S. Department of Defense require that every newly purchased computer contain a TPM.[2]

The function of a TPM can be compared to that of a notary public. The TPM can store data confidentially and only distribute it under certain, predetermined conditions, and it can certify information about the status of the computer system. It can reliably determine whether the computer has loaded a predetermined set of programs, whether the licensing provisions are being observed for those programs, or whether they have been manipulated – whether by a virus or knowingly by the user.
The TPM can then present this information to the computer user. However, it also offers the possibility of providing this information to third parties – say, the operator of a website or an online music provider with whom the user interacts.

The latter feature is one of the main criticisms leveled by opponents of trusted computing, because this function enables online content providers to determine, for example, whether a user is working with a “trusted” software environment. From the provider’s perspective that would be a software environment, say, that makes it impossible to copy legally acquired content – a document, a piece of music, a video – onto a computer or to burn it onto a CD. So it is conceivable that providers might view only Microsoft Windows with Microsoft’s MediaPlayer as trusted and simply deny their services to anyone who does not use such a software environment. While the user would be free to deactivate the TPM, this fact could, in turn, be determined by the provider and serve as a reason to exclude the user from the service in question.

The other criticism of the TCPA specification is that the user is granted only limited control over his computer. The TPM works on the basis of a secret key that is cryptographically different for each TPM. Practically all TPM functions are built on this key, and, as no two TPMs in the world have the same key, it in turn makes it possible to identify a TPM. Users, however, are unable to learn or change this key; the manufacturer burns the key onto the TPM during production. The TCG justifies this decision with the argument that it serves to protect the user himself. If the user does not know the key, he cannot erroneously reveal it to an attacker.

A TPM, in principle, offers some useful functions that can help users better prevent important data from being lost or compromised.

[2] /policy-guidance/dod-dar-tpm-decree07-03-07.pdf and /ciog6/news/500Day2006Update.pdf
Yet it still seems too early to estimate the mid-term effects of the implementation of trusted computing. The technology is very complex and so far has not been widely discussed in public. It will also take some time before applications begin using TPMs on a broad basis. What these applications will look like and what they will actually do still remains widely unclear.

What is clear, however, is that trusted computing by no means offers the promised universal solution to all the problems of computer security. Instead, the cited risks associated with the deployment of trusted computing are already becoming evident.

A technological assessment of the TCPA specification leads to the conclusion that the technology is unlikely to have a dramatic impact on the PC software market. It is likewise difficult to predict whether trusted computing will have significant negative effects on free software. But the existence and widespread use of TPMs in all computers weakens the hand of the individual user vis-à-vis the computer and media industry. The technology has considerable potential to shift the power relationship further in favor of major corporations and industrial alliances.

Even if trusted computing has less influence in the PC sphere, we are likely to see a greater influence in the area of specialized devices, especially entertainment electronics. Here, it is already practical to allow the user only minimal control over the device, as recently shown by devices such as the Apple iPod, iPhone, and Amazon’s Kindle. The TCPA specification is ideal for bringing to market reliable and nearly unavoidable digital rights management (DRM)[3] applications on devices. Trusted computing is no longer a technical framework that can be used in a variety of ways. Rather, the companies behind trusted computing are primarily representing their economic interests by advancing this technology.
These interests coincide in part with those of the user; in part, they also aim to restrict the freedom and rights of the user (and hardware owner) as far as possible. Last but not least, the TCPA can also be understood as an attempt to technologically entrench social acceptance of the concept of "intellectual property," without regard for the outcome of ongoing political, social, and legal debates. It is up to users to reject the loss of control associated with the TCPA and to demand a technological alternative that treats them not as opponents or defenseless victims but as partners and citizens.

Note: Digital rights management (DRM) is a catch-all phrase for technological measures undertaken to guarantee the enforcement of rights to digital content, such as copyrights to documents or music. A frequent application of DRM technologies is, for example, protection against the copying of document or music files.
Trusted Computing

The concept of trusted computing has become increasingly important in the digital age, where the security and integrity of our data and devices are paramount. Trusted computing refers to a set of technologies and standards that aim to enhance the trustworthiness of computing systems, ensuring that they behave in a predictable and reliable manner. This is particularly crucial in an era where cyber threats, data breaches, and malicious software pose significant risks to individuals, businesses, and governments.

At the core of trusted computing is the idea of establishing a trusted platform, which is a computing environment that can be relied upon to perform specific tasks or functions in a secure and verifiable manner. This is achieved through the integration of hardware and software components, such as secure processors, trusted platform modules (TPMs), and trusted execution environments (TEEs), which work together to create a chain of trust that extends from the hardware to the software and applications running on the system.

One of the primary goals of trusted computing is to provide users with a high level of assurance that their devices and the data they store or process are protected from unauthorized access, modification, or tampering. This is accomplished through the implementation of security features like secure boot, which ensures that the system's firmware and operating system are authentic and have not been compromised, and remote attestation, which allows a device to prove its identity and the integrity of its software to a remote party.

Another important aspect of trusted computing is the ability to enforce access control and data protection policies. By leveraging secure hardware and software components, trusted computing systems can implement robust access control mechanisms that restrict the use of sensitive data or resources to authorized entities.
This is particularly valuable in scenarios where sensitive information, such as personal financial data, medical records, or national security information, needs to be protected from unauthorized access or misuse.

Furthermore, trusted computing can play a crucial role in the development of secure and reliable cloud computing environments. By providing a trusted platform for the execution of cloud-based applications and the storage of sensitive data, trusted computing can help address concerns about the security and privacy of cloud-based services, enabling organizations to take advantage of the benefits of cloud computing while mitigating the risks associated with it.

The impact of trusted computing extends beyond individual devices and cloud-based services. It also has significant implications for the broader Internet of Things (IoT) ecosystem, where the proliferation of connected devices has heightened the need for secure and trustworthy systems. By incorporating trusted computing principles into IoT devices, manufacturers can ensure that these devices are less vulnerable to cyber threats, such as botnets, malware, and unauthorized access, thereby enhancing the overall security and reliability of the IoT infrastructure.

However, the implementation of trusted computing is not without its challenges. Ensuring the proper integration and interoperability of the various hardware and software components involved can be a complex and resource-intensive process. Additionally, there are concerns about the potential for trusted computing to be used to restrict user freedoms or to enable surveillance and monitoring by governments or other entities.

To address these challenges, ongoing research and development efforts are focused on improving the scalability, flexibility, and transparency of trusted computing solutions.
This includes the exploration of new hardware architectures, the development of open-source trusted computing frameworks, and the establishment of industry-wide standards and best practices to ensure the widespread adoption and trustworthiness of these technologies.

In conclusion, trusted computing is a crucial component of modern digital security, providing a foundation for the development of secure and reliable computing systems, cloud-based services, and IoT ecosystems. As the threats to our digital lives continue to evolve, the importance of trusted computing will only grow, and it will be essential for individuals, businesses, and governments to embrace these technologies to protect their data, devices, and critical infrastructure.
ARIES-TM-015, November 7, 2001

A Minimal Trusted Computing Base for Dynamically Ensuring Secure Information Flow

Jeremy Brown, Thomas F. Knight, Jr.

1 Introduction

With each passing year, more and more valuable, confidential information is stored in government and commercial computer systems. Ensuring the security of those computer systems is a challenge with social, political, and technological aspects; computer networks, however, make the technological aspects particularly important, as computer systems are exposed to assault from remote sites. Two critical components of the technological computer security problem are access control and data dissemination control. Access control mechanisms prevent unauthorized parties from accessing (e.g., reading, modifying, or executing) confidential data or programs. Data dissemination control mechanisms prevent confidential data from being exposed to unauthorized parties, either by accident or due to malicious code which has gained read access to the data; e.g., a malicious or erroneous program should never be able to read a "Top Secret" value and write it out as an "Unclassified" result.

In this memo we present two contributions addressing the problem of controlling data dissemination, also known as ensuring secure information flow. First, we present a sound, flexible model which dynamically ensures secure data flow with respect to a lattice-based information flow policy, with security classification on a per-word basis.
Second, we present a set of hardware mechanisms, most notably the Hash Execution (HEX) unit, which enable the practical implementation of our model. We believe that recent trends in logic and memory density and costs make the architectural overhead of our mechanisms small, and that they are more than offset by the significant benefits they bring to system security.

Project Aries Technical Memo ARIES-TM-015, Artificial Intelligence Laboratory, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA. Research performed under DARPA/AFOSR Contract Number F306029810172.

Our dynamic strategy has several advantages over static (compile-time) verification of secure information flow. It is not conservative in its enforcement of security policy; static verification techniques must reject some programs which are, in fact, correct, whereas our dynamic strategy will safely execute them. Also, our strategy does not require that the security tag for each storage location be fixed; instead, our mechanism allows security tags to be adjusted according to a set of rules which guarantees that no information is leaked due to those adjustments.
Perhaps the most important advantage of our dynamic strategy over static verification strategies, however, is that it reduces the size of the required Trusted Computing Base (TCB). A TCB consists of a set of hardware and software mechanisms which, if correctly implemented, guarantee that regardless of any other code run on the system, security will not be violated. Verifying the correctness of a TCB is laborious at best (e.g., [21, 14]), and considerable work tends to go into minimizing its size (e.g., [20]). Compile-time program verification places the verifying compiler within the bounds of the TCB; unfortunately, a compiler is an extremely large, complex piece of software. By contrast, our dynamic mechanism guarantees security with a TCB composed of only a few simple hardware mechanisms and software routines.

The remainder of this paper is structured as follows. In Section 2, we discuss a handful of previous works in the computer security field, particularly with respect to secure data flow. In Section 3, we present an abstract architecture which ensures that information flow is secure according to a lattice model; we also present a general rule enabling authorized principals to safely declassify a selected datum without unintentionally leaking information about other data. In Section 4, we describe the hash execution (HEX) unit, a general-purpose processor component with particular applicability to our security model.
In Section 5, we discuss a few additional matters relating to the practical implementation of our abstract model. We conclude in Section 6.

2 Related Work

There has been a great deal of work on computer security in general and on secure information flow in specific, and we make no attempt here at identifying all or even most of it. Instead, we focus on a few key works which have direct relevance to this memo.

2.1 Schemes for dynamic, secure data flow

Several previous works describe schemes for dynamically ensuring secure data flow.

Lampson outlines the parameters of the problem in [12], suggesting that "confined" programs or procedures must be "memoryless".

Fenton [8] describes the Data Mark Machine, an abstract model which implements memoryless confinement. The Data Mark Machine includes a return-address stack used to allow safe declassification of the program counter: an unclassified process may push a return address onto the stack, and a classified process may then declassify itself by popping the unclassified address into its program counter.

Building on [8], Gat and Saal [10] address the additional problem of general-purpose registers, suggesting that any register which has a classified value written into it in the course of a classified computation must be returned to its original value when the program counter exits the classified region.

The Hydra [23, 3] operating system takes a different approach to implementing confined procedures, eschewing classifications in favor of permissions-based confinement. The caller of a routine can insist that the called routine may only write data into storage explicitly provided to the routine by the caller; in this way, the caller has complete control over where its data (or derivatives thereof) may be stored upon return from the called routine. Since Hydra does not actually tag data with security classifications, however, if the caller accidentally provides publicly reachable storage as an argument to the confined procedure, confidential information may easily be
revealed.

On the other hand, the ADEPT-50 [22] operating system does maintain security classifications, although on a rather coarse, per-file basis. Whenever a job creates a new file, the classification of that file is at least as great as that of all files the job has previously accessed; we refer to this type of scheme, in which the classification of a job or process cannot be lowered after it has been raised, as a "high-water mark" scheme. Since the goal of ADEPT's classification scheme is merely to prevent unauthorized access, there is no attempt made to prevent classified information from being written into a previously created, unclassified file.

The surveillance protection mechanism [11] dynamically tracks the classification of each variable and of the program counter; it assumes that only the output of a program will be visible to any other process, and refuses to display it if the program counter has been contaminated by overly classified data. The program counter classification can only increase; as with any other high-water mark scheme, the surveillance protection mechanism is therefore conservative in its security policy enforcement.

The Privacy Restriction Processor [19] attempts to ensure data flow security by associating classifications with each storage region (segment), and with each process counter. The process counter classification is monotonically increasing; thus the PRP is another high-water mark scheme. Additionally, the PRP allows the classification of each segment to monotonically increase; because there is no moderation of when these classifications may increase, such reclassification actually enables implicit information leakage as described in [4].

2.2 Lattice-based secure information flow

Denning [4] describes the lattice model of information flow that we use as the foundation for our dynamically secure abstract architecture; she also identifies several flaws and limitations of previous schemes for dynamically ensuring secure data flow which provided inspiration and
counter-examples for our design. The sequel work [5] presents a mechanism for compile-time verification of information flow security in a program with respect to a particular security lattice; every storage location is statically tagged with its security classification.

Myers and Liskov [17, 18] present a specific (and in our opinion particularly elegant) security classification scheme; equivalence classes on their security labels form a lattice, and thus essentially the same static certification techniques as described in [5] apply. Additionally, the structure of their security labels enables a tasteful mechanism for decentralized declassification of data by owning principals. Myers [16] describes a variation of the Java programming language which includes this security labeling scheme.

2.3 Capabilities

Finally, although we have in general made no attempt to address matters of access control, the correctness of our data-dissemination control model relies on the use of capabilities [6, 13] – hardware-protected pointers denoting a specific region of memory – to ensure that unallocated memory may not be observed by any process. In particular, we favor guarded-pointer style capabilities [2, 1] which efficiently encode base and bounds in the capability representation itself in a fashion which allows the capability to point directly at any word in its range, rather than just the first word.

3 Abstract Architecture

In this section, we describe an abstract architecture which dynamically ensures secure information flow when running arbitrary code. The architecture provides the following features:

• Precise (non-conservative) enforcement of a lattice-based security policy
• Dynamic security tags on a per-word basis
• Safe overwriting of values with values of different security levels
• Safe declassification of a program counter when its dependency on classified data ends (i.e., the PC is not stuck at the security "high-water mark")

At a high level, our goal is to provide confinement ([12]) by ensuring that no modifications
to machine state performed by a process running with a classified program counter may be observed by processes with inferior or orthogonal classifications – these processes can thus be said to "have no memory" of the operations performed by the classified process. This imposes restrictions on when a process may overwrite an existing value both in memory, where the act of overwriting must be invisible to processes with inferior/orthogonal classifications, and in registers, where the act of overwriting must be invisible to the process itself when it has escaped from the classified region of execution.

As per Denning [4], we enforce a security policy described by a lattice of security classes; ⊥ is the least restrictive class, and ⊤ is the most restrictive. We will denote the security class of an object o as ō. If information is allowed to flow from class a to class b, we will write a → b; if not, we will write a ↛ b. The → relation is transitive. ∀C: ⊥ → C, and ∀C: C → ⊤.

The class-combining operator ⊕ is a least-upper-bound operator on the class lattice such that a ⊕ b is the least restrictive class that encompasses all of the restrictions of a and b. ∀C: ⊥ ⊕ C = C, and ∀C: ⊤ ⊕ C = ⊤.

3.1 Components, conventions, and data representation

Our architecture features processes, a shared memory, input channels, and output channels.

Each process consists of a set of general-purpose registers, a program counter, and a special register stack whose purpose is explained in Section 3.4. The registers of a given process are private and may not be examined by any other process.

The machine word is the unit of information: each slot of memory contains one word, and each register contains one word. A word w consists of a pair (w̄, w_v) where w_v is the data stored in the word, and w̄ is the security class of the word. Words are read and written atomically.
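The lattice machinery above (⊥, ⊤, →, ⊕) can be made concrete with a minimal model of my own (not from the memo): a security class is represented as a set of restriction labels, so that the → relation is set inclusion and ⊕ is set union, which automatically satisfies transitivity and the ⊥/⊤ identities.

```python
# Sketch of the security-class lattice: a class is a frozenset of
# restriction labels; flows_to models "→" and join models "⊕".
from typing import Any, FrozenSet, NamedTuple

SecClass = FrozenSet[str]

BOTTOM: SecClass = frozenset()                    # ⊥: least restrictive
TOP: SecClass = frozenset({"secret", "crypto"})   # ⊤ of this tiny example lattice

def flows_to(a: SecClass, b: SecClass) -> bool:
    """a → b: information may flow from class a to class b."""
    return a <= b          # subset inclusion induces the lattice order

def join(a: SecClass, b: SecClass) -> SecClass:
    """a ⊕ b: least upper bound (most permissive common restriction)."""
    return a | b

class Word(NamedTuple):
    """A machine word w = (w̄, w_v): security class plus datum."""
    cls: SecClass
    val: Any

secret: SecClass = frozenset({"secret"})
assert flows_to(BOTTOM, secret)            # ∀C: ⊥ → C
assert not flows_to(secret, BOTTOM)        # secret data may not flow down
assert join(secret, frozenset({"crypto"})) == TOP
```

The label universe ("secret", "crypto") is hypothetical; any partially ordered label set with a least-upper-bound operator would serve equally well.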
Except in special cases, in the remainder of this paper we will not distinguish between a word of memory and a register in our flow-control rules; we will simply speak of a generic word w. When we speak of writing a new value v with class v̄ into a word w, we will generally denote the original word contents as w_i and the new word contents as w_{i+1}.

The program counter associated with a process p is tagged with a security class just like any other word; we will generally refer to the security class of p's program counter simply as the security class of p, and will denote it as p̄. When an operation updates p̄, we will generally denote the original class as p̄_i and the new one as p̄_{i+1}.

To ensure that unallocated regions of memory may not be observed by any process, pointers to memory are represented using capabilities denoting specific ranges of memory; hardware checks ensure that no process may access a region of memory for which it does not hold a capability.¹ To disable covert channels based on order of allocation, the specific memory address denoted by a capability may not be observed by non-TCB code. A capability is stored as the data component of a word just as any other datum would be.

Each input channel I is tagged with a static (unmodifiable) security class Ī. Each output channel O is tagged with a static security class Ō.

3.2 Operations at a single security level

Many operations will not adjust the security class of the program counter.

Process p may request memory from an allocation routine which is part of the TCB; the capability returned by the allocator has security class p̄, as do all the words in the segment to which the capability points.

Write attempts never adjust the program counter's security class. We note here that process p may freely write a value v into word w when p̄ = w̄ = v̄; we defer presentation of the general rule for legal writes to Section 3.6.
The primitive operation readable?(w) invoked by p returns true if w̄ → p̄ and false otherwise; in other words, it returns true only when p could examine the value or class of w without increasing p̄. Invoking readable? does not change p̄.²

¹ In point of fact, any scheme which prevented unallocated memory from being examined by any process would be adequate, e.g., a privileged "top-of-memory register" which was incremented during allocation. Capabilities, however, are the solution to so many other problems as well that we cannot resist their application in this case.

² We do not have an operation contrapositive to readable?, i.e., one which returns true if and only if p̄ → w̄. While such an operation seems attractive, in conjunction with the monotonically increasing security classes described in Section 3.6 it poses a channel for information leakage, so we must omit it.

Process p may examine p̄, and compare it for equality with any w̄ for which readable?(w) is true; p may examine the flow relationships between all classes C in the security lattice for which C → p̄.

3.3 Computing the security class of a result

Suppose that process p executes the sequential instruction

I_s: c = OP(a, b)

where OP is a non-branching operation and the instruction's storage location I_s has security class Ī_s. Since OP is non-branching, p's program counter after the operation is independent of the values of a or b; however, it is obviously dependent upon the fact that OP is non-branching! Thus,

p̄_{i+1} = p̄_i ⊕ Ī_s    (1)

Generally, we expect instructions in a given body of code to have the same classification, and thus in most practical cases p̄ will not be changed by non-branching instructions.

The class of the operation result is computed from the classes of the operands and the process itself, i.e.,

c̄ = p̄ ⊕ ā ⊕ b̄ ⊕ Ī_s    (2)

Where the result may actually be stored – that is, which register or memory slot is a legal target for c – is determined by the general rule for safe writing in Section 3.6.
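Equations (1) and (2) can be illustrated with a small simulation (an assumed sketch of mine, not the memo's hardware) that executes one non-branching operation over class-tagged words, with frozensets standing in for security classes and set union for ⊕:

```python
# Sketch of Eqs. (1)-(2): running a non-branching instruction
# I_s: c = OP(a, b) over words tagged (security class, value).
from typing import Any, Callable, FrozenSet, Tuple

Word = Tuple[FrozenSet[str], Any]   # (security class, value)

def exec_op(p: FrozenSet[str], i_s: FrozenSet[str],
            a: Word, b: Word,
            op: Callable[[Any, Any], Any]) -> Tuple[FrozenSet[str], Word]:
    p_next = p | i_s                 # Eq. (1): p̄_{i+1} = p̄_i ⊕ Ī_s
    c_cls = p_next | a[0] | b[0]     # Eq. (2): c̄ = p̄ ⊕ ā ⊕ b̄ ⊕ Ī_s
    return p_next, (c_cls, op(a[1], b[1]))

# A ⊥-classified process adds an unclassified and a "secret" operand:
p, c = exec_op(frozenset(), frozenset(),
               (frozenset(), 2), (frozenset({"secret"}), 3),
               lambda x, y: x + y)
assert c == (frozenset({"secret"}), 5)   # the sum is tainted "secret"
assert p == frozenset()                  # a non-branch leaves p̄ unchanged
```

Note how the result class is driven up by whichever of the PC, the operands, and the instruction's own storage location is most restrictive, exactly as Equation (2) prescribes.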
Now suppose that process p computes the conditionally branching instruction

I_b: if (a) goto TARGET

In this case, the new value of the program counter is plainly predicated on the value of a in addition to the instruction itself, and thus

p̄_{i+1} = p̄_i ⊕ ā ⊕ Ī_b    (3)

This leads to two conundrums. First, if the program counter always increases in security on every branch, how do we avoid inevitably driving every p̄ all the way up to ⊤? And second, since after a branch p is now running at an increased security level, how can it use any of the registers which were filled with lower-security values before the branch? The answer to both problems lies with the register stack.

3.4 The register stack

The register stack is an arbitrarily deep stack of pairs of the form (register-name, value). The security class of each pair is the class of the value. A process may inspect the register stack's contents as if they were any other values, but it may not modify the stack's contents except with the following three primitive operations.

3.4.1 Primitive operations

The primitive operation PUSH_RETURN(a), where a is an address, pushes the pair (PC, a′) onto the register stack, where the value of a′ is the same as that of a, but ā′ = p̄ ⊕ ā, i.e., a′ = (p̄ ⊕ ā, a_v). The program counter is incremented and p̄ is adjusted according to rule (1) as with any other non-branching operation.

The primitive operation PUSH_GPR(GPRID, n), where n is some value, pushes the old value o of the specified general-purpose register onto the stack in a pair (GPRID, o); note that o is the same on the stack as it was in the register – it is not combined with p̄! At the same time as o is placed on the stack, n′ is written into the register, where the value of n′ is the same as that of n, but n̄′ = p̄ ⊕ n̄, i.e., n′ = (p̄ ⊕ n̄, n_v).

The primitive operation POP pops the top pair (register-name, value) off of the register stack and places the popped value into the register named register-name.
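The three primitives can be simulated in a few lines. The following sketch is my own reading of the description above (the class names and the frozenset encoding of security classes are assumptions, not the memo's notation); it tracks tagged words through PUSH_RETURN, PUSH_GPR, and POP, including the safe re-lowering of p̄ when a PC entry is popped:

```python
# Simulation sketch of the register stack primitives over tagged words.
from typing import Any, Dict, FrozenSet, List, Tuple

Word = Tuple[FrozenSet[str], Any]     # (security class, value)

class Process:
    def __init__(self) -> None:
        self.pc: Word = (frozenset(), 0)          # program counter word
        self.regs: Dict[str, Word] = {}           # general-purpose registers
        self.stack: List[Tuple[str, Word]] = []   # (register-name, word) pairs

    def push_return(self, addr: Word) -> None:
        # push (PC, a') with a' = (p̄ ⊕ ā, a_v)
        self.stack.append(("PC", (self.pc[0] | addr[0], addr[1])))

    def push_gpr(self, reg: str, new: Word) -> None:
        # save the old register word unchanged, then write n' = (p̄ ⊕ n̄, n_v)
        self.stack.append((reg, self.regs[reg]))
        self.regs[reg] = (self.pc[0] | new[0], new[1])

    def pop(self) -> None:
        # restore the popped word into the named register; popping a PC
        # entry may lower p̄ back to its pre-branch class
        name, word = self.stack.pop()
        if name == "PC":
            self.pc = word
        else:
            self.regs[name] = word

p = Process()
p.regs["r1"] = (frozenset(), 7)
p.push_return((frozenset(), 100))       # remember the post-branch address
p.pc = (frozenset({"secret"}), 40)      # branching on secret data raises p̄
p.push_gpr("r1", (frozenset(), 9))      # r1 is now tainted by p̄
assert p.regs["r1"][0] == frozenset({"secret"})
p.pop(); p.pop()                        # unwind: restore r1, then the PC
assert p.regs["r1"] == (frozenset(), 7)
assert p.pc == (frozenset(), 100)       # p̄ is safely declassified
```

Because the return address was pushed before the branch, the popped PC word carries no information about the branched-on value, which is why the final declassification is safe.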
In the case of a GPR, this re-creates the state of the register prior to the PUSH_GPR operation; in the case of a PC, this sets the program counter to the address specified by the corresponding PUSH_RETURN.

More specifically, the effect of POP when a PC entry is on top of the stack is to set p's PC to the address chosen at the time of the corresponding PUSH_RETURN. Since the destination address was chosen at the time of the PUSH_RETURN, it contains no information as to the activities of the process since that time, and thus it is safe for p̄ to take on the security class of the popped address even though it may be less strict than the class of the PC value it is replacing!

Thus, we can prevent p̄ from monotonically increasing with every conditional branch by simply preceding each such branch with a PUSH_RETURN, and terminating each path of the conditional with a POP. Let us reiterate that since the post-conditional-branch address is pushed before the conditional branch itself is executed, the popped address contains no information regarding the value branched on, and therefore its class may safely replace whatever security class was imposed on the PC by the conditional branch.

The register stack is similar to the program counter stack of the Data Mark Machine [8] and the unclassified state-restoring mechanisms of [10].

3.4.2 Process initiation, error handling, and termination

A process is started with an empty register stack. A process exits by attempting to POP an empty register stack; there is no other exit mechanism.

A default error handler invokes POP repeatedly until a (PC, a) pair is popped or the stack is empty. A programmer may install other error handlers for various error conditions; regardless, an error handler will always run with the same security class, register set, register stack, etc. that the process had before it caused the error. This ensures that p's error remains invisible at any class C for which p̄ ↛ C.

3.5 Operations on I/O channels

Process p may only read values from input
channel I if p̄ = Ī; a read attempt which violates this rule raises an error which is handled at p̄. Since the error is predicated on information freely available to p, the success or failure of a read leaks no new information about I to p. Values read from I are initially assigned security class Ī.

Process p may send value v over output channel O only if p̄ = Ō and v̄ → p̄ (i.e., for p, readable?(v) is true). An attempt which violates this rule raises an error which is handled at p̄. Since the error is predicated on information freely available to p, the success or failure of a write leaks no new information about O or v to p.

3.6 A general rule for writing new values into words

In this section we present a rule allowing the class of each word w to monotonically increase over time, and formally show the rule to be safe. We also present an example which attempts to violate secure information flow using implicit flow, and show that it does not succeed under our rule.

3.6.1 The general rule

Reclassification Rule 1. Process p may write a value v into word w_i when w̄_i = p̄; the write sets w̄_{i+1} = p̄ ⊕ v̄, thus ensuring that regardless of v, p̄ → w̄_{i+1}. An attempt by p to write to a word w_i for which w̄_i ≠ p̄ signals an error condition which is handled at p̄.

To demonstrate the safety of this rule we must verify two things: first, that p learns nothing about the class or value of v from the success or failure of the write; and second, that p leaks no information about its activity to any process p′ for which p̄ ↛ p̄′.

To show that p learns nothing about the class or value of v, we simply note that the precondition for the write to succeed is independent of v; failure or success of the write is not predicated on any characteristic of v.

To show that p leaks no information to any process p′ for which p̄ ↛ p̄′, we reason as follows: since w̄_i = p̄, p̄ ↛ p̄′ is equivalent to w̄_i ↛ p̄′, which, taken in conjunction with p̄ → w̄_{i+1}, implies that w̄_{i+1} ↛ p̄′. As a result, for p′ both readable?(w_i) and readable?(w_{i+1}) are false; no information is leaked to p′ via the readable? routine.
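Rule 1 (together with the ⊥-process special case given below as Rule 2) can be sketched as a single checked write routine; the example then replays Fenton's implicit-flow program from Section 3.6.3 to show the blocked write. This is an illustrative model of my own, not the memo's hardware, and it assumes Ī_b = ⊥ so that branching on a raises p̄ to exactly ā:

```python
# Sketch of Reclassification Rules 1 and 2, then Fenton's example:
#   b := c := false; if a then c := true; if c then b := true
from typing import Any, FrozenSet, Tuple

Word = Tuple[FrozenSet[str], Any]   # (security class, value)

class WriteError(Exception):
    """Raised, and handled at class p̄, when a write is illegal."""

def write(p: FrozenSet[str], w: Word, v: Word) -> Word:
    if p == frozenset():            # Rule 2: a ⊥ process may write anywhere
        return v
    if w[0] != p:                   # Rule 1 precondition: w̄_i = p̄
        raise WriteError
    return (p | v[0], v[1])         # w_{i+1} = (p̄ ⊕ v̄, v_v)

a: Word = (frozenset({"secret"}), True)
b: Word = (frozenset(), False)
c: Word = (frozenset(), False)

p = frozenset() | a[0]              # Eq. (3): branching on a raises p̄ to ā
if a[1]:
    try:
        c = write(p, c, (frozenset(), True))   # fails: c̄ = ⊥ ≠ p̄
    except WriteError:
        pass                        # handled at p̄; c is left untouched
assert c == (frozenset(), False)    # nothing about a leaked into c (or b)
```

The precondition is checked against w alone, so the success or failure of a write reveals nothing about v, matching the first half of the safety argument above.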
Correspondingly, p′ cannot observe either the original or the new values or classes while remaining at p′, so it cannot detect the fact that they have changed.

3.6.2 A special-case rule

Arbitrary writes by processes running with class ⊥ present a special case.

Reclassification Rule 2. A process p running with p = ⊥ may write any value into any word, regardless of their respective classifications.

A process running with class ⊥ embodies no privileged information in its program counter; therefore, regardless of what it writes where, it leaks no privileged information by its actions. Since every write succeeds, the process learns nothing about the values being written or overwritten.

3.6.3 An example: preventing implicit flows

Denning [4] argues that there is an intrinsic problem with dynamic bindings, namely that "...a change in an object's class may remove that object from the purview of a user whose clearance no longer permits access to the object. The class change event can thereby be used to leak information..." Denning cites an example by Fenton [7]:

b := c := false;
if a then c := true;
if c then b := true;

Initially p = b = c = ⊥, and the constants true and false have class ⊥. To illustrate the problem, a is any class stricter than ⊥.

Denning points out that on a system which allows unrestricted writes to a variable as long as they monotonically increase its security class (e.g. the Privacy Restriction Processor [19]), the execution leaks the value of a into b without ever raising the class of b to match that of a. Our rule, however, is more restrictive and therefore safe: under our rule, the attempt to write to c after branching on a fails (see footnote 3), since after the branch p = a and therefore p ≠ c. Regardless of the value of a, c always remains false; no information is leaked from a (see footnote 4).

With respect to a similar piece of code, Myers and Liskov [17] note that while it is easy for a runtime mechanism to detect an improper information flow, if the error causes the program to abort, the program-abort or lack thereof conveys some
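Fenton's example can be traced under Rule 1 with a small, hypothetical interpreter. This is a sketch under our assumptions: the pc save/restore mimics the PUSH RETURN / POP discipline the paper's footnote 4 describes, and silently continuing after a denied write is our simplification of "the error is handled at p".

```python
LOW = frozenset()           # ⊥
A = frozenset({"a"})        # class of a, strictly above ⊥

def run(a_value):
    """Execute Fenton's three-statement example under Rule 1."""
    pc = LOW
    # Each variable is (value, class); a carries class A, b and c carry ⊥.
    var = {"a": (a_value, A), "b": (False, LOW), "c": (False, LOW)}

    def assign(name, value):
        _, cls = var[name]
        if cls != pc:            # Rule 1: class(w) must equal class(pc)
            return               # denied; execution simply continues
        var[name] = (value, pc)  # constants have class ⊥, so new class = pc

    # if a then c := true   (branching on a raises pc to pc ⊕ A)
    saved = pc
    pc = pc | A
    if var["a"][0]:
        assign("c", True)        # fails: class(c) = ⊥ ≠ A = pc
    pc = saved                   # restore pc (PUSH RETURN / POP discipline)

    # if c then b := true
    saved = pc
    pc = pc | var["c"][1]
    if var["c"][0]:
        assign("b", True)
    pc = saved
    return var["b"][0]
```

Running `run(True)` and `run(False)` yields False for b in both cases: because the write to c is denied whenever the branch on a is taken, b never reflects the value of a.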
information. Since our architecture does not allow a program to exit/abort, and error handlers run at the same security level as the process that invokes them, our architecture is not subject to this problem (see footnote 5).

3.7 Authenticated declassification

There will undoubtedly be occasions when a user may legitimately wish to explicitly assign a weaker security class to a particular datum; for instance, a computation based on classified data might yield a result suitable for public dissemination. To support this goal, we add the notion of authenticated principals to our architecture. Each process runs on behalf of a specific principal. A principal has two powers: the power to perform certain declassifications, and the power to replace itself with certain other principals.

3.7.1 Declassification

A principal u may be authorized to perform certain declassifications; it is up to the trusted software implementing the lattice and class structures to define the declassifications allowed by each principal.

For instance, in a linear lattice, a particular principal might be authorized to declassify data from "Secret" to "Classified", but not permitted to declassify "Classified" to "Unclassified."

As another example, in Myers and Liskov's labeling scheme [17, 18], each security class is defined by nested sets of principals; a principal may, for instance, perform a declassification by removing itself from the list of a datum's owners (see footnote 6).

Footnote 3: The error handler invoked by the write failure could report the incident over an output channel O where a → O before allowing execution to continue through the program; this would enable debugging at a later date.

Footnote 4: As a general note, to express any if-statement "correctly" on our architecture (that is, in such a fashion that after the if-statement, p is the same as it was before the if-statement), one must precede each if-statement with a PUSH RETURN operation, and terminate each if-statement with a POP operation.

Footnote 5: It is worth noting that a program could conditionally fail to terminate based on
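The linear-lattice example above can be sketched as a per-principal authorization table. The class names, the table contents, and the shape of the check are illustrative assumptions, not the paper's concrete encoding.

```python
# F_u per principal: the set of (a_ui, b_ui) pairs u may apply, meaning
# "u may relabel data of class b_ui down to class a_ui".
F = {
    "alice": {("Classified", "Secret")},   # Secret -> Classified only
    "bob":   set(),                        # bob may declassify nothing
}

def may_declassify(u, from_cls, to_cls):
    """u may relabel from b_ui to a_ui iff (a_ui, b_ui) is in F_u."""
    return (to_cls, from_cls) in F.get(u, set())

assert may_declassify("alice", "Secret", "Classified")
assert not may_declassify("alice", "Classified", "Unclassified")
assert not may_declassify("bob", "Secret", "Classified")
```

As the paper notes later for the Myers/Liskov scheme, a real implementation would likely test membership in F_u symbolically rather than enumerate the set.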
a secure value. Neither static nor dynamic schemes for verifying secure information flow can prevent this type of information leakage; additional mechanisms and restrictions are required. (Non-termination is just an extreme form of covert information transfer via timing channels.)

Footnote 6: The extended labeling scheme in [18] is not actually a lattice; equivalence classes on the labels form a lattice. In spite of this, their declassification operations actually map in a one-to-one fashion with specific arcs in the lattice.

The act of declassification obviously reveals some information about both the declassified value and the process performing the declassification. Since a process's state could depend on information that a principal is not allowed to reveal, we must constrain when declassification may be performed. We define the following rule for safe declassification:

Reclassification Rule 3. Each principal u is associated with a (possibly empty) set of pairs of classes F_u = {(a_u1, b_u1), (a_u2, b_u2), ...}. Process p running with u's authority may change the class of word w to C if and only if p = w and ∃(a_ui, b_ui) ∈ F_u : w = b_ui and C = a_ui.

p may change its own class under the same criteria, which simplify in this case to: ∃(a_ui, b_ui) ∈ F_u : p = b_ui and C = a_ui.

Since this rule only allows a process to declassify a value when it is authorized to declassify the program counter in the same way, it reveals no information about process state which the principal is not authorized to reveal. This rule is the only rule in our architecture which allows a process p to write a value with a security classification C for which it is not guaranteed that p → C.

Note that while this rule is correct, an implementation might choose not to instantiate the set F_u explicitly, as it might be prohibitively large. For instance, in the Myers/Liskov labeling scheme, while the declassification operation "remove this principal from the list of owners" is simple to define and implement, it expresses a huge number of potential transitions: F_u must contain one pair for every possible list of owners containing the principal.

3.7.2 Principal replacement

A principal may be authorized to replace itself with another principal as the principal upon whose behalf the current process is running (the authority principal). We implement this with a hardware-supported "role" stack, the top element of which is the authority principal.

Using this mechanism, we can implement Role-Based Access Control [9]: a principal may take on any of a variety of "roles" (i.e. other principals), without ever having the access-rights of more than one of those roles at a time.

The role stack is also quite useful for logging exactly which principals performed what actions in what roles
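The role stack can be sketched as follows. The replacement-authorization table is our invented stand-in for however the trusted software decides which replacements a principal may perform; the paper specifies only that the top of the stack is the authority principal.

```python
# Illustrative assumption: which roles each principal may assume.
MAY_REPLACE = {
    "alice": {"auditor"},
    "auditor": set(),
}

class RoleStack:
    """Hardware-supported role stack; the top element is the
    authority principal on whose behalf the process runs."""
    def __init__(self, initial):
        self._stack = [initial]
        self.log = []                  # audit trail of role changes

    @property
    def authority(self):
        return self._stack[-1]         # current authority principal

    def push(self, role):
        if role not in MAY_REPLACE.get(self.authority, set()):
            raise PermissionError(f"{self.authority} may not assume {role}")
        self._stack.append(role)
        self.log.append(("push", role))

    def pop(self):
        role = self._stack.pop()
        self.log.append(("pop", role))
        return role

rs = RoleStack("alice")
rs.push("auditor")
assert rs.authority == "auditor"       # only the auditor role's rights now
rs.pop()
assert rs.authority == "alice"
```

Because only the top of the stack is consulted, the process holds at most one role's access rights at a time, and the log records exactly which principal acted in which role, matching the RBAC-style use described above.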