Verifying Cryptographic Protocols for Electronic Commerce
Information Security Engineer English Vocabulary

Introduction
In today's digital era, information security plays a critical role in safeguarding sensitive data from unauthorized access, alteration, or destruction. As technology continues to advance, the need for highly skilled professionals, such as Information Security Engineers, has become increasingly important. These professionals command a broad English vocabulary used in the field of information security. This article provides an extensive list of English words and phrases commonly used by Information Security Engineers.

1. Basic Terminology

1.1 Confidentiality
Confidentiality refers to the protection of information from unauthorized disclosure. It ensures that only authorized individuals have access to sensitive data.

1.2 Integrity
Integrity refers to maintaining the accuracy, consistency, and trustworthiness of data throughout its lifecycle. It involves preventing unauthorized modification or alteration of information.

1.3 Availability
Availability refers to ensuring that authorized users have access to the information they need when they need it. It involves implementing measures to prevent service interruptions and downtime.

1.4 Authentication
Authentication is the process of verifying the identity of a user, device, or system component. It ensures that only authorized individuals or entities can access the system or data.

1.5 Authorization
Authorization involves granting or denying specific privileges or permissions to users, ensuring they can only perform actions they are allowed to do.

2. Network Security

2.1 Firewall
A firewall is a network security device that monitors and controls incoming and outgoing traffic based on predetermined security rules.
It acts as a barrier between internal and external networks, protecting against unauthorized access.

2.2 Intrusion Detection System (IDS)
An Intrusion Detection System is a software or hardware-based security solution that monitors network traffic for suspicious activities or patterns that may indicate an intrusion attempt.

2.3 Virtual Private Network (VPN)
A Virtual Private Network enables secure communication over a public network by creating an encrypted tunnel between the user's device and the destination network. It protects data from being intercepted by unauthorized parties.

2.4 Secure Socket Layer/Transport Layer Security (SSL/TLS)
SSL/TLS is a cryptographic protocol that provides secure communication over the internet. It ensures the confidentiality and integrity of data transmitted between a client and a server.

3. Malware and Threats

3.1 Virus
A computer virus is a type of malicious software that can replicate itself and infect other computer systems. It can cause damage to data, software, and hardware.

3.2 Worm
Worms are self-replicating computer programs that can spread across networks without human intervention. They often exploit vulnerabilities in operating systems or applications to infect other systems.

3.3 Trojan Horse
A Trojan Horse is a piece of software that appears harmless or useful but contains malicious code. When executed, it can provide unauthorized access to a user's computer system.

3.4 Phishing
Phishing is a fraudulent technique used to deceive individuals into providing sensitive information, such as usernames, passwords, or credit card details. It often involves impersonating trusted entities via email or websites.

4. Cryptography

4.1 Encryption
Encryption is the process of converting plain text into cipher text using an encryption algorithm.
It ensures confidentiality by making the original data unreadable without a decryption key.

4.2 Decryption
Decryption is the process of converting cipher text back into plain text using a decryption algorithm and the appropriate decryption key.

4.3 Key Management
Key management involves the generation, distribution, storage, and revocation of encryption keys. It ensures the secure use of encryption algorithms.

5. Incident Response

5.1 Incident
An incident refers to any event that could potentially harm an organization's systems, data, or users. It includes security breaches, network outages, and unauthorized access.

5.2 Forensics
Digital forensics involves collecting, analyzing, and preserving digital evidence related to cybersecurity incidents. It helps identify the cause, scope, and impact of an incident.

5.3 Remediation
Remediation involves taking actions to mitigate the impact of a security incident and prevent future occurrences. It includes removing malware, patching vulnerabilities, and implementing additional security controls.

Conclusion
For Information Security Engineers, a strong command of English vocabulary related to information security is crucial for effective communication and understanding. This article has provided an extensive list of terms commonly used in the field, ranging from basic terminology to network security, malware, cryptography, and incident response. By mastering these words and phrases, professionals in the field can enhance their knowledge and contribute to the protection of sensitive information in today's ever-evolving digital landscape.
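The encryption/decryption round trip described in sections 4.1 and 4.2 can be sketched with a toy stream cipher. This is purely illustrative: the keystream construction and function names below are invented for the example, and real systems should use a vetted authenticated cipher (e.g. AES-GCM or ChaCha20-Poly1305) rather than anything hand-rolled.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream by hashing key || nonce || counter (toy construction).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Prepend a fresh random nonce so the same plaintext never yields the same ciphertext.
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ s for p, s in zip(plaintext, stream))

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # Split off the nonce, regenerate the same keystream, and XOR to recover the plaintext.
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = keystream(key, nonce, len(body))
    return bytes(c ^ s for c, s in zip(body, stream))

key = secrets.token_bytes(32)   # the decryption key of 4.2; must be kept secret (4.3)
msg = b"confidential payload"
ct = encrypt(key, msg)
assert decrypt(key, ct) == msg  # round trip restores the original plain text
```

Without the key, the ciphertext is unreadable, which is the confidentiality property of 4.1; losing or leaking the key is why key management (4.3) matters as much as the algorithm.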
PART 11 Electronic Records; Electronic Signatures
Subpart A--General Provisions
Sec. 11.1 Scope.
(a) The regulations in this part set forth the criteria under which the agency considers electronic records, electronic signatures, and handwritten signatures executed to electronic records to be trustworthy, reliable, and generally equivalent to paper records and handwritten signatures executed on paper.
(b) This part applies to records in electronic form that are created, modified, maintained, archived, retrieved, or transmitted, under any records requirements set forth in agency regulations. This part also applies to electronic records submitted to the agency under requirements of the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act, even if such records are not specifically identified in agency regulations. However, this part does not apply to paper records that are, or have been, transmitted by electronic means.
FDA 21 CFR Part 11 (translated): 21 CFR Part 11 is the FDA regulation governing electronic records and electronic signatures. It sets out detailed requirements and specifications for the many electronic records and electronic signatures used by pharmaceutical manufacturers and medical device companies.
Subpart A--General Provisions
11.1 Scope.
(a) The regulations in this part set forth the criteria under which the agency considers electronic records, electronic signatures, and handwritten signatures executed to electronic records to be trustworthy, reliable, and generally equivalent to paper records and handwritten signatures executed on paper.
(b) This part applies to records in electronic form that are created, modified, maintained, archived, retrieved, or transmitted, under any records requirements set forth in agency regulations. This part also applies to electronic records submitted to the agency under requirements of the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act, even if such records are not specifically identified in agency regulations. However, this part does not apply to paper records that are, or have been, transmitted by electronic means.
(c) Where electronic signatures and their associated electronic records meet the requirements of this part, the agency will consider the electronic signatures to be equivalent to full handwritten signatures, initials, and other general signings as required by agency regulations, unless specifically excepted by regulation(s) effective on or after August 20, 1997.
ATTACKS ON SECURITY PROTOCOLS USING AVISPA

Vaishakhi S
M.Tech Computer Engineering, KSV University, Near Kh-5, Sector 15, Gandhinagar, Gujarat
Prof. Radhika M
Dept. of Computer Engineering, KSV University, Near Kh-5, Sector 15, Gandhinagar, Gujarat

Abstract
Use of the Internet grows day by day. Both technical and non-technical people use the Internet frequently, but only technical users understand the machinery working behind it. Different protocols underpin various aspects of the Internet, such as security, accessibility, and availability. Among these, security is the most important for every Internet user. Many security protocols have been developed for networking, and many tools exist for verifying such protocols; all of these protocols should be analyzed with a verification tool. AVISPA is a protocol analysis tool for the automated validation of Internet security protocols and applications. In this paper we discuss the AVISPA library, which describes the security properties of protocols, their classification, the attacks found, and the actual HLPSL specifications of security protocols.

Keywords: HLPSL, OFMC, SATMC, TA4SP, masquerade, DoS

I. INTRODUCTION
As usage of the Internet increases, its security, accessibility, and availability must increase as well. All users are concerned about confidentiality and security when sending data over the Internet. Many security protocols have been designed to improve security, but are these protocols technically verified? Do they work correctly? To answer these questions, verification tools have been developed. Tools such as SPIN, Isabelle, FDR, Scyther, and AVISPA support the verification and validation of Internet security protocols; among these, the AVISPA research tool is comparatively easy to use [1]. The AVISPA tool provides a dedicated specification language called HLPSL (High Level Protocol Specification Language).
The AVISPA tool includes a library containing different types of security protocols and their specifications. The library contains around 79 security protocols from 33 groups [1], constituting 384 security problems. Various standardization committees, such as the IETF (Internet Engineering Task Force), W3C (World Wide Web Consortium), and IEEE (Institute of Electrical and Electronics Engineers), work with this tool. The AVISPA library is a collection of security specifications characterized as IETF protocols, non-IETF protocols, and e-business protocols. Each protocol is described in Alice-Bob notation. The library also describes the security properties, their classification, and the attacks found [2], and it provides a short description of each included protocol. The AVISPA tool works through four back ends:

(1) OFMC (On-the-Fly Model Checker) performs protocol falsification and bounded verification. It implements symbolic techniques and supports the algebraic properties of cryptographic operators.
(2) CL-AtSe (Constraint-Logic-based Attack Searcher) applies redundancy elimination techniques. It supports type flaw detection.
(3) SATMC (SAT-based Model Checker) builds a propositional formula encoding a bounded unrolling of the transition relation given in the Intermediate Format.
(4) TA4SP (Tree-Automata-based Protocol Analyser) approximates the intruder knowledge by a regular tree language. TA4SP can show whether a protocol is flawed or whether it is safe for any number of sessions [4].

While analyzing the security protocols, we found several security attacks; all of them are discussed below.

II. HLPSL SYNTAX

PROTOCOL Otway_Rees;
Identifiers
A, B, S : User;
Kas, Kbs, Kab : Symmetric_Key;
M, Na, Nb, X : Number;
Knowledge
A : B,S,Kas;
B : S,Kbs;
S : A,B,Kas,Kbs;
Messages
1. A -> B : M,A,B,{Na,M,A,B}Kas
2. B -> S : M,A,B,{Na,M,A,B}Kas,{Nb,M,A,B}Kbs
3. S -> B : M,{Na,Kab}Kas,{Nb,Kab}Kbs
4. B -> A : M,{Na,Kab}Kas
5.
A -> B : {X}Kab
Session_instances
[ A:a; B:b; S:s; Kas:kas; Kbs:kbs ];
Intruder Divert, Impersonate;
Intruder_knowledge a;
Goal secrecy_of X;

A. Basic Roles [2]
It is easy to translate a protocol into HLPSL if it is written in Alice-Bob notation. The A-B notation for a simple protocol is as follows:

A -> S : {Kab}_Kas
S -> B : {Kab}_Kbs

In this protocol, A wants to set up a secure session with B by exchanging a new session key with the help of a trusted server S. Here Kas is the key shared between A and S. A starts by generating a new session key Kab intended for B; she encrypts this key with Kas and sends it to S. S then decrypts the message and re-encrypts Kab with Kbs. After this exchange, A and B share the new session key and can use it to communicate with one another.

B. Transitions [2]
The transition section contains a set of transitions, each representing the receipt of a message and the sending of a reply. A simple transition looks as follows:

step1. State = 0 /\ RCV({Kab'}_Kas) =|> State' := 2 /\ SND({Kab'}_Kbs)

Here step1 is the name of the transition. It specifies that if the value of State is equal to zero and a message containing some value Kab' encrypted with Kas is received on channel RCV, then the transition fires: it sets the new value of State to 2 and sends the same value Kab' on channel SND, this time encrypted with Kbs.

C. Composed Roles [2]
role session(A,B,S : agent, Kas, Kbs : symmetric_key) def=
local SA, RA, SB, RB, SS, RS : channel (dy)
composition
alice(A,B,S,Kas,SA,RA) /\ bob(B,A,S,Kbs,SB,RB) /\ server(S,A,B,Kas,Kbs,SS,RS)
end role

Composed roles contain one or more basic roles that execute together in parallel; they have no transition section. The /\ operator indicates that the roles should execute in parallel [4]. Here the type declaration channel (dy) stands for the Dolev-Yao intruder model [2]. The intruder has full control over the network, so all messages sent by agents go to the intruder.
All agents can send and receive on whichever channel they want; the intended connection between particular channel variables is irrelevant, because the intruder is the network. We create HLPSL code for each security protocol using the syntax above and verify it with the AVISPA tool [2]. Some protocols were found to have attacks and some were not; the verified security protocols are listed below (figure 1).

III. SECURITY ATTACKS
As the table shows, Internet security protocols may suffer from several types of attacks: type flaw, replay, man-in-the-middle, masquerade, DoS, and others. In a DoS attack, the attacker targets your computer, its network connection, or the sites you are trying to use, and may prevent you from accessing email, online accounts, websites, etc. [6]. A type flaw attack is one in which a principal accepts a message component of one type as a message of another type [7]. A replay attack occurs when an attacker copies a stream of messages between two parties and replays the stream to one or more of the parties. Masquerade is an attack in which the attacker pretends to be an authorized user of a system in order to gain access to its private information. Man-in-the-middle is an attack where an adversary gets between the sender and receiver of information and sniffs any information being sent [6]; it is sometimes known as a bucket brigade attack. Eavesdropping is the act of secretly listening to the private conversation of others without their consent; it is a network-layer attack that can be carried out using tools called network sniffers [7]. These attacks can be removed by making changes to the sessions and transactions.

IV. CONCLUSION
We have studied the protocols using the AVISPA verification tool and found different types of attacks on different Internet security protocols.
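One of the session-level changes that counters a replay attack is a freshness check: the receiver remembers recently seen nonces and rejects any message whose nonce reappears. The sketch below is a general illustration of that defense in Python, not part of AVISPA or the surveyed protocols; the ReplayGuard class, its window, and its method names are invented for the example.

```python
import secrets
import time

class ReplayGuard:
    """Reject messages whose nonce has already been seen within the freshness window."""

    def __init__(self, window_seconds=30.0):
        self.window = window_seconds
        self.seen = {}  # nonce -> timestamp when first accepted

    def fresh_nonce(self):
        # A sender attaches a fresh unpredictable nonce to each message.
        return secrets.token_hex(16)

    def accept(self, nonce, now=None):
        now = time.monotonic() if now is None else now
        # Drop expired entries so the cache stays bounded; messages older than
        # the window are assumed to be rejected by a separate timestamp check.
        self.seen = {n: t for n, t in self.seen.items() if now - t < self.window}
        if nonce in self.seen:
            return False  # replayed copy: same nonce seen again inside the window
        self.seen[nonce] = now
        return True

guard = ReplayGuard()
n = guard.fresh_nonce()
assert guard.accept(n, now=0.0) is True    # first delivery is accepted
assert guard.accept(n, now=1.0) is False   # the replayed stream is rejected
```

The same idea appears in protocol terms as the nonces Na and Nb in the Otway-Rees specification above: a principal only accepts a reply that contains the fresh value it generated for that session.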
Different goals are specified for different protocols, and the attacks interfere with achieving those goals; we have to remove those attacks to make the protocols work properly.

Figure 1: Attacks on security protocols

V. FUTURE WORK
In this paper we have described the AVISPA library for Internet security protocols, surveyed the protocols, and categorized them into protocols with attacks and protocols without attacks. In the next stage we will apply modifications to the HLPSL code of a security protocol that has a man-in-the-middle attack, and we will try our best to remove that particular attack.

VI. REFERENCES
[1] Information Society Technologies, Automated Validation of Internet Security Protocols and Applications (version 1.1), user manual by the AVISPA team, IST-2001-39252.
[2] Information Society Technologies, High Level Protocol Specification Language Tutorial: A Beginner's Guide to Modelling and Analysing Internet Security Protocols, IST-2001-39252.
[3] Laura Takkinen, Helsinki University of Technology, TKK T-110.7290 Research Seminar on Network Security.
[4] Daojing He, Chun Chen, Maode Ma, Sammy Chan, International Journal of Communication Systems, DOI: 10.1002/dac.1355.
[5] Luca Viganò, Information Security Group, Electronic Notes in Theoretical Computer Science 155 (2006) 61-86.
[6] U. Oktay and O. K. Sahingoz, 6th International Information Security and Cryptology Conference, Turkey.
[7] James Heather, Gavin Lowe, Steve Schneider, Programming Research Group, Oxford University.
ASVS Item # / Requirement
V2.1: Verify all pages and resources require authentication except those specifically intended to be public (principle of complete mediation).
V2.2: Verify all password fields do not echo the user's password when it is entered.
V2.4: Verify all authentication controls are enforced on the server side.
V2.5: Verify all authentication controls (including libraries that call external authentication services) have a centralized implementation.
V2.6: Verify all authentication controls fail securely to ensure attackers cannot log in.
V2.7: Verify password entry fields allow or encourage the use of passphrases, do not prevent long passphrases or highly complex passwords from being entered, and provide a sufficient minimum strength to protect against the use of commonly chosen passwords.
V2.8: Verify all account identity authentication functions (such as registration, update profile, forgot username, forgot password, disabled / lost token, help desk or IVR) that might regain access to the account are at least as resistant to attack as the primary authentication mechanism.
V2.9: Verify users can safely change their credentials using a mechanism that is at least as resistant to attack as the primary authentication mechanism.
V2.12: Verify that all authentication decisions are logged.
This should include requests with missing required information needed for security investigations.
V2.13: Verify that account passwords are salted using a salt that is unique to that account (e.g., internal user ID, account creation) and use bcrypt, scrypt or PBKDF2 before storing the password.
V2.16: Verify that credentials, and all other identity information handled by the application(s), do not traverse unencrypted or weakly encrypted links.
V2.17: Verify that the forgotten password function and other recovery paths do not reveal the current password and that the new password is not sent in clear text to the user.
V2.18: Verify that username enumeration is not possible via login, password reset, or forgot account functionality.
V2.19: Verify there are no default passwords in use for the application framework or any components used by the application (such as "admin/password").
V2.20: Verify that a resource governor is in place to protect against vertical (a single account tested against all possible passwords) and horizontal brute forcing (all accounts tested with the same password, e.g. "Password1"). A correct credential entry should incur no delay. Both these governor mechanisms should be active simultaneously to protect against diagonal and distributed attacks.
V2.21: Verify that all authentication credentials for accessing services external to the application are encrypted and stored in a protected location (not in source code).
V2.22: Verify that forgotten password and other recovery paths send a link including a time-limited activation token rather than the password itself. Additional authentication based on soft-tokens (e.g. SMS token, native mobile applications, etc.) can be required as well before the link is sent over.
V2.23: Verify that forgot password functionality does not lock or otherwise disable the account until after the user has successfully changed their password.
This is to prevent valid users from being locked out.
V2.24: Verify that there are no shared knowledge questions/answers (so-called "secret" questions and answers).
V2.25: Verify that the system can be configured to disallow the use of a configurable number of previous passwords.
V2.26: Verify re-authentication, step-up or adaptive authentication, SMS or other two-factor authentication, or transaction signing is required before any application-specific sensitive operations are permitted as per the risk profile of the application.
V3.1: Verify that the framework's default session management control implementation is used by the application.
V3.2: Verify that sessions are invalidated when the user logs out.
V3.3: Verify that sessions timeout after a specified period of inactivity.
V3.4: Verify that sessions timeout after an administratively-configurable maximum time period regardless of activity (an absolute timeout).
V3.5: Verify that all pages that require authentication to access them have logout links.
V3.6: Verify that the session id is never disclosed other than in cookie headers; particularly in URLs, error messages, or logs. This includes verifying that the application does not support URL rewriting of session cookies.
V3.7: Verify that the session id is changed on login to prevent session fixation.
V3.8: Verify that the session id is changed upon re-authentication.
V3.10: Verify that only session ids generated by the application framework are recognized as valid by the application.
V3.11: Verify that authenticated session tokens are sufficiently long and random to withstand session guessing attacks.
V3.12: Verify that authenticated session tokens using cookies have their path set to an appropriately restrictive value for that site.
The domain cookie attribute restriction should not be set unless for a business requirement, such as single sign-on.
V3.14: Verify that authenticated session tokens using cookies sent via HTTP are protected by the use of "HttpOnly".
V3.15: Verify that authenticated session tokens using cookies are protected with the "secure" attribute and a strict transport security header (such as Strict-Transport-Security: max-age=60000; includeSubDomains) is present.
V3.16: Verify that the application does not permit duplicate concurrent user sessions originating from different machines.
V4.1: Verify that users can only access secured functions or services for which they possess specific authorization.
V4.2: Verify that users can only access secured URLs for which they possess specific authorization.
V4.3: Verify that users can only access secured data files for which they possess specific authorization.
V4.4: Verify that direct object references are protected, such that only authorized objects or data are accessible to each user (for example, protect against direct object reference tampering).
V4.5: Verify that directory browsing is disabled unless deliberately desired.
V4.8: Verify that access controls fail securely.
V4.9: Verify that the same access control rules implied by the presentation layer are enforced on the server side for that user role, such that controls and parameters cannot be re-enabled or re-added from higher privilege users.
V4.10: Verify that all user and data attributes and policy information used by access controls cannot be manipulated by end users unless specifically authorized.
V4.11: Verify that all access controls are enforced on the server side.
V4.12: Verify that there is a centralized mechanism (including libraries that call external authorization services) for protecting access to each type of protected resource.
V4.14: Verify that all access control decisions are logged and all failed decisions are logged.
V4.16: Verify that the application or framework generates strong random
anti-CSRF tokens unique to the user as part of all high value transactions or accessing sensitive data, and that the application verifies the presence of this token with the proper value for the current user when processing these requests.
V4.17: Aggregate access control protection: verify the system can protect against aggregate or continuous access of secured functions, resources, or data. For example, possibly by the use of a resource governor to limit the number of edits per hour or to prevent the entire database from being scraped by an individual user.
V5.1: Verify that the runtime environment is not susceptible to buffer overflows, or that security controls prevent buffer overflows.
V5.3: Verify that all input validation failures result in input rejection.
V5.4: Verify that a character set, such as UTF-8, is specified for all sources of input.
V5.5: Verify that all input validation or encoding routines are performed and enforced on the server side.
V5.6: Verify that a single input validation control is used by the application for each type of data that is accepted.
V5.7: Verify that all input validation failures are logged.
V5.8: Verify that all input data is canonicalized for all downstream decoders or interpreters prior to validation.
V5.10: Verify that the runtime environment is not susceptible to SQL Injection, or that security controls prevent SQL Injection.
V5.11: Verify that the runtime environment is not susceptible to LDAP Injection, or that security controls prevent LDAP Injection.
V5.12: Verify that the runtime environment is not susceptible to OS Command Injection, or that security controls prevent OS Command Injection.
V5.13: Verify that the runtime environment is not susceptible to XML External Entity attacks, or that security controls prevent XML External Entity attacks.
V5.14: Verify that the runtime environment is not susceptible to XML Injections, or that security controls prevent XML Injections.
V5.16: Verify that all untrusted data that are output to HTML (including HTML
elements, HTML attributes, JavaScript data values, CSS blocks, and URI attributes) are properly escaped for the applicable context.
V5.17: If the application framework allows automatic mass parameter assignment (also called automatic variable binding) from the inbound request to a model, verify that security sensitive fields such as "accountBalance", "role" or "password" are protected from malicious automatic binding.
V5.18: Verify that the application has defenses against HTTP parameter pollution attacks, particularly if the application framework makes no distinction about the source of request parameters (GET, POST, cookies, headers, environment, etc.).
V5.19: Verify that for each type of output encoding/escaping performed by the application, there is a single security control for that type of output for the intended destination.
V7.1: Verify that all cryptographic functions used to protect secrets from the application user are implemented server side.
V7.2: Verify that all cryptographic modules fail securely.
V7.3: Verify that access to any master secret(s) is protected from unauthorized access (a master secret is an application credential stored as plaintext on disk that is used to protect access to security configuration information).
V7.6: Verify that all random numbers, random file names, random GUIDs, and random strings are generated using the cryptographic module's approved random number generator when these random values are intended to be unguessable by an attacker.
V7.7: Verify that cryptographic modules used by the application have been validated against FIPS 140-2 or an equivalent standard.
V7.8: Verify that cryptographic modules operate in their approved mode according to their published security policies.
V7.9: Verify that there is an explicit policy for how cryptographic keys are managed (e.g., generated, distributed, revoked, expired).
Verify that this policy is properly enforced.
V8.1: Verify that the application does not output error messages or stack traces containing sensitive data that could assist an attacker, including session id and personal information.
V8.2: Verify that all error handling is performed on trusted devices.
V8.3: Verify that all logging controls are implemented on the server.
V8.4: Verify that error handling logic in security controls denies access by default.
V8.5: Verify security logging controls provide the ability to log both success and failure events that are identified as security-relevant.
V8.6: Verify that each log event includes: a timestamp from a reliable source, severity level of the event, an indication that this is a security relevant event (if mixed with other logs), the identity of the user that caused the event (if there is a user associated with the event), the source IP address of the request associated with the event, whether the event succeeded or failed, and a description of the event.
V8.7: Verify that all events that include untrusted data will not execute as code in the intended log viewing software.
V8.8: Verify that security logs are protected from unauthorized access and modification.
V8.9: Verify that there is a single application-level logging implementation that is used by the software.
V8.10: Verify that the application does not log application-specific sensitive data that could assist an attacker, including user's session identifiers and personal or sensitive information.
The length and existence of sensitive data can be logged.
V8.11: Verify that a log analysis tool is available which allows the analyst to search for log events based on combinations of search criteria across all fields in the log record format supported by this system.
V8.13: Verify that all non-printable symbols and field separators are properly encoded in log entries, to prevent log injection.
V8.14: Verify that log fields from trusted and untrusted sources are distinguishable in log entries.
V8.15: Verify that logging is performed before executing the transaction. If logging was unsuccessful (e.g. disk full, insufficient permissions) the application fails safe. This is for when integrity and non-repudiation are a must.
V9.1: Verify that all forms containing sensitive information have disabled client side caching, including autocomplete features.
V9.2: Verify that the list of sensitive data processed by this application is identified, and that there is an explicit policy for how access to this data must be controlled, and when this data must be encrypted (both at rest and in transit).
Verify that this policy is properly enforced.
V9.3: Verify that all sensitive data is sent to the server in the HTTP message body (i.e., URL parameters are never used to send sensitive data).
V9.4: Verify that all cached or temporary copies of sensitive data sent to the client are protected from unauthorized access or purged/invalidated after the authorized user accesses the sensitive data (e.g., the proper no-cache and no-store Cache-Control headers are set).
V9.5: Verify that all cached or temporary copies of sensitive data stored on the server are protected from unauthorized access or purged/invalidated after the authorized user accesses the sensitive data.
V9.6: Verify that there is a method to remove each type of sensitive data from the application at the end of its required retention period.
V9.7: Verify the application minimizes the number of parameters sent to untrusted systems, such as hidden fields, Ajax variables, cookies and header values.
V9.8: Verify the application has the ability to detect and alert on abnormal numbers of requests for information or processing high value transactions for that user role, such as screen scraping, automated use of web service extraction, or data loss prevention.
For example, the average user should not be able to access more than 5 records per hour or 30 records per day, or add 10 friends to a social network per minute.
V10.1: Verify that a path can be built from a trusted CA to each Transport Layer Security (TLS) server certificate, and that each server certificate is valid.
V10.2: Verify that failed TLS connections do not fall back to an insecure HTTP connection.
V10.3: Verify that TLS is used for all connections (including both external and backend connections) that are authenticated or that involve sensitive data or functions.
V10.4: Verify that backend TLS connection failures are logged.
V10.5: Verify that certificate paths are built and verified for all client certificates using configured trust anchors and revocation information.
V10.6: Verify that all connections to external systems that involve sensitive information or functions are authenticated.
V10.7: Verify that all connections to external systems that involve sensitive information or functions use an account that has been set up to have the minimum privileges necessary for the application to function properly.
V10.8: Verify that there is a single standard TLS implementation that is used by the application that is configured to operate in an approved mode of operation (see /groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf).
V10.9: Verify that specific character encodings are defined for all connections (e.g., UTF-8).
V11.2: Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST, and unused methods are explicitly blocked.
V11.3: Verify that every HTTP response contains a content type header specifying a safe character set (e.g., UTF-8).
V11.6: Verify that HTTP headers in both requests and responses contain only printable ASCII characters.
V11.8: Verify that HTTP headers and / or other mechanisms for older browsers have been included to protect against clickjacking attacks.
V11.9: Verify that HTTP headers added by a frontend (such as X-Real-IP), and used
by the application, cannot be spoofed by the end user.
V11.10 Verify that the HTTP header X-Frame-Options is in use for sites where content should not be viewed in a 3rd-party frame. A common middle ground is to send SAMEORIGIN, meaning only websites of the same origin may frame it.
V11.12 Verify that the HTTP headers do not expose detailed version information of system components.
V13.1 Verify that no malicious code is in any code that was either developed or modified in order to create the application.
V13.2 Verify that the integrity of interpreted code, libraries, executables, and configuration files is verified using checksums or hashes.
V13.3 Verify that all code implementing or using authentication controls is not affected by any malicious code.
V13.4 Verify that all code implementing or using session management controls is not affected by any malicious code.
V13.5 Verify that all code implementing or using access controls is not affected by any malicious code.
V13.6 Verify that all input validation controls are not affected by any malicious code.
V13.7 Verify that all code implementing or using output validation controls is not affected by any malicious code.
V13.8 Verify that all code supporting or using a cryptographic module is not affected by any malicious code.
V13.9 Verify that all code implementing or using error handling and logging controls is not affected by any malicious code.
V13.10 Verify that all malicious activity is adequately sandboxed.
V13.11 Verify that sensitive data is rapidly sanitized from memory as soon as it is no longer needed and handled in accordance with functions and techniques supported by the framework/library/operating system.
V15.1 Verify the application processes or verifies all high-value business logic flows in a trusted environment, such as on a protected and monitored server.
V15.2 Verify the application does not allow spoofed high-value transactions, such as allowing Attacker User A to process a transaction as Victim User B by tampering with or replaying session, transaction state,
transaction or user IDs.
V15.3 Verify the application does not allow high-value business logic parameters to be tampered with, such as (but not limited to): price, interest, discounts, PII, balances, stock IDs, etc.
V15.4 Verify the application has defensive measures to protect against repudiation attacks, such as verifiable and protected transaction logs, audit trails or system logs, and, in highest-value systems, real-time monitoring of user activities and transactions for anomalies.
V15.5 Verify the application protects against information disclosure attacks, such as direct object reference, tampering, session brute force or other attacks.
V15.6 Verify the application has sufficient detection and governor controls to protect against brute force (such as continuously using a particular function) or denial of service attacks.
V15.7 Verify the application has sufficient access controls to prevent elevation of privilege attacks, such as anonymous users accessing secured data or secured functions, or users accessing each other's details or using privileged functions.
V15.8 Verify the application processes business logic flows in sequential step order, with all steps being processed in realistic human time, and does not process steps out of order, skip steps, process steps from another user, or accept too quickly submitted transactions.
V15.9 Verify the application has additional authorization (such as step-up or adaptive authentication) for lower-value systems, and/or segregation of duties for high-value applications, to enforce anti-fraud controls as per the risk of the application and past fraud.
V15.10 Verify the application has business limits and enforces them in a trusted location (such as on a protected server) on a per-user, per-day basis, with configurable alerting and automated reactions to automated or unusual attacks.
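The business-limit requirement V15.10 above can be enforced with a per-user, per-day counter kept in a trusted location. A minimal sketch, assuming an illustrative daily limit and in-memory storage (a real system would persist the counters and raise an alert on each blocked attempt):

```python
import datetime
from collections import defaultdict

class BusinessLimiter:
    """Tracks per-user, per-day action counts and blocks overruns."""

    def __init__(self, daily_limit):
        self.daily_limit = daily_limit
        self.counts = defaultdict(int)  # (user, ISO date) -> count so far

    def allow(self, user, when=None):
        """Return True and record the action, or False if over the limit."""
        day = (when or datetime.date.today()).isoformat()
        key = (user, day)
        if self.counts[key] >= self.daily_limit:
            return False  # caller should also alert/log the blocked attempt
        self.counts[key] += 1
        return True

limiter = BusinessLimiter(daily_limit=3)
today = datetime.date(2024, 1, 1)
results = [limiter.allow("alice", today) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

The same structure extends to hourly windows or monetary totals by changing the key and the counted quantity.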
Examples include (but are not limited to): ensuring new SIM users don't exceed $10 per day for a new phone account, preventing a forum from adding more than 100 new users per day or blocking posts and private messages until the account has been verified, ensuring a health system does not allow a single doctor to access more patient records than they can reasonably treat in a day, or preventing a small business finance system from making more than 20 invoice payments or $1000 per day across all users. In all cases, the business limits and totals should be reasonable for the business concerned. The only unreasonable outcome is if there are no business limits, alerting or enforcement.
V16.1 Verify that URL redirects and forwards do not include unvalidated data.
V16.2 Verify that file names and path data obtained from untrusted sources are canonicalized to eliminate path traversal attacks.
V16.3 Verify that files obtained from untrusted sources are scanned by antivirus scanners to prevent upload of known malicious content.
V16.4 Verify that parameters obtained from untrusted sources are not used in manipulating filenames, pathnames or any file system object without first being canonicalized and input validated to prevent local file inclusion attacks.
V16.5 Verify that parameters obtained from untrusted sources are canonicalized, input validated, and output encoded to prevent remote file inclusion attacks, particularly where input could be executed, such as header, source, or template inclusion.
V16.6 Verify that remote IFRAMEs and HTML5 cross-domain resource sharing do not allow inclusion of arbitrary remote content.
V16.7 Verify that files obtained from untrusted sources are stored outside the webroot.
V16.8 Verify that the web or application server is configured by default to deny access to remote resources or systems outside the web or application server.
V16.9 Verify that the application code does not execute uploaded data obtained from untrusted sources.
V16.10 Verify that Flash, Silverlight or other rich internet application (RIA)
cross-domain resource sharing configuration is configured to prevent unauthenticated or unauthorized remote access.
V17.1 Verify that the client validates SSL certificates.
V17.2 Verify that unique device ID (UDID) values are not used as security controls.
V17.3 Verify that the mobile app does not store sensitive data on shared resources on the device (e.g., SD card or shared folders).
V17.4 Verify that sensitive data is not stored in a SQLite database on the device.
V17.5 Verify that secret keys or passwords are not hard-coded in the executable.
V17.6 Verify that the mobile app prevents leaking of sensitive data via the auto-snapshot feature of iOS.
V17.7 Verify that the app cannot be run on a jailbroken or rooted device.
V17.8 Verify that the session timeout is of a reasonable value.
V17.9 Verify the permissions being requested as well as the resources that the app is authorized to access (i.e., AndroidManifest.xml, iOS Entitlements).
V17.10 Verify that crash logs do not contain sensitive data.
V17.11 Verify that the application binary has been obfuscated.
V17.12 Verify that all test data has been removed from the app container (.ipa, .apk, .bar).
V17.13 Verify that the application does not log sensitive data to the system log or filesystem.
V17.14 Verify that the application does not enable autocomplete for sensitive text input fields, such as passwords, personal information or credit cards.
V17.15 Verify that the mobile app implements certificate pinning to prevent the proxying of app traffic.
V17.16 Verify that no misconfigurations are present in the configuration files (debugging flags set, world readable/writable permissions) and that, by default, configuration settings are set to their safest/most secure values.
V17.17 Verify that any 3rd-party libraries in use are up to date and contain no known vulnerabilities.
V17.18 Verify that web data, such as HTTPS traffic, is not cached.
V17.19 Verify that the query string is not used for sensitive data.
Instead, a POST request via SSL should be used with a CSRF token.
V17.20 Verify that, if applicable, any personal account numbers are truncated prior to storing on the device.
V17.21 Verify that the application makes use of Address Space Layout Randomization (ASLR).
V17.22 Verify that data logged via the keyboard (iOS) does not contain credentials, financial information or other sensitive data.
V17.23 If an Android app, verify that the app does not create files with permissions of MODE_WORLD_READABLE or MODE_WORLD_WRITABLE.
V17.24 Verify that sensitive data is stored in a cryptographically secure manner (even when stored in the iOS keychain).
V17.25 Verify that anti-debugging and reverse engineering mechanisms are implemented in the app.
V17.26 Verify that the app does not export sensitive activities, intents, content providers etc. on Android.
V17.27 Verify that mutable structures have been used for sensitive strings such as account numbers and are overwritten when not used (to mitigate damage from memory analysis attacks).
V17.28 Verify that any exposed intents, content providers and broadcast receivers perform full data validation on input (Android).
Requirement | Level 1 | Level 2: Verify that all pages and resources require authentication except those specifically intended to be public (complete mediation principle). Y | Y
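Certificate pinning, as required by V17.15 above, is commonly implemented by comparing a SHA-256 fingerprint of the server's DER-encoded certificate against values shipped with the app. A minimal sketch; the certificate bytes and pin set below are placeholders, and in a real client the DER bytes would come from `ssl.SSLSocket.getpeercert(binary_form=True)`:

```python
import hashlib

def fingerprint(der_cert_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert_bytes).hexdigest()

def is_pinned(der_cert_bytes, pinned_fingerprints):
    """Accept the TLS connection only if the presented certificate
    matches one of the fingerprints shipped with the app."""
    return fingerprint(der_cert_bytes) in pinned_fingerprints

# Placeholder certificate bytes for illustration only.
cert = b"fake-der-bytes-for-illustration"
pins = {fingerprint(cert)}
print(is_pinned(cert, pins))             # True: matches the shipped pin
print(is_pinned(b"proxy-cert", pins))    # False: intercepting proxy rejected
```

Pinning the public-key (SPKI) hash instead of the full certificate is a common variant, as it survives certificate renewal with the same key.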
CA Certificate Identity Authentication Process
A digital certificate, also known as a CA certificate, is a crucial component of the process of verifying the identity of users online. It is essentially an electronic document issued by a trusted third party, the certificate authority (CA), which vouches for the authenticity of the credentials presented by an individual or an organization. This process plays a vital role in ensuring the security and integrity of online transactions, communications, and data exchanges.
The process of CA certificate identity verification typically involves several steps. First, the individual or organization requesting the certificate must generate a pair of cryptographic keys, consisting of a public key for encryption and a private key for decryption. These keys are used to create a digital signature, which serves as a unique identifier for the entity. The certificate authority then verifies the identity of the requester through various means, such as validating official documents, conducting background checks, and verifying the authenticity of the cryptographic keys.
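The sign-with-private-key / verify-with-public-key relationship described above can be illustrated with deliberately tiny textbook RSA numbers. This is insecure by design and for illustration only; real CAs use vetted cryptographic libraries and key sizes of 2048 bits or more:

```python
import hashlib

# Textbook RSA with toy primes -- an illustration of the concept only.
p, q = 61, 53
n = p * q                 # public modulus (part of the public key)
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

def sign(message: bytes) -> int:
    """Hash the message, then apply the private key to the digest."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Recompute the digest and compare with the public-key operation."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"certificate signing request"
sig = sign(msg)
print(verify(msg, sig))               # True: genuine signature accepted
print(verify(msg, (sig + 1) % n))     # False: forged signature rejected
```

Because the RSA map is a permutation of the residues modulo n, only the holder of the private exponent d can produce a signature that verifies under the public pair (e, n).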
Cloud computing foreign-literature translation reference (the document contains both the English original and a Chinese translation). Original text:
Technical Issues of Forensic Investigations in Cloud Computing Environments
Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany
Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is cloud security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.
I. INTRODUCTION
Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments.
This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform its own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community.
Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments received little notice, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.
II.
RELATED WORK
Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10], contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. The aspects of forensics in virtual systems have also been addressed by several works [2], [3], [20], including the notion of virtual introspection [25]. In addition, the NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner.
In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.
III. TECHNICAL BACKGROUND
A. Traditional Digital Forensics
The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:
1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.
2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.
3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase.
The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court, it is crucial that the chain of custody is preserved.
B. Cloud Computing
According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer uses the virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can propel the efficiency of the software development process.
In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases, this service can be accessed through an API or a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence. Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.
C. Fault Model
Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:
1) Maliciously Intended Faults
Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and pose a constant threat to customers and CSP.
In this model, a malicious CSP is also included, albeit it is assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.
2) Unintentional Faults
Inconsistencies in technical systems or processes in the cloud do not implicitly have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.
IV. TECHNICAL ISSUES
Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another; e.g., a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in turn be used by investigators.
Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments, as well as suggest several solutions to these problems.
A. Sources and Nature of Evidence
Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.
1) Virtual Cloud Instance: The VM within the cloud, where, e.g., data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.
2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide various pieces of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.
3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most of the scenarios, the user agent (e.g., the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser.
Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.
a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make strong use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to, e.g., unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].
B. Investigations in XaaS Environments
Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage. Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment.
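As an illustration of the browser artifacts listed above, visited URLs can often be recovered from the browser's history database. The sketch below assumes the well-known Firefox `moz_places` table layout; actual paths and schemas vary by browser and version:

```python
import sqlite3

def visited_urls(conn):
    """Return (url, title) rows from a Firefox-style moz_places table."""
    cur = conn.execute("SELECT url, title FROM moz_places ORDER BY id")
    return cur.fetchall()

# In practice: conn = sqlite3.connect("/path/to/profile/places.sqlite")
# Demonstration against an in-memory stand-in with the same table layout:
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE moz_places (id INTEGER PRIMARY KEY, url TEXT, title TEXT)"
)
conn.execute(
    "INSERT INTO moz_places (url, title) VALUES (?, ?)",
    ("https://example.com/login", "Example Login"),
)
history = visited_urls(conn)
print(history)  # [('https://example.com/login', 'Example Login')]
```

For real investigations the database file should be copied (and hashed) first, since opening a live profile can alter on-disk state.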
Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.
1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases, this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.
a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services.
Unfortunately, in case of an account compromise, most CSPs do not offer any possibility for the customer to figure out which data and information have been accessed by the adversary. For the victim, this situation can have tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary, or even by the CSP, e.g. due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSPs [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSPs should offer additional interfaces for the purpose of compliance, forensics, operations, and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error, and event logs that could improve their situation in case of an investigation. Furthermore, due to the limited ability of receiving forensic information from the server and proving the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it.
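The POR/PDP idea can be illustrated with a toy challenge-response sketch. The block size, tagging scheme, and function names below are illustrative assumptions; the actual POR/PDP constructions cited in [24] and [37] are far more storage- and bandwidth-efficient:

```python
import hashlib, hmac, os

# Toy proof-of-data-possession: before upload, the client tags each block
# with an HMAC under a key only it holds; later it challenges the server
# for a block and checks the returned block against the stored tag.
BLOCK = 4096

def tag_blocks(key: bytes, data: bytes) -> list:
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hmac.new(key, str(i).encode() + blk, hashlib.sha256).digest()
            for i, blk in enumerate(blocks)]

def verify_challenge(key: bytes, index: int, block: bytes, tags: list) -> bool:
    expected = hmac.new(key, str(index).encode() + block, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tags[index])

key = os.urandom(32)
data = os.urandom(3 * BLOCK)
tags = tag_blocks(key, data)                 # client keeps only key and tags
# challenge: ask the server for block 1 and verify it came back unmodified
assert verify_challenge(key, 1, data[BLOCK:2 * BLOCK], tags)
assert not verify_challenge(key, 1, b"tampered".ljust(BLOCK, b"\0"), tags)
```

Note that this naive scheme requires the full block to travel back to the verifier; the cited POR/PDP literature shows how to obtain the same guarantee with compact, constant-size responses.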
Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSPs, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases and storage entities. CSPs normally claim this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from viewing and altering log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSPs offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries.
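The suggested sign-before-transfer logging could take the shape of the following hash-chained sketch. The record layout and key handling are assumptions for illustration only; transport encryption (e.g. TLS to the customer's log server) is omitted:

```python
import hashlib, hmac, json, time

# Tamper-evident application log: each record carries an HMAC over the record
# and the previous record's MAC, forming a chain. An adversary who compromises
# the application later cannot rewrite earlier entries without the signing key
# held by the customer-controlled logging server.
def append_record(key: bytes, chain: list, message: str) -> None:
    prev_mac = chain[-1]["mac"] if chain else "0" * 64
    record = {"ts": time.time(), "msg": message, "prev": prev_mac}
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    chain.append(record)

def verify_chain(key: bytes, chain: list) -> bool:
    prev_mac = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "mac"}
        if body["prev"] != prev_mac:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(mac, rec["mac"]):
            return False
        prev_mac = rec["mac"]
    return True

key = b"customer-held-signing-key"
log: list = []
append_record(key, log, "admin login")
append_record(key, log, "config change")
assert verify_chain(key, log)
log[0]["msg"] = "forged"          # any later tampering breaks verification
assert not verify_chain(key, log)
```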
Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial not only for recovering from an incident; forensic investigations also gain leverage from such information, which contributes to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This is due to the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is, in the end, still under the control of the CSP. The CSP controls the hypervisor, which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how the customer's VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM, thereby leading to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance).
This situation completely changed with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX, and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Due to the invention of snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that …
英汉网络安全词典 (English-Chinese Network Security Dictionary)

1. antivirus software / 杀毒软件
Antivirus software, also known as anti-malware software, is a program designed to detect, prevent, and remove malicious software from a computer or network.

2. firewall / 防火墙
A firewall is a network security device that monitors and filters incoming and outgoing network traffic based on predetermined security rules. It helps protect a computer or network from unauthorized access and potential threats.

3. encryption / 加密
Encryption is the process of converting plain text or data into an unreadable format using an algorithm and a key. It helps protect sensitive information and ensures secure communication.

4. phishing / 钓鱼
Phishing is a fraudulent practice where cybercriminals try to trick individuals into revealing sensitive information, such as passwords or credit card numbers, by pretending to be a legitimate entity.

5. malware / 恶意软件
Malware, short for malicious software, is any software designed to cause damage, disrupt operations, or gain unauthorized access to a computer or network. Common types of malware include viruses, worms, trojans, and ransomware.

6. vulnerability / 漏洞
A vulnerability is a weakness or flaw in a computer system or network that can be exploited by attackers. It can result in unauthorized access, data breaches, or system disruptions.

7. authentication / 身份验证
Authentication is the process of verifying the identity of an individual or device accessing a computer system or network. It can involve passwords, biometrics, or other means to ensure the authorized user's identity.

8. intrusion detection system (IDS) / 入侵检测系统
An intrusion detection system is a network security technology that monitors network traffic for malicious activity or unauthorized access. It alerts administrators or automatically takes action to prevent further damage.

9. encryption key / 加密密钥
An encryption key is a piece of information used in encryption algorithms to convert plain text into cipher text or vice versa.
The key is necessary to decrypt the encrypted data and ensure secure communication.

10. cybersecurity / 网络安全
Cybersecurity refers to the practice of protecting computer systems, networks, and data from unauthorized access, damage, or theft. It involves implementing measures to prevent, detect, and respond to cyber threats.

11. two-factor authentication (2FA) / 双因素身份验证
Two-factor authentication is a security process that requires two different forms of identification before granting access to a computer system or network. It typically involves something the user knows (a password) and something the user possesses (a security token or mobile device).

12. data breach / 数据泄露
A data breach is an incident where unauthorized individuals gain access to protected or sensitive data without permission. It can result in the exposure or theft of personal information, financial records, or other confidential data.

13. cyber attack / 网络攻击
A cyber attack is an intentional act to compromise computer systems, networks, or devices by exploiting vulnerabilities. It can involve stealing sensitive data, disrupting operations, or causing damage to digital infrastructure.

14. vulnerability assessment / 漏洞评估
A vulnerability assessment is the process of identifying and evaluating vulnerabilities in a computer system, network, or application. It helps organizations understand their security weaknesses and take appropriate measures to mitigate risks.

15. secure socket layer (SSL) / 安全套接字层
Secure Socket Layer is a cryptographic protocol that ensures secure communication over a computer network. It provides encryption, authentication, and integrity, making it widely used for securing online transactions and data transfer.

The entries above are a selection from the English-Chinese network security dictionary, provided for reference.
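As a worked example of entry 11, the time-based one-time passwords generated by authenticator apps follow RFC 6238 (TOTP, built on the HOTP construction of RFC 4226) and can be implemented in a few lines:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t // step), digits)

# RFC 6238 test vector: SHA-1, shared secret "12345678901234567890", T = 59s
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

The server performs the same computation with its stored copy of the secret, so a matching code proves possession of the second factor without ever transmitting the secret itself.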
[Code of Federal Regulations]
[Title 21, Volume 1]
[Revised as of April 1, 2006]
[CITE: 21CFR 11]

TITLE 21--Food and Drugs
CHAPTER I--Food and Drug Administration, Department of Health and Human Services
Subchapter A--General

PART 11 Electronic Records; Electronic Signatures

Subpart A--General Provisions
Sec. Scope.
(a) The regulations in this part set forth the criteria under which the agency considers electronic records, electronic signatures, and handwritten signatures executed to electronic records to be trustworthy, reliable, and generally equivalent to paper records and handwritten signatures executed on paper.

(b) This part applies to records in electronic form that are created, modified, maintained, archived, retrieved, or transmitted, under any records requirements set forth in agency regulations. This part also applies to electronic records submitted to the agency under requirements of the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act, even if such records are not specifically identified in agency regulations. However, this part does not apply to paper records that are, or have been, transmitted by electronic means.

(c) Where electronic signatures and their associated electronic records meet the requirements of this part, the agency will consider the electronic signatures to be equivalent to full handwritten signatures, initials, and other general signings as required by agency regulations, unless specifically excepted by regulation(s) effective on or after August 20, 1997.

Under this part, electronic records meeting the requirements of this part may be used in lieu of paper records, unless paper records are specifically required.
Verifying Signed Artifacts: A Step-by-Step Troubleshooting Guide

Introduction:
In this article, we will explore the common issues and troubleshooting steps associated with the "verifying signed artifacts" error. Verifying signed artifacts is a crucial process in software development and deployment, ensuring that the digital signature attached to a file or artifact is valid and has not been tampered with. However, sometimes these verifications may fail, leading to errors or unexpected behavior. In this comprehensive guide, we will delve into the potential causes of these errors and outline step-by-step solutions to rectify them.

Step 1: Understand the Basics
Before diving into troubleshooting, it is important to have a clear understanding of the concept of signed artifacts and why verification is essential. Signed artifacts are files that have a digital signature attached to them, intended to prove the authenticity and integrity of the file. The verification process involves checking the signature against a trusted certificate authority (CA) to ensure its validity.

Step 2: Identify the Error Message
The first step in troubleshooting is to identify the exact error message you encounter when trying to verify the signed artifact. This information is crucial, as it provides important clues about the root cause of the problem. Common error messages may indicate issues with the certificate, the signing process, or the verification mechanism.

Step 3: Check the Certificate Chain
One possible cause of the error is an incomplete or improperly installed certificate chain. Start by checking the certificate chain attached to the signed artifact. Ensure that all intermediate certificates are present and correctly installed on the machine doing the verification.
If any of the certificates are missing or expired, obtain the correct certificates from the issuing CA and install them correctly.

Step 4: Verify the Certificate Revocation Status
Another potential issue could be the revocation status of the certificate used to sign the artifact. Certificates can be revoked for various reasons, including compromise, expiration, or a change in the certificate owner's circumstances. Verify the revocation status of the certificate using the certificate's serial number or thumbprint. Check the certificate authority's Certificate Revocation List (CRL) or Online Certificate Status Protocol (OCSP) to ensure that the certificate is valid and has not been revoked.

Step 5: Validate the Digital Signature Algorithm
Sometimes, the error can be caused by an incompatibility between the signing algorithm used to sign the artifact and the verification mechanism being used to validate it. Ensure that the signature algorithm used is supported by the verification process. Check if the verification mechanism requires specific algorithms or cryptographic standards, and verify that the signature algorithm used is compliant.

Step 6: Confirm the Trustworthiness of the Certificate
The error could also occur if the certificate used to sign the artifact is not trusted by the verification mechanism. Check the trust anchors, root certificates, or certificate trust lists (CTLs) used by the verification component. Ensure that the certificate used to sign the artifact is present and trusted.

Step 7: Verify Time and Date Settings
Incorrect time and date settings on the machine performing the verification can also cause errors during the verification process. Ensure that the system clock is correctly set and synchronized with a reliable time source.
An incorrect time or date can cause the verification mechanism to deem the certificate invalid.

Step 8: Update Software and Security Components
Outdated or incompatible software or security components can introduce compatibility issues that lead to verification errors. Ensure that all relevant software, including certificate authorities, verification libraries, and operating systems, is up to date. Check with the software vendors for any known issues or updates specific to the verification process.

Step 9: Seek Assistance from Trusted Sources
If the above steps do not resolve the issue, it might be beneficial to seek assistance from trusted sources. Reach out to software vendors, certificate authorities, or online forums dedicated to software security and digital signatures. These sources may provide specific guidance or solutions tailored to your particular situation.

Conclusion:
Verifying signed artifacts is a critical step in ensuring the integrity and authenticity of software. This troubleshooting guide has provided a step-by-step approach to identify and resolve the "verifying signed artifacts" error. By understanding the basics, checking the certificate chain, validating the signing algorithm, and confirming trustworthiness, you can diagnose and rectify common issues that may arise during the verification process. Remember to keep all software up to date and seek assistance when needed. With these steps, you can ensure the successful verification of signed artifacts and maintain the security and trustworthiness of your software.
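The core verification flow from Step 1 can be modeled in a short sketch. A keyed MAC stands in here for the X.509 public-key machinery a real toolchain uses, purely to keep the example self-contained; the function names are illustrative assumptions:

```python
import hashlib, hmac

# Minimal model of detached-signature verification: the publisher distributes
# the artifact plus a signature over its SHA-256 digest; the consumer
# recomputes the digest and checks the signature before trusting the file.
def sign_artifact(key: bytes, artifact: bytes) -> bytes:
    digest = hashlib.sha256(artifact).digest()
    return hmac.new(key, digest, hashlib.sha256).digest()

def verify_artifact(key: bytes, artifact: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(artifact).digest()
    expected = hmac.new(key, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

sig = sign_artifact(b"release-key", b"artifact bytes")
assert verify_artifact(b"release-key", b"artifact bytes", sig)
assert not verify_artifact(b"release-key", b"tampered bytes", sig)
```

Any change to the artifact changes its digest and invalidates the signature, which is exactly the failure surfaced by a "verifying signed artifacts" error.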
Cloudflare Error Code 526: A Comprehensive Guide

Introduction:
In the modern digital landscape, website security plays a pivotal role in maintaining user trust and safeguarding sensitive information. However, even with robust security measures in place, certain errors can occur, impeding the seamless flow of web traffic. One such error is Cloudflare Error Code 526. In this article, we will delve into the intricacies of Error Code 526, discussing its causes, potential solutions, and best practices to mitigate its occurrence.

Section 1: Understanding Cloudflare Error Code 526

1.1 What is Cloudflare?
Cloudflare is a popular content delivery network (CDN) and cybersecurity provider that acts as an intermediary between web servers and users. It optimizes website performance, offers DDoS protection, and ensures secure connections by employing advanced encryption protocols.

1.2 What is Error Code 526?
Cloudflare Error Code 526 indicates an invalid SSL certificate, preventing a secure connection between Cloudflare and the origin web server. It usually occurs when a website requires HTTPS (Hypertext Transfer Protocol Secure) but the origin presents an expired, self-signed, or untrusted SSL certificate.

Section 2: Causes of Cloudflare Error Code 526

2.1 Expired SSL Certificate
An expired SSL certificate is a common offender behind Error Code 526. SSL certificates have an expiration date, and if a website fails to renew one promptly, Cloudflare may flag it as invalid.

2.2 Self-signed or Untrusted SSL Certificate
Websites using self-signed certificates, or certificates not issued by a recognized certificate authority such as Let's Encrypt or DigiCert, are not trusted. Consequently, Cloudflare deems them invalid, triggering Error Code 526.

2.3 Misconfiguration of SSL Settings
In certain cases, Error Code 526 stems from the misconfiguration of SSL settings within the Cloudflare dashboard or on the origin web server.
This includes selecting incorrect SSL options or failing to implement SSL/TLS correctly.

Section 3: Troubleshooting Cloudflare Error Code 526

3.1 Step 1: Verify SSL Certificate Expiry
Begin troubleshooting by checking the SSL certificate's expiration date. Utilize SSL certificate validation tools or consult with your certificate provider to ensure it is valid and has not expired.

3.2 Step 2: Validate the Certificate Authority
Confirm whether the SSL certificate's issuing Certificate Authority (CA) is reputable and recognized by major browsers and Cloudflare. If the CA is not trusted, consider obtaining a certificate from a recognized authority.

3.3 Step 3: Check the SSL Certificate Configuration
Review the SSL certificate's configuration on both Cloudflare and the origin web server. Ensure the certificate is correctly installed, and that the cryptographic settings match on both ends, such as the protocol version, cipher suites, and key exchange mechanisms.

3.4 Step 4: Renew or Reissue the SSL Certificate
If the SSL certificate has expired, renew or reissue it promptly. Work closely with your certificate provider to ensure a smooth transition. This process may require generating a new certificate signing request (CSR) and obtaining a fresh SSL certificate.

3.5 Step 5: Enable Full or Strict SSL Mode
Within the Cloudflare dashboard, navigate to the SSL/TLS settings and set the SSL mode to "Full" or "Full (Strict)". This ensures a secure connection and reduces the possibility of encountering Error Code 526.

Section 4: Best Practices to Avoid Cloudflare Error Code 526

4.1 Regular SSL Certificate Monitoring and Renewal
Maintain a vigilant approach towards SSL certificate management. Regularly monitor expiration dates and renew certificates in a timely manner, reducing the risk of encountering Error Code 526.

4.2 Utilize Reliable Certificate Authorities
Always obtain SSL certificates from reputable and widely recognized certificate authorities.
This ensures compatibility with major browsers and Cloudflare, reducing the likelihood of encountering certificate-related errors.

4.3 Double-check the SSL Configuration
Thoroughly review SSL configuration settings on both Cloudflare and the origin web server. Ensure consistency across cryptographic settings and encryption protocols to establish a secure and error-free connection.

4.4 Implement Automated Certificate Management
Leverage automated certificate management tools to streamline the expiration tracking and renewal process. These tools can significantly reduce the chances of encountering SSL certificate-related errors.

Conclusion:
Cloudflare Error Code 526 can significantly impact website performance and compromise user security. Understanding its causes and implementing proactive measures is crucial for smooth web traffic operations. By verifying SSL certificates, checking Certificate Authorities, reviewing SSL configurations, and adopting best practices, website administrators can mitigate Error Code 526 and provide a secure browsing experience for users. Remember, consistent SSL certificate monitoring and strict adherence to SSL standards are vital pillars of a robust website security infrastructure.
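The expiry check from Step 1 of the troubleshooting section can be automated with Python's standard library: `ssl.cert_time_to_seconds` parses the OpenSSL-style `notAfter` date string. The helper name below is an illustrative assumption:

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str) -> int:
    """not_after in the OpenSSL text form, e.g. 'Jun 01 12:00:00 2031 GMT'."""
    ts = ssl.cert_time_to_seconds(not_after)
    expires = datetime.fromtimestamp(ts, tz=timezone.utc)
    return (expires - datetime.now(tz=timezone.utc)).days

print(days_until_expiry("Jan 01 00:00:00 2000 GMT") < 0)  # True: long expired
```

A negative value means the origin certificate has expired and will trigger Error 526 under Full (Strict) SSL; in practice the `notAfter` string would be taken from the certificate actually served by the origin.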
ATECC508A
Atmel CryptoAuthentication Device
SUMMARY DATASHEET
Atmel-8923BS-CryptoAuth-ATECC508A-Datasheet-Summary_102015

Features
∙ Cryptographic co-processor with secure hardware-based key storage
∙ Performs high-speed public key (PKI) algorithms
  – ECDSA: FIPS186-3 Elliptic Curve Digital Signature Algorithm
  – ECDH: FIPS SP800-56A Elliptic Curve Diffie-Hellman Algorithm
∙ NIST standard P256 elliptic curve support
∙ SHA-256 hash algorithm with HMAC option
∙ Host and client operations
∙ 256-bit key length
∙ Storage for up to 16 keys
∙ Two high-endurance monotonic counters
∙ Guaranteed unique 72-bit serial number
∙ Internal high-quality FIPS random number generator (RNG)
∙ 10Kb EEPROM memory for keys, certificates, and data
∙ Multiple options for consumption logging and one-time write information
∙ Intrusion latch for external tamper switch or power-on chip enablement
∙ Multiple I/O options:
  – High-speed single-pin interface, with one GPIO pin
  – 1MHz standard I2C interface
∙ 2.0V to 5.5V supply voltage range
∙ 1.8V to 5.5V IO levels
∙ <150nA sleep current
∙ 8-pad UDFN, 8-lead SOIC, and 3-lead CONTACT packages

Applications
∙ IoT node security and ID
∙ Secure download and boot
∙ Ecosystem control
∙ Message security
∙ Anti-cloning

This is a summary document. The complete document is available under NDA. For more information, please contact your local Atmel sales office.

Secure Download and Boot: authenticate and protect code in transit
Ecosystem Control: ensure only OEM/licensed nodes and accessories work
Anti-cloning: prevent building with an identical BOM or stolen code
Message Security: authentication, message integrity, and confidentiality of network nodes (IoT)
CryptoAuthentication: ensures things and code are real, untampered, and confidential

Pin Configuration and Pinouts
Table 1. Pin Configuration
Figure 1. Pinouts

1 Introduction

1.1 Applications
The Atmel® ATECC508A is a member of the Atmel CryptoAuthentication™ family of crypto engine authentication devices with highly secure hardware-based key storage. The ATECC508A has a flexible command set that allows use in many applications, including the following, among many others:

∙ Network/IoT Node Protection
Authenticates node IDs, ensures the integrity of messages, and supports key agreement to create session keys for message encryption.

∙ Anti-Counterfeiting
Validates that a removable, replaceable, or consumable client is authentic. Examples of clients could be system accessories, electronic daughter cards, or other spare parts. It can also be used to validate a software/firmware module or memory storage element.

∙ Protecting Firmware or Media
Validates code stored in flash memory at boot to prevent unauthorized modifications, encrypts downloaded program files as a common broadcast, or uniquely encrypts code images to be usable on a single system only.

∙ Storing Secure Data
Stores secret keys for use by crypto accelerators in standard microprocessors. Programmable protection is available using encrypted/authenticated reads and writes.

∙ Checking User Passwords
Validates user-entered passwords without letting the expected value become known, maps memorable passwords to a random number, and securely exchanges password values with remote systems.

1.2 Device Features
The ATECC508A includes an EEPROM array which can be used for storage of up to 16 keys, certificates, miscellaneous read/write, read-only or secret data, consumption logging, and security configurations.
Access to the various sections of memory can be restricted in a variety of ways, and the configuration can then be locked to prevent changes.

The ATECC508A features a wide array of defense mechanisms specifically designed to prevent physical attacks on the device itself, or logical attacks on the data transmitted between the device and the system. Hardware restrictions on the ways in which keys are used or generated provide further defense against certain styles of attack.

Access to the device is made through a standard I2C interface at speeds of up to 1Mb/s. The interface is compatible with standard Serial EEPROM I2C interface specifications. The device also supports a Single-Wire Interface (SWI), which can reduce the number of GPIOs required on the system processor, and/or reduce the number of pins on connectors. If the Single-Wire Interface is enabled, the remaining pin is available for use as a GPIO, an authenticated output, or a tamper input.

Using either the I2C or Single-Wire Interface, multiple ATECC508A devices can share the same bus, which saves processor GPIO usage in systems with multiple clients such as different color ink tanks or multiple spare parts, for example.

Each ATECC508A ships with a guaranteed unique 72-bit serial number. Using the cryptographic protocols supported by the device, a host system or remote server can verify a signature of the serial number to prove that the serial number is authentic and not a copy. Serial numbers are often stored in a standard Serial EEPROM; however, these can be easily copied with no way for the host to know if the serial number is authentic or if it is a clone.

The ATECC508A can generate high-quality FIPS random numbers and employ them for any purpose, including usage as part of the device's crypto protocols.
Because each random number is guaranteed to be essentially unique from all numbers ever generated on this or any other device, their inclusion in the protocol calculation ensures that replay attacks (i.e., re-transmitting a previously successful transaction) will always fail.

System integration is easy due to a wide supply voltage range (2.0V to 5.5V) and an ultra-low sleep current (<150nA). Multiple package options are available. See Section 3 for information regarding compatibility with the Atmel ATSHA204 and ATECC108.

1.3 Cryptographic Operation
The ATECC508A implements a complete asymmetric (public/private) key cryptographic signature solution based upon Elliptic Curve Cryptography and the ECDSA signature protocol. The device features hardware acceleration for the NIST standard P256 prime curve and supports the complete key life cycle, from high-quality private key generation to ECDSA signature generation, ECDH key agreement, and ECDSA public key signature verification.

The hardware accelerator can implement such asymmetric cryptographic operations from ten to one thousand times faster than software running on standard microprocessors, without the usual high risk of key exposure that is endemic to standard microprocessors.

The device is designed to securely store multiple private keys along with their associated public keys and certificates. The signature verification command can use any stored or external ECC public key. Public keys stored within the device can be configured to require validation via a certificate chain to speed up subsequent device authentications.

Random private key generation is supported internally within the device to ensure that the private key can never be known outside of the device.
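The hash-based challenge-response scheme the device supports can be modeled on the host side in a few lines. This is an illustrative sketch of the underlying idea only, not the actual MAC/CheckMac command format:

```python
import hashlib, secrets

# Model of a SHA-256 challenge-response check: the device hashes the secret
# with the challenge; the host repeats the computation with its stored copy.
# An eavesdropper on the bus sees only the challenge and the hash output.
def device_response(secret_key: bytes, challenge: bytes) -> bytes:
    return hashlib.sha256(secret_key + challenge).digest()

def host_verify(stored_secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hashlib.sha256(stored_secret + challenge).digest()
    return secrets.compare_digest(expected, response)

key = secrets.token_bytes(32)        # 256-bit secret, as stored on the device
chal = secrets.token_bytes(32)       # fresh random challenge defeats replay
assert host_verify(key, chal, device_response(key, chal))
assert not host_verify(key, secrets.token_bytes(32), device_response(key, chal))
```

Using a fresh random challenge per transaction is what makes a recorded response useless to a replay attacker, as the surrounding text notes for the device's RNG.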
The public key corresponding to a stored private key is always returned when the key is generated, and it may optionally be computed at a later time.

The ATECC508A also supports a standard hash-based challenge-response protocol in order to simplify programming. In its most basic instantiation, the system sends a challenge to the device, which combines that challenge with a secret key and then sends the response back to the system. The device uses the SHA-256 cryptographic hash algorithm to make that combination, so that an observer on the bus cannot derive the value of the secret key, while preserving the ability of a recipient to verify that the response is correct by performing the same calculation with a stored copy of the secret on the recipient's system.

Due to the flexible command set of the ATECC508A, these basic operation sets (i.e., ECDSA signatures, ECDH key agreement, and SHA-256 challenge-response) can be expanded in many ways.

In a host-client configuration where the host (for instance, a mobile phone) needs to verify a client (for instance, an OEM battery), there is a need to store the secret in the host in order to validate the response from the client. The CheckMac command allows the device to securely store the secret in the host system and hides the correct response value from the pins, returning only a yes or no answer to the system.

All hashing functions are implemented using the industry-standard SHA-256 secure hash algorithm, which is part of the latest set of high-security cryptographic algorithms recommended by various government agencies and cryptographic experts. The ATECC508A employs full-sized 256-bit secret keys to prevent any kind of exhaustive attack.

2 Electrical Characteristics

2.1 Absolute Maximum Ratings*
Operating Temperature: -40°C to 85°C
Storage Temperature: -65°C to 150°C
Maximum Operating Voltage: 6.0V
DC Output Current: 5mA
Voltage on any pin: -0.5V to (VCC + 0.5V)

*Notice: Stresses beyond those listed under "Absolute Maximum Ratings" may cause permanent damage to the device. This is a stress rating only, and functional operation of the device at these or any other conditions beyond those indicated in the operational sections of this specification is not implied. Exposure to absolute maximum rating conditions for extended periods may affect device reliability.

2.2 Reliability
The ATECC508A is fabricated with Atmel's high-reliability CMOS EEPROM manufacturing technology.
Table 2-1. EEPROM Reliability

2.3 AC Parameters: All I/O Interfaces
Figure 2-1. AC Parameters: All I/O Interfaces
Note: 1. These parameters are guaranteed through characterization, but not tested.

2.3.1 AC Parameters: Single-Wire Interface
Table 2-2. AC Parameters: Single-Wire Interface
Applicable from TA = -40°C to +85°C, VCC = +2.0V to +5.5V, CL = 100pF (unless otherwise noted).
Note: 1. START, ZLO, ZHI, and BIT are designed to be compatible with a standard UART running at 230.4 kbaud for both transmit and receive. The UART should be set to seven data bits, no parity, and one stop bit.

2.3.2 AC Parameters: I2C Interface
Table 2-3. AC Characteristics of I2C Interface
Applicable over recommended operating range from TA = -40°C to +85°C, VCC = +2.0V to +5.5V, CL = 1 TTL gate and 100pF (unless otherwise noted).
Note: 1.
Values are based on characterization and are not tested.

AC measurement conditions:
∙ RL (connects between SDA and VCC): 1.2kΩ (for VCC = +2.0V to +5.0V)
∙ Input pulse voltages: 0.3VCC to 0.7VCC
∙ Input rise and fall times: ≤ 50ns
∙ Input and output timing reference voltage: 0.5VCC

2.4 DC Parameters: All I/O Interfaces

Table 2-4. DC Parameters on All I/O Interfaces

2.4.1 VIH and VIL Specifications

The input voltage thresholds when in Sleep or Idle mode are dependent on the VCC level, as shown in the graph below. When the device is active (i.e. not in Sleep or Idle mode), the input voltage thresholds differ depending upon the state of TTLenable (bit 1) within the ChipMode byte in the Configuration zone of the EEPROM. When a common voltage is used for the ATECC508A VCC pin and the input pull-up resistor, this bit should be set to one, which permits the input thresholds to track the supply.

If the voltage supplied to the VCC pin of the ATECC508A is different from the system voltage to which the input pull-up resistor is connected, the system designer may choose to set TTLenable to zero, which enables a fixed input threshold according to the following table. The following applies only when the device is active:

Table 2-5. VIL, VIH on All I/O Interfaces

3 Compatibility

3.1 Atmel ATSHA204

The ATECC508A is fully compatible with the ATSHA204 and ATSHA204A devices. If properly configured, it can be used in all situations where the ATSHA204 or ATSHA204A is currently employed. Because the Configuration zone is larger, the personalization procedures for the device must be updated relative to those used for the ATSHA204 or ATSHA204A.

3.2 Atmel ATECC108

The ATECC508A is designed to be fully compatible with the ATECC108 and ATECC108A devices.
If properly configured, it can be used in all situations where the ATECC108 is currently employed. In many situations, the ATECC508A can also be used in an ATECC108 application without change. The new revisions provide significant advantages, as outlined below.

New Features in ATECC108A vs. ATECC108
∙ Intrusion Detection Capability, Including Gating Key Use
∙ New SHA Command, Also Computes HMAC
∙ X.509 Certificate Verification Capability
∙ Programmable Watchdog Timer Length
∙ Programmable Power Reduction
∙ Shared Random Nonce and Key Configuration Validation (GenDig Command)
∙ Larger Slot 8, Extended to 416 Bytes

4 Ordering Information

Notes: 1. Please contact Atmel for availability.
2. Please contact Atmel for thinner packages.

5 Package Drawings

5.1 8-lead SOIC
5.2 8-pad UDFN
5.3 3-lead CONTACT

6 Revision History

Atmel Corporation, 1600 Technology Drive, San Jose, CA 95110 USA. T: (+1)(408) 441.0311 F: (+1)(408) 436.4200 │ © 2015 Atmel Corporation. / Rev.: Atmel-8923BS-CryptoAuth-ATECC508A-Datasheet-Summary_102015.

Atmel®, the Atmel logo and combinations thereof, Enabling Unlimited Possibilities®, CryptoAuthentication™, and others are registered trademarks or trademarks of Atmel Corporation in the U.S. and other countries.

DISCLAIMER: The information in this document is provided in connection with Atmel products.
No license, express or implied, by estoppel or otherwise, to any intellectual property right is granted by this document or in connection with the sale of Atmel products. EXCEPT AS SET FORTH IN THE ATMEL TERMS AND CONDITIONS OF SALES LOCATED ON THE ATMEL WEBSITE, ATMEL ASSUMES NO LIABILITY WHATSOEVER AND DISCLAIMS ANY EXPRESS, IMPLIED OR STATUTORY WARRANTY RELATING TO ITS PRODUCTS INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT. IN NO EVENT SHALL ATMEL BE LIABLE FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE, SPECIAL OR INCIDENTAL DAMAGES (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF PROFITS, BUSINESS INTERRUPTION, OR LOSS OF INFORMATION) ARISING OUT OF THE USE OR INABILITY TO USE THIS DOCUMENT, EVEN IF ATMEL HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Atmel makes no representations or warranties with respect to the accuracy or completeness of the contents of this document and reserves the right to make changes to specifications and product descriptions at any time without notice. Atmel does not make any commitment to update the information contained herein. Unless specifically provided otherwise, Atmel products are not suitable for, and shall not be used in, automotive applications. Atmel products are not intended, authorized, or warranted for use as components in applications intended to support or sustain life.

SAFETY-CRITICAL, MILITARY, AND AUTOMOTIVE APPLICATIONS DISCLAIMER: Atmel products are not designed for and will not be used in connection with any applications where the failure of such products would reasonably be expected to result in significant personal injury or death ("Safety-Critical Applications") without an Atmel officer's specific written consent. Safety-Critical Applications include, without limitation, life support devices and systems, equipment or systems for the operation of nuclear facilities and weapons systems.
Atmel products are not designed nor intended for use in military or aerospace applications or environments unless specifically designated by Atmel as military-grade. Atmel products are not designed nor intended for use in automotive applications unless specifically designated by Atmel as automotive-grade.
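The hash-based challenge-response flow described in the datasheet above can be sketched in Python. This is a simplified illustration only, not the ATECC508A wire protocol: the device's actual MAC command hashes a fixed-layout message (opcode, mode, slot ID, and other fields), whereas this sketch simply hashes the secret concatenated with the challenge.

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # 256-bit secret provisioned into both device and host

def device_response(secret: bytes, challenge: bytes) -> bytes:
    """What the client device computes: SHA-256 over secret || challenge."""
    return hashlib.sha256(secret + challenge).digest()

def host_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Host recomputes the digest with its stored copy of the secret.
    hmac.compare_digest gives a constant-time comparison."""
    expected = hashlib.sha256(secret + challenge).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)                # fresh random challenge per authentication
resp = device_response(SECRET_KEY, challenge)
assert host_verify(SECRET_KEY, challenge, resp)          # genuine device passes
assert not host_verify(os.urandom(32), challenge, resp)  # wrong key fails
```

An observer who captures both challenge and response on the bus still cannot recover the secret, because SHA-256 is one-way; this is the property the datasheet relies on.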
FiberHome wireless access point FH-AP2400-27G firmware

FiberHome wireless access point FH-AP2400-27G fat-mode firmware: a friend just gave me a FiberHome wireless device, but after connecting over serial I found that it is running in thin (controller-managed) mode. Could anyone help me find a fat-mode (standalone) firmware for it? Thanks!

*****************

PS:
U-Boot 1.2.0 (Jul 14 2010 - 11:05:53)
Pismo 71Ex
DRAM: 64 MB
Top of RAM usable for U-Boot at: 84000000
Reserving 258k for U-Boot at: 83fbc000
Reserving 192k for malloc() at: 83f8c000
Reserving 44 Bytes for Board Info at: 83f8bfd4
Reserving 40 Bytes for Global Data at: 83f8bfac
Reserving 128k for boot params() at: 83f6bfac
Stack Pointer at: 83f6bf88
Now running in RAM - U-Boot at: 83fbc000
Flash: 16 MB
In: serial
Out: serial
Err: serial
Net: ag7240_enet_initialize...
eth0: 00:26:7a:15:f8:08
eth0
bootcmd="bootm 0xbf150000"
## Booting image at bf150000 ...
Image Name: Linux Kernel Image
Created: 2010-11-18 10:11:56 UTC
Image Type: MIPS Linux Kernel Image (gzip compressed)
Data Size: 957290 Bytes = 934.9 kB
Load Address: 80002000
Entry Point: 801e1000
Verifying Checksum ... OK
Uncompressing Kernel Image ... OK
No initrd
## Transferring control to Linux (at address 801e1000) ...
## Giving linux memsize in bytes, 67108864
Starting kernel ...
Booting AR7240(Python)...
Linux version 2.6.15--LSDK-7.3.0.387 (blackdragon@aqlinux) (gcc version 3.4.4) #200 Mon Nov 8 15:36:42 HKT 2010
flash_size passed from bootloader = 16
arg 1: console=ttyS0,115200
arg 2: mem=64M
arg 3: panic=1
arg 4: noinitrd
arg 5: rootfstype=squashfs,jffs2
arg 6: root=31:05
CPU revision is: 00019374
Determined physical RAM map:
memory: 02000000 @ 00000000 (usable)
User-defined physical RAM map:
memory: 04000000 @ 00000000 (usable)
Built 1 zonelists
Kernel command line: console=ttyS0,115200 mem=64M panic=1 noinitrd rootfstype=squashfs,jffs2 root=31:05
Primary instruction cache 64kB, physically tagged, 4-way, linesize 32 bytes.
Primary data cache 32kB, 4-way, linesize 32 bytes.
Synthesized TLB refill handler (20 instructions).
Synthesized TLB load handler fastpath (32 instructions).
Synthesized TLB store handler fastpath (32 instructions).
Synthesized TLB modify handler fastpath (31 instructions).
Cache parity protection disabled
PID hash table entries: 512 (order: 9, 8192 bytes)
Using 175.000 MHz high precision timer.
Dentry cache hash table entries: 16384 (order: 4, 65536 bytes)
Inode-cache hash table entries: 8192 (order: 3, 32768 bytes)
Memory: 62720k/65536k available (1567k kernel code, 2776k reserved, 344k data, 140k init, 0k highmem)
Mount-cache hash table entries: 512
Checking for 'wait' instruction... available.
NET: Registered protocol family 16
PCI init:ar7240_pcibios_init
SCSI subsystem initialized
Returning IRQ 48
AR7240 GPIOC major 0
JFFS2 version 2.2. (C) 2001-2003 Red Hat, Inc.
Initializing Cryptographic API
io scheduler noop registered
io scheduler deadline registered
Serial: 8250/16550 driver $Revision: #1 $ 1 ports, IRQ sharing disabled
serial8250.0: ttyS0 at MMIO 0x0 (irq = 19) is a 16550A
RAMDISK driver initialized: 1 RAM disks of 8192K size 1024 blocksize
loop: loaded (max 8 devices)
Creating 11 MTD partitions on "ar7240-nor0":
0x00000000-0x00040000 : "red-boot"
0x00040000-0x00070000 : "pepboot"
0x00070000-0x00080000 : "magicblock"
0x00080000-0x00150000 : "storage"
0x00150000-0x00250000 : "kernel"
0x00250000-0x00ed0000 : "rootfs"
0x00ed0000-0x00fd0000 : "config"
0x00fd0000-0x00fe0000 : "u-boot config"
0x00fe0000-0x00ff0000 : "board config"
0x00ff0000-0x01000000 : "radio config"
0x00150000-0x00fe0000 : "thinap-image"
NET: Registered protocol family 2
IP route cache hash table entries: 1024 (order: 0, 4096 bytes)
TCP established hash table entries: 4096 (order: 2, 16384 bytes)
TCP bind hash table entries: 4096 (order: 2, 16384 bytes)
TCP: Hash tables configured (established 4096 bind 4096)
TCP reno registered
TCP bic registered
NET: Registered protocol family 1
NET: Registered protocol family 17
802.1Q VLAN Support v1.8 Ben Greear <***********************> ler<****************>
ar7240wdt_init: Registering WDT success
VFS: Mounted root (jffs2 filesystem) readonly.
Freeing unused kernel memory: 140k freed
init started: BusyBox v1.4.2 (2010-11-18 18:13:56 HKT) multi-call
binary
Mounting file systems
chown: unknown user/group root:root
chown: unknown user/group root:root
Device version is the newest.
APin ap83 pro_ctl_mod init
AQ2000-SNH
AQ2000-SNH
AG7240: Length per segment 1536
AG7240: Max segments per packet 1
AG7240: Max tx descriptor count 40
AG7240: Max rx descriptor count 252
AG7240: fifo cfg 3 01f00140
AG7240CHH: Mac address for unit 0
AG7240CHH: 55:f4:95:c6:fc:3f
AG7240CHH: Mac address for unit 1
AG7240CHH: 0e:b9:fb:99:6f:54
ath_hal: module license 'Proprietary' taints kernel.
ath_hal: 0.9.17.1 (AR5416, DEBUG, REGOPS_FUNC, WRITE_EEPROM, 11D)
wlan: 0.8.4.2 (Atheros/multi-bss)
ath_rate_atheros: Copyright (c) 2001-2005 Atheros Communications, Inc, All Rights Reserved
ath_dev: Copyright (c) 2001-2007 Atheros Communications, Inc, All Rights Reserved
ath_pci: 0.9.4.5 (Atheros/multi-bss)
PismoLabs HALCUS128 board with high power antenna
ANT_DIV_COMB === Check for capability modal version 4
ANT_DIV_COMB === Check for capability ant_div_control1 2 ant_div_control2 6
wifi0: Atheros 9285: mem=0x10000000, irq=48 hw_base=0xb0000000
wlan: mac acl policy registered
wlan_sms4: Version 1.0.1
Copyright (c) 2001-2007 IWNCOMM Communications, Inc, All Rights Reserved
ag7240_ring_alloc Allocated 640 at 0x83ea0800
ag7240_ring_alloc Allocated 4032 at 0x801f5000
Setting PHY...
reg 0x10 600
device eth0 entered promiscuous mode
'/www/image/logo.jpg' exists
socket: Bad file descriptor
/sbin/mini_httpd: started as root without requesting chroot(), warning only
hostapd config dir ok
killall: udhcpc: no process killed
dev.wifi0.thinap = 1
wtp.log.txt ok
socket: Bad file descriptor
bind: Bad file descriptor
/sbin/mini_httpd: can't bind to any address
Company Name: WUHAN HONGXIN TELECOMMUNICATION TECHNOLOGIES CO.,LTD
SN:020*********A12010C00791
Ap Mode:thinap
Device Type:FH-AP2400-27G
MAC:00:26:7a:15:f8:08
Software Version:3.5.12
Hardware Version:2.5
=========================
Please press Enter to activate this console.
AB SRTP usage

AB Secure Real-time Transport Protocol (AB SRTP) is a cryptographic protocol that provides secure communication for real-time streaming applications. It is an extension of the Secure Real-time Transport Protocol (SRTP), which is widely used for securing voice and video communication over IP networks.

In this article, we will discuss the usage and implementation of AB SRTP, its benefits, and its comparison with other secure transport protocols.

1. Introduction to AB SRTP

AB SRTP is designed to provide confidentiality, integrity, and authenticity for real-time media streams. It ensures that the transmitted data remains secure and cannot be intercepted or tampered with by unauthorized entities. AB SRTP utilizes cryptographic algorithms to encrypt the media packets, making it difficult for attackers to decipher the content.

2. Key Features of AB SRTP

2.1 Encryption
AB SRTP uses encryption algorithms, such as the Advanced Encryption Standard (AES) or Triple Data Encryption Standard (3DES), to encrypt the media packets. This ensures that the content of the packets is accessible only to the intended recipients.

2.2 Authentication
AB SRTP provides authentication mechanisms to verify the integrity and authenticity of the media packets. It uses Hash-based Message Authentication Codes (MACs), such as HMAC-SHA1 or HMAC-SHA256, to ensure that the packets have not been tampered with during transmission.

2.3 Key Exchange
AB SRTP employs key exchange protocols, such as Datagram Transport Layer Security (DTLS) or Transport Layer Security (TLS), to securely exchange encryption keys between the communicating parties. This ensures that only authorized entities can decrypt the encrypted media packets.

2.4 Forward Secrecy
AB SRTP supports forward secrecy, which means that even if an attacker manages to compromise the encryption keys, they will not be able to decrypt the previously exchanged media packets.
This is achieved by using ephemeral encryption keys that are generated for each session.

3. Usage of AB SRTP

AB SRTP can be used in various real-time streaming applications where security and privacy are paramount. Some common use cases include:

3.1 Voice and Video Communication
AB SRTP can be used to secure voice and video communication over IP networks, such as Voice over Internet Protocol (VoIP) systems, video conferencing applications, and streaming media services. It ensures that the media content remains confidential and cannot be intercepted or modified by attackers.

3.2 Secure File Transfer
AB SRTP can also be used for secure file transfer applications, where large files need to be transferred securely over the network. It provides confidentiality and integrity guarantees for the transferred files, ensuring that they are not accessed or modified by unauthorized entities.

3.3 IoT Applications
With the rise of Internet of Things (IoT) devices, AB SRTP can be used to secure real-time data streams from IoT sensors and actuators. It ensures that the data collected by the sensors is transmitted securely to the central server, preventing unauthorized access or tampering.

4. Implementing AB SRTP

Implementing AB SRTP requires integrating the protocol into the real-time streaming application. The steps involved are as follows:

4.1 Design and Planning
Before implementing AB SRTP, it is essential to define the security requirements of the application, such as the desired level of encryption, authentication, and key exchange. The application's architecture should be designed accordingly to accommodate AB SRTP.

4.2 Integration with the Application
AB SRTP libraries and APIs should be integrated into the application's codebase.
These libraries provide functions and methods for encrypting and decrypting the media packets, generating and exchanging encryption keys, and verifying the integrity of the packets.

4.3 Configuration
After integrating AB SRTP into the application, the necessary configuration parameters should be set. This includes specifying the encryption algorithm, authentication mechanism, key exchange protocol, and other security-related parameters.

4.4 Testing and Deployment
Once the implementation and configuration are complete, the application should undergo rigorous testing to ensure that AB SRTP is functioning correctly. This includes testing the encryption, decryption, authentication, and key exchange functionalities. After successful testing, the application can be deployed in a production environment.

5. Comparison with Other Secure Transport Protocols

AB SRTP is similar to other secure transport protocols, such as Datagram Transport Layer Security (DTLS) and Transport Layer Security (TLS). However, there are some differences that set AB SRTP apart:

5.1 Real-time Media-oriented
AB SRTP is specifically designed for securing real-time media streams, such as voice and video communication. It focuses on providing low latency and high throughput, making it ideal for time-sensitive applications.

5.2 Easier Integration
Compared to DTLS and TLS, AB SRTP is relatively easy to integrate into real-time streaming applications. AB SRTP libraries and APIs are readily available, providing developers with the necessary tools for integrating security features into their applications.

5.3 Forward Secrecy
AB SRTP supports forward secrecy by default, whereas DTLS and TLS provide it only when an ephemeral key exchange (such as DHE or ECDHE) is negotiated. This ensures that even if the long-term encryption keys are compromised, previously exchanged media packets cannot be decrypted by attackers.

6. Conclusion

AB SRTP is a cryptographic protocol that provides secure communication for real-time streaming applications.
It offers encryption, authentication, key exchange, and forward secrecy features, ensuring the confidentiality, integrity, and authenticity of the transmitted media packets.

By implementing AB SRTP, developers can enhance the security of real-time streaming applications, such as voice and video communication, secure file transfer, and IoT applications. Its ease of integration and support for forward secrecy make it a robust choice for securing real-time media streams.

In conclusion, AB SRTP is a valuable tool for ensuring the privacy and security of real-time streaming applications, and its usage will continue to grow as the need for secure communication becomes increasingly important in our digitally connected world.
Network Working Group                                         J. Salowey
Request for Comments: 5288                                  A. Choudhury
Category: Standards Track                                      D. McGrew
                                                     Cisco Systems, Inc.
                                                             August 2008

           AES Galois Counter Mode (GCM) Cipher Suites for TLS

Status of This Memo

   This document specifies an Internet standards track protocol for the
   Internet community, and requests discussion and suggestions for
   improvements.  Please refer to the current edition of the "Internet
   Official Protocol Standards" (STD 1) for the standardization state
   and status of this protocol.  Distribution of this memo is unlimited.

Abstract

   This memo describes the use of the Advanced Encryption Standard (AES)
   in Galois/Counter Mode (GCM) as a Transport Layer Security (TLS)
   authenticated encryption operation.  GCM provides both
   confidentiality and data origin authentication, can be efficiently
   implemented in hardware for speeds of 10 gigabits per second and
   above, and is also well-suited to software implementations.  This
   memo defines TLS cipher suites that use AES-GCM with RSA, DSA, and
   Diffie-Hellman-based key exchange mechanisms.

Table of Contents

   1. Introduction
   2. Conventions Used in This Document
   3. AES-GCM Cipher Suites
   4. TLS Versions
   5. IANA Considerations
   6. Security Considerations
      6.1. Counter Reuse
      6.2. Recommendations for Multiple Encryption Processors
   7. Acknowledgements
   8. References
      8.1. Normative References
      8.2. Informative References

Salowey, et al.              Standards Track                    [Page 1]

1.
Introduction

   This document describes the use of AES [AES] in Galois Counter Mode
   (GCM) [GCM] (AES-GCM) with various key exchange mechanisms as a
   cipher suite for TLS.  AES-GCM is an authenticated encryption with
   associated data (AEAD) cipher (as defined in TLS 1.2 [RFC5246])
   providing both confidentiality and data origin authentication.  The
   following sections define cipher suites based on RSA, DSA, and
   Diffie-Hellman key exchanges; ECC-based (Elliptic Curve Cryptography)
   cipher suites are defined in a separate document [RFC5289].

   AES-GCM is not only efficient and secure, but hardware
   implementations can achieve high speeds with low cost and low
   latency, because the mode can be pipelined.  Applications that
   require high data throughput can benefit from these high-speed
   implementations.  AES-GCM has been specified as a mode that can be
   used with IPsec ESP [RFC4106] and 802.1AE Media Access Control (MAC)
   Security [IEEE8021AE].

2. Conventions Used in This Document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

3.
AES-GCM Cipher Suites

   The following cipher suites use the new authenticated encryption
   modes defined in TLS 1.2 with AES in Galois Counter Mode (GCM) [GCM]:

      CipherSuite TLS_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0x9C}
      CipherSuite TLS_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0x9D}
      CipherSuite TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0x9E}
      CipherSuite TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0x9F}
      CipherSuite TLS_DH_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0xA0}
      CipherSuite TLS_DH_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0xA1}
      CipherSuite TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 = {0x00,0xA2}
      CipherSuite TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 = {0x00,0xA3}
      CipherSuite TLS_DH_DSS_WITH_AES_128_GCM_SHA256 = {0x00,0xA4}
      CipherSuite TLS_DH_DSS_WITH_AES_256_GCM_SHA384 = {0x00,0xA5}
      CipherSuite TLS_DH_anon_WITH_AES_128_GCM_SHA256 = {0x00,0xA6}
      CipherSuite TLS_DH_anon_WITH_AES_256_GCM_SHA384 = {0x00,0xA7}

   These cipher suites use the AES-GCM authenticated encryption with
   associated data (AEAD) algorithms AEAD_AES_128_GCM and
   AEAD_AES_256_GCM described in [RFC5116].  Note that each of these
   AEAD algorithms uses a 128-bit authentication tag with GCM (in
   particular, as described in Section 3.5 of [RFC4366], the
   "truncated_hmac" extension does not have an effect on cipher suites
   that do not use HMAC).  The "nonce" SHALL be 12 bytes long,
   consisting of two parts as follows (this is an example of a
   "partially explicit" nonce; see Section 3.2.1 in [RFC5116]):

      struct {
         opaque salt[4];
         opaque nonce_explicit[8];
      } GCMNonce;

   The salt is the "implicit" part of the nonce and is not sent in the
   packet.  Instead, the salt is generated as part of the handshake
   process: it is either the client_write_IV (when the client is
   sending) or the server_write_IV (when the server is sending).  The
   salt length (SecurityParameters.fixed_iv_length) is 4 octets.

   The nonce_explicit is the "explicit" part of the nonce.
   It is chosen by the sender and is carried in each TLS record in the
   GenericAEADCipher.nonce_explicit field.  The nonce_explicit length
   (SecurityParameters.record_iv_length) is 8 octets.

   Each value of the nonce_explicit MUST be distinct for each distinct
   invocation of the GCM encrypt function for any fixed key.  Failure to
   meet this uniqueness requirement can significantly degrade security.
   The nonce_explicit MAY be the 64-bit sequence number.

   The RSA, DHE_RSA, DH_RSA, DHE_DSS, DH_DSS, and DH_anon key exchanges
   are performed as defined in [RFC5246].

   The Pseudo Random Function (PRF) algorithms SHALL be as follows:

      For cipher suites ending with _SHA256, the PRF is the TLS PRF
      [RFC5246] with SHA-256 as the hash function.

      For cipher suites ending with _SHA384, the PRF is the TLS PRF
      [RFC5246] with SHA-384 as the hash function.

   Implementations MUST send TLS Alert bad_record_mac for all types of
   failures encountered in processing the AES-GCM algorithm.

4. TLS Versions

   These cipher suites make use of the authenticated encryption with
   additional data defined in TLS 1.2 [RFC5246].  They MUST NOT be
   negotiated in older versions of TLS.  Clients MUST NOT offer these
   cipher suites if they do not offer TLS 1.2 or later.  Servers that
   select an earlier version of TLS MUST NOT select one of these cipher
   suites.  Because TLS has no way for the client to indicate that it
   supports TLS 1.2 but not earlier, a non-compliant server might
   potentially negotiate TLS 1.1 or earlier and select one of the cipher
   suites in this document.  Clients MUST check the TLS version and
   generate a fatal "illegal_parameter" alert if they detect an
   incorrect version.

5.
IANA Considerations

   IANA has assigned the following values for the cipher suites defined
   in this document:

      CipherSuite TLS_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0x9C}
      CipherSuite TLS_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0x9D}
      CipherSuite TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0x9E}
      CipherSuite TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0x9F}
      CipherSuite TLS_DH_RSA_WITH_AES_128_GCM_SHA256 = {0x00,0xA0}
      CipherSuite TLS_DH_RSA_WITH_AES_256_GCM_SHA384 = {0x00,0xA1}
      CipherSuite TLS_DHE_DSS_WITH_AES_128_GCM_SHA256 = {0x00,0xA2}
      CipherSuite TLS_DHE_DSS_WITH_AES_256_GCM_SHA384 = {0x00,0xA3}
      CipherSuite TLS_DH_DSS_WITH_AES_128_GCM_SHA256 = {0x00,0xA4}
      CipherSuite TLS_DH_DSS_WITH_AES_256_GCM_SHA384 = {0x00,0xA5}
      CipherSuite TLS_DH_anon_WITH_AES_128_GCM_SHA256 = {0x00,0xA6}
      CipherSuite TLS_DH_anon_WITH_AES_256_GCM_SHA384 = {0x00,0xA7}

6. Security Considerations

   The security considerations in [RFC5246] apply to this document as
   well.  The remainder of this section describes security
   considerations specific to the cipher suites described in this
   document.

6.1. Counter Reuse

   AES-GCM security requires that the counter is never reused.  The IV
   construction in Section 3 is designed to prevent counter reuse.
   Implementers should also understand the practical considerations of
   IV handling outlined in Section 9 of [GCM].

6.2. Recommendations for Multiple Encryption Processors

   If multiple cryptographic processors are in use by the sender, then
   the sender MUST ensure that, for a particular key, each value of the
   nonce_explicit used with that key is distinct.  In this case, each
   encryption processor SHOULD include, in the nonce_explicit, a fixed
   value that is distinct for each processor.  The recommended format is

      nonce_explicit = FixedDistinct || Variable

   where the FixedDistinct field is distinct for each encryption
   processor, but is fixed for a given processor, and the Variable field
   is distinct for each distinct nonce used by a particular encryption
   processor.
   When this method is used, the FixedDistinct fields used
   by the different processors MUST have the same length.

   In the terms of Figure 2 in [RFC5116], the Salt is the Fixed-Common
   part of the nonce (it is fixed, and it is common across all
   encryption processors), the FixedDistinct field exactly corresponds
   to the Fixed-Distinct field, the Variable field corresponds to the
   Counter field, and the explicit part exactly corresponds to the
   nonce_explicit.

   For clarity, we provide an example for TLS in which there are two
   distinct encryption processors, each of which uses a one-byte
   FixedDistinct field:

      Salt          = eedc68dc
      FixedDistinct = 01 (for the first encryption processor)
      FixedDistinct = 02 (for the second encryption processor)

   The GCMNonces generated by the first encryption processor, and their
   corresponding nonce_explicit, are:

      GCMNonce                  nonce_explicit
      ------------------------  ----------------
      eedc68dc0100000000000000  0100000000000000
      eedc68dc0100000000000001  0100000000000001
      eedc68dc0100000000000002  0100000000000002
      ...

   The GCMNonces generated by the second encryption processor, and their
   corresponding nonce_explicit, are:

      GCMNonce                  nonce_explicit
      ------------------------  ----------------
      eedc68dc0200000000000000  0200000000000000
      eedc68dc0200000000000001  0200000000000001
      eedc68dc0200000000000002  0200000000000002
      ...

7. Acknowledgements

   This document borrows heavily from [RFC5289].  The authors would like
   to thank Alex Lam, Simon Josefsson, and Pasi Eronen for providing
   useful comments during the review of this document.

8. References

8.1.
Normative References

   [AES]        National Institute of Standards and Technology,
                "Advanced Encryption Standard (AES)", FIPS 197,
                November 2001.

   [GCM]        Dworkin, M., "Recommendation for Block Cipher Modes of
                Operation: Galois/Counter Mode (GCM) and GMAC",
                National Institute of Standards and Technology
                SP 800-38D, November 2007.

   [RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate
                Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC5116]    McGrew, D., "An Interface and Algorithms for
                Authenticated Encryption", RFC 5116, January 2008.

   [RFC5246]    Dierks, T. and E. Rescorla, "The Transport Layer
                Security (TLS) Protocol Version 1.2", RFC 5246,
                August 2008.

8.2. Informative References

   [IEEE8021AE] Institute of Electrical and Electronics Engineers,
                "Media Access Control Security", IEEE Standard 802.1AE,
                August 2006.

   [RFC4106]    Viega, J. and D. McGrew, "The Use of Galois/Counter
                Mode (GCM) in IPsec Encapsulating Security Payload
                (ESP)", RFC 4106, June 2005.

   [RFC4366]    Blake-Wilson, S., Nystrom, M., Hopwood, D., Mikkelsen,
                J., and T. Wright, "Transport Layer Security (TLS)
                Extensions", RFC 4366, April 2006.

   [RFC5289]    Rescorla, E., "TLS Elliptic Curve Cipher Suites with
                SHA-256/384 and AES Galois Counter Mode", RFC 5289,
                August 2008.

Authors' Addresses

   Joseph Salowey
   Cisco Systems, Inc.
   2901 3rd. Ave
   Seattle, WA 98121
   USA
   EMail: jsalowey@

   Abhijit Choudhury
   Cisco Systems, Inc.
   3625 Cisco Way
   San Jose, CA 95134
   USA
   EMail: abhijitc@

   David McGrew
   Cisco Systems, Inc.
   170 W Tasman Drive
   San Jose, CA 95134
   USA
   EMail: mcgrew@

Salowey, et al.
Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   /ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@.
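The GCMNonce layout from Section 3 of the RFC above, together with the per-processor FixedDistinct scheme from Section 6.2, can be sketched as follows. The salt value eedc68dc and the one-byte FixedDistinct values 01 and 02 are the example values from the text.

```python
def gcm_nonce(salt: bytes, nonce_explicit: bytes) -> bytes:
    """RFC 5288 Section 3: nonce = 4-byte implicit salt || 8-byte explicit part."""
    assert len(salt) == 4 and len(nonce_explicit) == 8
    return salt + nonce_explicit

def explicit_part(fixed_distinct: int, counter: int) -> bytes:
    """Section 6.2 layout: FixedDistinct (one byte here) || Variable (7 bytes)."""
    return fixed_distinct.to_bytes(1, "big") + counter.to_bytes(7, "big")

salt = bytes.fromhex("eedc68dc")  # the implicit part, e.g. client_write_IV

# First encryption processor (FixedDistinct = 01), counters 0 and 1:
print(gcm_nonce(salt, explicit_part(0x01, 0)).hex())  # eedc68dc0100000000000000
print(gcm_nonce(salt, explicit_part(0x01, 1)).hex())  # eedc68dc0100000000000001

# Second encryption processor (FixedDistinct = 02):
print(gcm_nonce(salt, explicit_part(0x02, 0)).hex())  # eedc68dc0200000000000000
```

Because the FixedDistinct byte differs between processors and each processor only increments its own Variable counter, no two nonces can collide under the same key, which is exactly the uniqueness requirement of Section 3.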
An English essay on information security

Title: Ensuring Information Security in the Digital Age
In today's interconnected world, information security has become paramount. With the exponential growth of digital technologies, the risk of cyber threats and data breaches has increased significantly. Therefore, it is crucial to implement robust measures to safeguard sensitive information and maintain the integrity of data. In this essay, we will explore the importance of information security and discuss effective strategies for its preservation.

Firstly, information security is essential for protecting personal privacy and confidential data. In the digital age, individuals and organizations store vast amounts of sensitive information online, including financial records, personal identifiers, and proprietary business data. Without adequate security measures in place, this information is vulnerable to unauthorized access, theft, and exploitation by cybercriminals. Consequently, ensuring the confidentiality and privacy of data is paramount to maintaining trust and credibility in the digital realm.

Secondly, information security is critical for safeguarding against cyber threats and attacks. Cybercriminals employ various tactics, such as malware, phishing, and social engineering, to infiltrate networks, steal data, and disrupt operations. These attacks can have severe consequences, ranging from financial losses to reputational damage. Therefore, organizations must implement robust cybersecurity protocols to detect, prevent, and mitigate potential threats effectively. This includes deploying firewalls, encryption techniques, and intrusion detection systems to fortify digital defenses and thwart malicious activities.

Moreover, information security is essential for ensuring the integrity and authenticity of data. In today's digital landscape, the proliferation of fake news and misinformation poses a significant challenge to public discourse and societal trust.
By tampering with data or spreading false information, malicious actors can manipulate public opinion, undermine democratic processes, and sow discord within communities. Therefore, it is imperative to establish mechanisms for verifying the accuracy and reliability of information, including digital signatures, cryptographic hash functions, and blockchain technology.

Furthermore, information security plays a vital role in promoting trust and confidence in online transactions and e-commerce. With the rise of online shopping and electronic payments, consumers expect their personal and financial information to be protected from unauthorized access and fraud. Therefore, businesses must prioritize security measures, such as secure socket layer (SSL) encryption, multi-factor authentication, and tokenization, to secure transactions and safeguard customer data. By fostering a secure online environment, businesses can enhance customer trust, drive sales, and foster long-term relationships with their clientele.

In conclusion, information security is a critical component of the digital age, encompassing measures to protect privacy, defend against cyber threats, preserve data integrity, and secure online transactions. In an era of rapid technological advancement and digital innovation, the importance of information security cannot be overstated. By implementing robust security measures and adopting best practices, individuals and organizations can mitigate risks, safeguard sensitive information, and build trust in the digital realm. Ultimately, a proactive approach to information security is essential for navigating the complexities of the digital landscape and ensuring a safe and secure online environment for all.
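The essay's point about verifying data integrity with cryptographic hash functions can be illustrated with a short sketch (a minimal illustration using Python's standard hashlib; the message content is invented, and note that a bare hash only detects tampering when the digest itself is delivered securely — against an active attacker a keyed construction such as HMAC, or a digital signature, is required):

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of data as hex."""
    return hashlib.sha256(data).hexdigest()

# Sender computes a digest of the message and transmits both.
message = b"Transfer $100 to account 12345"
sent_digest = digest(message)

# Receiver recomputes the digest; a mismatch reveals alteration.
assert digest(message) == sent_digest        # intact copy verifies
tampered = b"Transfer $900 to account 12345"
assert digest(tampered) != sent_digest       # altered copy does not
```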
The following paper was originally published in the Proceedings of the Second USENIX Workshop on Electronic Commerce, Oakland, California, November 1996.

Verifying Cryptographic Protocols for Electronic Commerce

Randall W. Lichota, Hughes
Grace L. Hammonds, AGCS, Inc.
Stephen H. Brackin, Arca Systems, Inc.

For more information about the USENIX Association contact:
1. Phone: 510 528-8649
2. FAX: 510 548-5738
3. Email: office@
4. WWW URL:

Verifying Cryptographic Protocols for Electronic Commerce

Dr. Randall W. Lichota
Hughes Technical Services Company, P.O. Box 3310, Fullerton, CA 92834-3310
lichota@

Dr. Grace L. Hammonds
AGCS, Inc., 91 Montvale Avenue, Stoneham, MA 02180-3616
hammonds@

Dr. Stephen H. Brackin
Arca Systems, Inc., 303 E. Yates St., Ithaca, NY 14850
Brackin@

ABSTRACT

This paper describes the Convince toolset for detecting common errors in cryptographic protocols, protocols of the sort used in electronic commerce. We describe using Convince to analyze confidentiality, authentication, and key distribution in a recently developed protocol proposed for incorporation into a network bill-payment system, a public-key version of the Kerberos authentication protocol. Convince incorporates a “belief logic” formalism into a theorem-proving environment that automatically proves whether a protocol can meet its goals. Convince allows an analyst to model a protocol using a tool originally designed for Computer-Aided Software Engineering (CASE).

1.0 INTRODUCTION[1]

As electronic commerce on the Internet experiences explosive growth, so does the number of security protocols for safeguarding business transactions.
Almost without exception, these protocols use cryptography, in the form of symmetric- and/or public-key algorithms.[2]

[1] This work has been sponsored by the Air Force Materiel Command, Electronic Systems Center/Software Center (ESC/AXS), at Hanscom AFB, MA, and funded by Rome Laboratory, through contract numbers F19628-92-C-0006 and F19628-92-C-0008.

[2] Some of the more widely publicized protocols of this type include the Secure Sockets Layer (SSL), Secure Hypertext Transfer Protocol (S-HTTP), Private Communications Technology (PCT), and Secure Electronic Payment Protocol (SEPP). [BERN96]

Using encryption does not guarantee protection, though. A protocol must be free of flaws that an electronic thief can exploit. Through such devices as clever replays and modifications of messages, legitimate parties to a protocol can be tricked into thinking they are communicating with each other when they are actually communicating with the thief.

While the use of formal methods does not necessarily result in detection of all such flaws, it increases the level of confidence in protocols for electronic commerce. This paper describes an automated toolset, Convince, that facilitates the analysis of cryptographic protocols by systematically checking a number of their essential security properties.

In general, cryptographic protocols use encryption to protect the confidentiality and/or integrity of message data, and to verify the identity of (i.e., authenticate) one or more of the parties involved in message transfers. To confirm that each message transfer in a protocol performs its intended security functions, one must ask questions such as the following:

a. Can the sender be confident that the data being sent has the expected properties?
b. Can the sender and receiver be confident that the confidentiality and integrity of the data are preserved in transit?
c. Can the receiver be confident who sent the data?
d.
Can the sender later be confident that the intended party received the data sent?

Assuming that the cryptographic algorithms used are themselves relatively “safe”,[3] the answers to these questions depend on whether the parties to the protocol can convince themselves that the protocol provides the necessary assurances.

During the past decade, researchers have developed belief logics [BUR90, GON90, SYV94] that formalize inferences about what protocol parties “can be confident” of regarding authentication properties of protocols.[4] Constructing formal proofs from a belief logic thus gives a means of testing whether a protocol serves its intended functions.

Convince incorporates a belief logic into a specialized automatic theorem-proving environment. In this environment, a protocol designer or analyst uses Computer-Aided Software Engineering (CASE) tools as a front end to a formal theorem prover. Convince makes the formal verification process similar to debugging software. An analyst creates a protocol model (the “code”), specifies its associated initial conditions and goals (identifies the “code’s” expected behavior), and makes incremental revisions to the model until the goals are either proved or the protocol is judged to be fatally flawed (the “code” executes correctly or is abandoned). Convince makes it possible to maximize the early detection of security-related design errors, without requiring a lot of theorem-proving expertise.

Convince’s CASE-based interface is implemented using Interactive Development Environments’ Software Through Pictures™ [IDE94a] system, which allows an analyst to model a protocol using a combination of familiar graphical and textual notations.
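The earlier point that encryption alone does not guarantee protection can be made concrete with a toy example. The sketch below uses a deliberately weak XOR stream cipher (chosen only because its malleability is easy to show; the message and keystream are invented): an attacker who guesses the message layout can change the plaintext by flipping ciphertext bits, without ever learning the key — exactly the kind of undetected modification the paper warns about.

```python
# Toy XOR "stream cipher": hides the plaintext from a casual observer,
# but is completely malleable, so it provides no integrity at all.
def xor_cipher(data: bytes, keystream: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, keystream))

keystream = bytes(range(1, 30))    # stand-in for a shared keystream
plaintext = b"PAY ALICE $0100"
ciphertext = xor_cipher(plaintext, keystream)

# Flipping ciphertext bits flips the same plaintext bits on decryption;
# here the attacker turns the amount's leading digit from '0' into '9'.
forged = bytearray(ciphertext)
forged[11] ^= ord('0') ^ ord('9')
tampered_plain = xor_cipher(bytes(forged), keystream)
assert tampered_plain == b"PAY ALICE $9100"
```

The receiver decrypts the forged message to a perfectly plausible plaintext, which is why protocols add integrity checks (hashes, MACs, signatures) on top of encryption.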
Convince’s proof process is implemented using a well-known Higher Order Logic (HOL) [GOR93] theorem prover.

[3] The strength of encryption algorithms is not covered by Convince.

[4] While the emphasis in belief logics is on authentication, their rules implicitly address basic aspects of confidentiality and integrity.

We used Convince to analyze aspects of confidentiality, authentication, and key distribution in a recently proposed public-key version of the Kerberos authentication protocol, which the remainder of this paper will refer to as PK Kerberos. The PK Kerberos protocol is a component of the NetBill system for secure electronic commerce between on-line customers and merchants of on-line goods (e.g., reports) [COX95]. This protocol is being proposed as an Internet standard [CHU96].

Within NetBill, PK Kerberos is used to establish the initial authentication between customer and merchant. Consequently, we examined this protocol from two points of view: whether it is secure for the purpose for which it is intended (providing authentication services for NetBill); and whether it is reasonable for use in more general contexts (as would be expected for an Internet standard).

This work is part of a series of efforts, begun under the Air Force’s Portable, Reusable, Integrated Software Modules (PRISM) program, to identify emerging technologies that are ready to be incorporated into ongoing Air Force programs. Convince development came after the review of a Rome Laboratory research prototype, the Romulus Verification Environment [ORA94]. This review clearly established the value of protocol analysis based on belief logic, but in order to effectively interact with Romulus, the user had to have specialized knowledge of its HOL-based theorem-proving environment. We quickly recognized that the effort needed to acquire this specialized knowledge would limit user acceptance.
We also considered other protocol analysis tools, described in Section 5, but each of these also had serious limitations.

The remainder of this paper is organized as follows: Section 2 gives an overview of Convince’s theoretical foundation, its belief logic; Section 3 gives an overview of Convince’s software components; Section 4 describes using Convince to model and analyze PK Kerberos; Section 5 gives an overview of related work; and Section 6 gives our conclusions and recommendations for future work.

2.0 CONVINCE’S BELIEF LOGIC

Like all other belief logics, the Convince belief logic grew out of the BAN logic developed by Burrows, Abadi, and Needham [BUR90]. In the BAN logic, an authentication protocol is transformed into a sequence of logical statements that are then analyzed.

Gong, Needham, and Yahalom developed another belief logic, the GNY logic, based on BAN but expressed at a lower level of abstraction [GON90]. This makes it able to identify a somewhat larger class of protocol flaws.

Gong then discovered that it is possible to specify and “verify” protocols, using the original GNY logic, that are impossible or unreasonable to implement, resulting in situations where the causality of beliefs is not preserved [GON91]. He developed conditions for excluding these “infeasible” protocols.

The Romulus prototype [ORA94] implemented part of the GNY logic, in HOL, and implemented Gong’s refinement to the original GNY logic.

Brackin [BRA96a] subsequently developed a HOL implementation of the full GNY logic, including Gong’s refinement, and developed logics extending this logic. One of these extensions, called BGNY, is the foundation for the Convince toolset.
It covers protocols using symmetric- and public-key encryption, ordinary and key-dependent hash codes, key-exchange algorithms, multiple encryption and hash algorithms, and protocols using hash codes as keys.

At a high level, BGNY is a set of rules identifying the conditions under which protocol participants can obtain data and draw conclusions about this data and other protocol participants. While most of the BGNY rules are based on GNY, there are omissions, additions, and modifications. The omitted rules reflect making more restrictive use of the concepts of “conveyance” and “trust” (see Table 1 below). The new and modified rules implement extensions to the GNY logic, remove unnecessary restrictions in the GNY logic, and correct errors in the GNY logic [BRA96a].

The following informal descriptions of sample BGNY rules show how a principal B can obtain data sent in encrypted form:

Rule P1: If B receives a message M, then B possesses M.

Rule P4: If B possesses a decryption algorithm and a key, then B possesses the result of applying this decryption algorithm, with this key, to any message it possesses.

Rule P7: If B possesses the result of applying a decryption function, with a key, to a message encrypted with the corresponding encryption function and key, then B possesses the decrypted message.

While a complete description of the BGNY logic is beyond the scope of this paper, Table 1 lists the logical statements and symbols used in the discussions that follow. These constructs are part of Convince’s Intermediate Specification Language (ISL) [BRA97].
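As a rough illustration (not Convince's actual HOL implementation), possession rules like P1, P4, and P7 can be read as a closure computation over a principal's possession set: start from what was received and repeatedly add whatever the held keys can decrypt. The sketch below models this with tuples and a set; the message encoding and key names are hypothetical.

```python
# Toy possession inference in the spirit of BGNY rules P1, P4, and P7.
# An encrypted message is modeled as ("enc", needed_key, payload), where
# needed_key names the key required to decrypt it.

def possessions(received, keys):
    """Everything a principal possesses: received items (P1), closed
    under decryption with held keys (P4/P7)."""
    possessed = set(received)                    # Rule P1
    changed = True
    while changed:
        changed = False
        for item in list(possessed):
            if isinstance(item, tuple) and item[0] == "enc":
                _, needed_key, payload = item
                if needed_key in keys and payload not in possessed:
                    possessed.add(payload)       # Rules P4/P7
                    changed = True
    return possessed

# B receives Kr encrypted so that only B's private key opens it:
# holding that key, B comes to possess Kr.
b_has = possessions(received=[("enc", "^PKB", "Kr")], keys={"^PKB"})
assert "Kr" in b_has

# An eavesdropper without the key possesses only the opaque ciphertext.
eve_has = possessions(received=[("enc", "^PKB", "Kr")], keys=set())
assert "Kr" not in eve_has
```

The fixed-point loop mirrors how a prover repeatedly applies inference rules until no new facts emerge; the real BGNY logic, of course, also tracks beliefs, freshness, and trust, not just possession.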
ISL is used to describe protocols and their expected authentication properties, as well as their principals and these principals’ initial conditions.

3.0 CONVINCE SOFTWARE

Three major software tools lie at the heart of Convince: the Software through Pictures™ (StP), version 2.0, Object Modeling Tool (OMT); a Higher Order Logic (HOL) theorem prover; and a translator, based on LEX and YACC, to convert ISL specifications into HOL specifications.[5] Figure 1 depicts the process, and the data flow between software components, when a user analyzes a protocol. The dashed lines show where user input is required. As the figure indicates, once the protocol is specified, most of the remaining work is done automatically.

[5] The Convince components are hosted on Sun SPARCstation platforms, running SunOS 4.1.3.

From a textual or other description of the protocol, the user creates a model (a high-level representation) under StP/OMT. This model identifies the important attributes of the principals, messages, and encryption services (e.g., keys and other parameters) used in the protocol.

From this model, Convince generates an ISL specification, which provides a representation of all the defined elements of the protocol. Convince translates the ISL representation into an internal HOL specification, processes the HOL specification to create a HOL theory of the protocol, and executes a set of functions that automatically make deductions in this theory from the rules in the BGNY logic.

Convince produces screen output telling whether it proved all the goals. If it cannot automatically prove a goal, Convince displays the goal to the user and terminates its theorem-proving process. In this case, it …

3.1 StP/OMT COMPONENT

… transfer occurs. Dynamic models are used to depict the state of each principal between message transfers.

In order to completely describe the properties of authentication protocols, we had to extend the notation provided by OMT. We did this primarily by using annotations.
An annotation represents additional protocol information that is associated with StP/OMT model elements. The model elements that require annotations include principals, message transfers, context objects, and states.

An annotation associated with a principal denotes the name to use for the principal in message descriptions, initial conditions, and goal statements. This allows one to use longer, more descriptive principal names in OMT diagrams, while using shorter, equivalent names in formulas.

For message transfers, annotations represent the structure of messages conveyed between principals.

Annotations associated with context objects represent definitions of cryptographic and hash functions, keys, principal names, and other variables (e.g., timestamps and nonces).

Annotations associated with a principal's state correspond to ISL statements. In the case of start states, annotations represent initial conditions assumed to be true at the start of a protocol execution. Because they represent initial conditions, these annotations are limited to statements constructed from the Received and Believes operators. Annotations associated with other states (i.e., intermediate and end states) are not so restricted; these may be composed from any of the ISL statements.

Examples of an Event Trace diagram, Dynamic Model, and associated annotations are given in Figures 2 and 3.

3.2 LEX/YACC

LEX and YACC are standard UNIX utilities used to implement a parser to convert formats inside Convince. ISL is a textual language whose syntax is a superset of the annotation syntax employed under StP/OMT. ISL specifications are generated from a Convince model via a simple command option invoking the parser. ISL specifications have four major sections:

1. A set of definitions for certain data types, including principals, algorithms, and keys;
2. A set of initial conditions, indicating data items and beliefs of principals;
3. A sequence of message transfers denoting the protocol steps, or stages; and
4.
A set of goal conditions showing what the protocol should achieve from the point of view of the principals.

Table 1. Elements of ISL Syntax and Semantics

Goal conditions are numbered according to the transfer stages defined in the protocol model. The number of a goal condition is the stage of the message transfer expected to cause the goal to become true.

An example of a complete ISL specification for PK Kerberos is given in Appendix A.

In Convince, verification of an authentication protocol uses Higher Order Logic (HOL). This necessitates translating the ISL specification into a HOL internal form prior to the actual proof process. The LEX/YACC translator makes this translation. It produces HOL code that defines a theory of the protocol and invokes the automatic proof process.

3.3 HOL COMPONENT

The core of Convince is the Higher Order Logic (HOL) implementation of the BGNY logic together with a proof procedure that automates the construction of proofs in this logic. The proof procedure checks whether the protocol’s goals follow from the protocol’s definition and the rules of the BGNY logic. If a goal’s proof fails, the problem might be an error in the initial assumptions, an overly ambitious goal, or a security flaw in the protocol. Convince’s output files listing proved and unproved goals and subgoals, in ISL, help identify the cause of proof failure.

4.0 EXAMPLE: PK KERBEROS

To illustrate how Convince can be used to model and analyze cryptographic protocols that support electronic commerce, we provide the example of PK Kerberos [CHU96], a public-key version of the Kerberos authentication protocol [STE88].

All versions of Kerberos seek to establish secure communication between two parties while maintaining confidentiality and data integrity and detecting masquerading and replays. In earlier versions of Kerberos, a centralized Key Distribution Center (KDC) authenticates a user through symmetric-key encryption, then gives this user a shared key for subsequent communications with other parties. This makes the KDC a potential bottleneck in the system, as well as a single point of failure that could disrupt the entire system if compromised.

Figure 2. Event Trace Diagram

PK Kerberos attempts to overcome this weakness by employing Public Key Certificates based on the X.509 standard [CCI88].[6] After the initial authentication, PK Kerberos continues as Kerberos does, with the exchange of symmetric keys to be used for later communication.

[6] Full implementation of these certificates will later involve an infrastructure to support the creation and initial distribution of these certificates, but they are available today for both public and private users.

Figure 3. Dynamic Model and Annotations

4.1 PK KERBEROS PROTOCOL

The PK Kerberos protocol involves three parties: a client C, a server S, and a certificate authority CA.[7] Initially, C requests S's public-key certificate from CA. In a series of message exchanges, C receives S’s public key from CA, then, using this public key along with its own private key, requests and obtains a symmetric key for later use. By the end of the exchange, both C and S can believe that they have correctly identified each other, using certificates that they trust, and that the key they share is known only to themselves.

In the model of PK Kerberos shown below, we have excluded certain fields that would normally be present in the protocol and in X.509 certificates: message IDs; encryption, signature, and message-digest algorithms; version numbers; compromise key lists; and certificate serial numbers. While these fields are needed for an implementation, they are not relevant for determining the security properties of interest, i.e., confidentiality, integrity, and authentication. In another simplification, we leave out the validity periods for keys, assuming that the protocol is running when the keys are valid.
The protocol’s description uses the following terms, along with the BGNY/ISL notation in Table 1:

C            Client
S            Server
CA           Certification Authority
CertificateX Public-key certificate of X, defined below, signed by an authorized Certification Authority
Ts#          Time stamp number #; Ts1 is also a proxy for a current validity interval
Kr           Symmetric key to be used as a one-time session key
Kcs          Symmetric key to be used as a long-term session key
Ks           Symmetric key known only to S and used to protect tickets
PKC, PKS     Public keys for C and S
^PKC, ^PKS   Private keys for C and S
MD5          Hashing algorithm
rsa, des     Public- and symmetric-key encryption/decryption algorithms

[7] The inclusion of the CA is optional; the source for S’s public-key certificate could be S itself. For the purpose of this analysis, we use CA as both the repository for certificates and the authority that verifies their integrity. This option allows us to explore issues of levels of trust, with CA having the highest level.

“authdata” is defined as data used to help authenticate C to S:

authdata = S, CertificateS, Ts1, Kr

The public-key certificate for a principal X is defined as follows:

CertificateX = CA, Ts#, X, PKX, {H(CA, Ts#, X, PKX)}rsa(^PKCA)

“CA” is the certification authority for the certificate; CA serves as the certificate repository in our model. The transactions in PK Kerberos are as follows:

1. C requests S’s public-key certificate; C could request it directly from S, but in our model asks CA.

2. C receives the requested public-key certificate.

3. C uses S’s public key to encrypt a new temporary symmetric key, Kr, for one-time use by S, along with C’s own public-key certificate and a signature created by encrypting the hash of Kr along with S’s public-key certificate and a timestamp. The ISL statement associated with this signature asserts that C believes Kr will be known only to itself and S.

4. S decrypts the message to obtain Kr, and checks the signature to confirm that Kr came from the C named in the enclosed certificate.
S creates a long-term symmetric key, Kcs, for itself and C, and sends it, encrypted under Kr, back to C. S also sends a “ticket” with Kcs, C's name, a timestamp, and possibly other security information not shown in the model (e.g., file access rights). S encrypts this “ticket” with Ks; C is to return this encrypted ticket when making later requests from S.

5. C returns a timestamp encrypted with Kcs to confirm that it received Kcs. C also returns the encrypted ticket for additional validation.

4.2 INITIAL CONDITIONS AND GOALS

The initial conditions for this protocol consist of all the received items and beliefs that the analyst assumes are held by the principals at the start of the protocol. Typical initial conditions are that the principals hold their own public and private keys, and that they trust the appropriate authority that dispenses these keys. A complete list is included in Appendix A.

Goal conditions should express the underlying purpose of the protocol’s exchanges, such as that the principals believe they each possess a common symmetric key. The following shows the major goals for PK Kerberos. The numbers represent the protocol stages after which the associated goals should be true.

2. C Believes PublicKey S rsa PKS;

3. S Possesses Kr;
   S Believes
   (SharedSecret C S Kr;
    C Possesses Kr;
    C Believes SharedSecret C S Kr);

4. C Possesses Kcs;
   C Believes
   (SharedSecret C S Kcs;
    S Possesses Kcs;
    S Believes SharedSecret C S Kcs);

5. S Believes
   (C Possesses Kcs;
    C Believes SharedSecret C S Kcs;
    SharedSecret C S Kr;
    C Possesses Kr;
    C Believes SharedSecret C S Kr);

After the second transaction, for instance, C should have reason to believe that it has a bona-fide public key for S. By the third transaction, S should possess the session key (Kr) that it believes is a shared secret between itself and C.
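The certificate construction defined in Section 4.1 (the certificate fields plus a hash of those fields signed by the CA) can be prototyped in a few lines. The protocol names MD5 and rsa; since a real RSA signature would obscure the structure, the sketch below substitutes a keyed MD5 HMAC for the CA's signature, and all key material and field values are invented for illustration.

```python
import hashlib, hmac

# Stand-in for {H(CA, Ts#, X, PKX)}rsa(^PKCA): in the protocol the CA
# signs the MD5 hash with its RSA private key; a keyed MD5 HMAC plays
# that role here so the field/hash structure stays visible.
CA_SIGNING_KEY = b"ca-private-key-stand-in"

def make_certificate(ca: str, ts: str, x: str, pk_x: str) -> dict:
    fields = f"{ca},{ts},{x},{pk_x}".encode()
    sig = hmac.new(CA_SIGNING_KEY, fields, hashlib.md5).hexdigest()
    return {"CA": ca, "Ts": ts, "X": x, "PKX": pk_x, "sig": sig}

def verify_certificate(cert: dict) -> bool:
    fields = f"{cert['CA']},{cert['Ts']},{cert['X']},{cert['PKX']}".encode()
    expected = hmac.new(CA_SIGNING_KEY, fields, hashlib.md5).hexdigest()
    return hmac.compare_digest(cert["sig"], expected)

# CA issues S's certificate; C can verify it binds S to PKS.
cert_s = make_certificate("CA", "Ts1", "S", "PKS")
assert verify_certificate(cert_s)

# Substituting an attacker's public key invalidates the signature.
cert_s["PKX"] = "PK-attacker"
assert not verify_certificate(cert_s)
```

This is the property goal 2 relies on: a verified certificate gives C reason to believe PKS really is S's public key.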
By the last step, 5, S should believe that C holds the shared symmetric key Kcs.

4.3 CONVERTING DESCRIPTIONS TO StP

From a description of the protocol, usually text, the user creates a protocol model by first defining the protocol elements within StP/OMT.

The user then constructs a Use Case diagram and associates it with a specific protocol scenario. In our example, we model only a single scenario, shown in the Event Trace diagram in Figure 2. StP’s Event Trace editor automatically provides a context object, here labeled as PK Protocol. The user next adds the vertical bars representing the principals, and labels them accordingly. The user adds a set of directed line segments to denote the message transfers that occur as part of the protocol scenario, and labels each message transfer with a text string denoting the nature of the message (e.g., “request for public key”) and the stage of the protocol at which the transfer occurs.

After completing the Event Trace diagram, the user constructs a Dynamic Model for each of the principals. As shown in Figure 3, the Dynamic Model for S in PK Kerberos is a state transition sequence. The start state is represented as a solid circle, the intermediate state as a rounded rectangle, and the end state as a bull’s eye. Transitions between states are represented by directed lines whose labels denote the received events responsible for triggering the transitions. Message transfers that are produced by the principal are represented as output events. These are associated with directed lines connecting a state transition to the principal who is the recipient of the message. Generally speaking, the start state of a Dynamic Model corresponds to a subset of the initial conditions for the protocol. Accordingly, for each start state, the user provides annotations that represent the initial conditions of the corresponding principal. In Convince, these conditions are limited to statements of belief or reception.
The initial conditions of principal S are shown as a sequence of ISL statements at the bottom of Figure 3.

After adding the initial conditions to the model, the user provides annotations for the intermediate and end states. These annotations represent goals for the protocol (e.g., C and S share a certain symmetric key), which should become true once the protocol reaches a specific state.

4.4 CONVERTING StP TO ISL

Once the initial conditions, transactions, and goals have been input, the user directs Convince to convert the model to an ISL specification, then invoke the translation and proof processes. This is done with a single menu selection from StP/OMT. The full ISL specification for PK Kerberos is given in Appendix A.

The LEX/YACC and HOL subsystems of Convince can be used without Convince’s StP interface. To do so, the user prepares an ISL specification directly, as a text file, and gives the name of this file as a command-line argument to the LEX/YACC translator, which invokes the proof process.

4.5 RUNNING THE VERIFIER

Convince attempts to verify a model by proving that it meets both its user-specified goals and a standard set of goals, originally derived from the GNY logic, that encompass all protocol properties that are typically of interest [BRA96b].

During the first few iterations of creating or modifying a protocol model and seeing if Convince proves that it meets its goals, proof failures will typically result from insufficient initial conditions, such as a principal not possessing a needed algorithm. This was the case with our analysis of PK Kerberos. Insufficient initial conditions relating to possession often result in protocol feasibility failures (i.e., a principal attempting to send something it does not possess) [GON91].[8]

In PK Kerberos, the most significant proof failure due to an insufficient initial condition involved S’s having to trust C to create the temporary symmetric key Kr.
Our original model did not include this condition, and the proof failed at the subgoal of S believing Kr is a shared secret. Even though this key is for one-time use, production of weak or guessable keys by C could cause vulnerabilities in the protocol. Within the context of NetBill, C will be executing software with a predefined algorithm for creating these temporary keys, which is expected to limit their vulnerability. In more general contexts, this assumption should be examined closely.

Problems due to insufficient initial conditions are generally easy to correct once the reason for proof failure is identified. Convince’s output files giving lists of proved and unproved standard goals, and their proved and unproved subgoals, are useful for this purpose. It should be noted, however, that some initial conditions might impose constraints on an implementation that are unacceptable.

In addition to problems that result from insufficient initial conditions, proof failures can result from inadequate or inappropriate associations of properties, expressed via ISL statements, with messages. As a rule of thumb, encrypted messages used to convey keys that are shared secrets should include an associated statement expressing this fact.

We call the types of errors noted above setup errors because they are due to the specific form of the model being constructed and do not necessarily show flaws in the protocol itself. Similarly, apparently redundant information in a protocol, which we found in the PK Kerberos example, might not cause security flaws.

In translating the English descriptions of the PK Kerberos example into ISL, we uncovered a particular aspect of the protocol that demonstrated the need for one of our extensions to the GNY logic.
In stage 4, S sends out an encrypted copy of a ticket that only S can decrypt, along with the same and more information in a form that is readable by C. In stage 5, C uses the information available to it to prepare an appropriate authenticator, and sends that authenticator, along with the ticket that only S can decrypt, back to S. This is necessary because S has forgotten everything except the key it used to encrypt “send this back to me” copies of the tickets it has sent out in the last few hours. S uses this key to decrypt these tickets when they are sent back to it, to confirm that they were originally from S and go with the authenticators sent back with them. So rather than remember each ticket or a hash of this ticket, S remembers only the key Ks it uses to encrypt these tickets.

[8] Additional forms of insufficient initial conditions we encountered in modeling other protocols include beliefs relating to “freshness” (e.g., recent timestamps), recognition of key message elements (e.g., principal names), trust, and properties of keys (e.g., that a principal’s public key is believed to be what it is).

This is not expressible in the GNY logic, which assumes that principals remember everything for the length of a protocol run; every principal has perfect memory of the messages it has sent or received. For a potential attacker, this is a good, conservative assumption, but for legitimate protocol principals, PK Kerberos shows that it might not be true.

In total, it took us about 3 days of tool use, spread over a couple of weeks, to resolve all the problems in our model of PK Kerberos. Once the model was finished, the conversion to ISL and production of all the proofs took less than 5 minutes on a Sun SPARCstation 20.

In the course of our analysis, we proved that by the end of PK Kerberos the keys are securely in place with the parties authenticated to each other, but this requires that the client be trusted to create a sufficiently strong symmetric session key.
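The stateless-ticket idea described above (S keeps only Ks, and recognizes returned tickets rather than remembering them) can be sketched as follows. PK Kerberos encrypts tickets under Ks with des; to stay within Python's standard library, the sketch substitutes a keyed MAC under Ks, which demonstrates only the "was this issued by S?" check, not confidentiality. All key material and field values are invented.

```python
import hashlib, hmac, json

# S keeps only Ks. Every ticket S hands out carries a tag under Ks, so
# when a ticket comes back S can confirm it originally issued it --
# without storing the ticket or a hash of it.
Ks = b"server-only-key"

def issue_ticket(client: str, kcs: str, ts: str) -> bytes:
    body = json.dumps({"client": client, "Kcs": kcs, "Ts": ts}).encode()
    tag = hmac.new(Ks, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + tag

def accept_ticket(ticket: bytes) -> dict:
    body, _, tag = ticket.rpartition(b".")
    expected = hmac.new(Ks, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ticket was not issued by S")
    return json.loads(body)

ticket = issue_ticket("C", "Kcs-123", "Ts2")   # stage 4: S issues, then forgets
assert accept_ticket(ticket)["client"] == "C"  # stage 5: S recognizes it

# A ticket altered in transit (or forged without Ks) is rejected.
tampered = ticket.replace(b'"C"', b'"M"')
try:
    accept_ticket(tampered)
    assert False, "tampered ticket must be rejected"
except ValueError:
    pass
```

This is the memory trade-off the text highlights: S's state is constant (one key) regardless of how many tickets are outstanding, which is precisely the behavior the original GNY logic's perfect-memory assumption could not express.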
We concluded that the protocol contains elements that, while appropriate for NetBill, might be unnecessary or insufficient for use in other contexts. For example, in some environments, encryption keys should only be generated by a high-integrity source.

5.0 RELATED WORK

Romulus [ORA94] represents an early effort to automate the analysis of authentication protocols via theorem proving. Romulus implements belief logic, in HOL, in the form of a theory of authentication, crypto_90. This implementation requires that a user create protocol models in HOL, with all initial assumptions, protocol actions, initial conditions, and goals expressed as HOL statements. The user produces proofs by applying HOL tactics, by hand, using rules defined in crypto_90. A typical verification strategy is first proving a set of simple conditions that can be