Supply Chain Vulnerability in Developing Markets: A Research Note

Michael A. Mayo, Ph.D., Department of Marketing, Kent State University, USA
Lawrence J. Marks, Ph.D., Department of Marketing, Kent State University, USA

ABSTRACT
Few research studies have systematically examined supply chain management and practices in developing markets. Consequently, firms sourcing from vendors in these markets have limited information with which to assess related risks and opportunities. The present study examines how the concept of supply chain vulnerability, which is gaining the attention of researchers in industrialized and developing markets, might be used to highlight the kinds of disruptions and sourcing challenges found in the developing world. Supply chain vulnerability is defined as unexpected variations in the quantity and/or quality of supply flows resulting from the failure of a single, direct vendor (atomistic source) or of multiple, collaborative channel partners (holistic source). A chi-square analysis was conducted to compare the types (quantitative or qualitative) and sources (atomistic or holistic) of inbound flow disruptions between developed and developing markets. Results indicated that these markets differ in terms of quantitative rather than qualitative disruptions. Drawing upon previous research, several explanations are offered to account for the results. Recommendations include the need to expand the concept of supply chain vulnerability to include a number of macro-level variables to better anticipate disruptions.

Keywords: Supply chain vulnerability, supply chain management, developing markets, supply chain disturbances

INTRODUCTION
As firms seek the major performance gains promised by integrated global sourcing strategies (Sparks and Wagner 2003), it is increasingly important both to anticipate the risks and to assess the opportunities posed by suppliers from developing markets (Coates 2003).
Developing market channels have been reported to be long, financially constrained, manufacturer dominated and fragmented (Tesfom, Lutz and Ghauri 2004). By and large, however, the study of sourcing and channel management in developing countries is limited (Mehta, Larsen, Rosenbloom and Ganitsky 2006) and often anecdotal. The present study examines the concept of supply chain vulnerability, which has successfully identified the sources and types of disturbances found in supply chains in developed economies (Svensson 2000; 2002), to determine whether it is applicable in the developing world and whether it might provide a more systematic approach to guide research efforts in this area. To begin, the literature on supply chain vulnerability (SCV) is reviewed, followed by a discussion of the results from a study designed to identify the sources and types of disturbances prevalent in developing markets.

LITERATURE REVIEW

Supply Chain Vulnerability
Supply chain vulnerability (SCV) is defined "as the existence of random disturbances that lead to deviations in the supply chain of components from normal, expected or planned schedules or events, all of which cause negative effects or consequences for the involved manufacturer and its sub-contractors" (Svensson 2000). Other researchers have since noted that drivers of SCV include the degree of time and relationship dependencies between firms (Mattsson 1999) as well as the resilience of supply chains to absorb or mitigate the impact of a disturbance (Peck 2006).
Additionally, Peck (2005) proposed a multi-level framework to describe how the risks posed by SCV may result from disruptions at the micro (value stream or product process between firms) and macro (informational and logistics infrastructures, inter-organizational networks and the general environment) levels.

In addition to this conceptual work, Svensson (2000) developed a framework from in-depth field interviews to document the sources and types of inbound supply chain disturbances. This research suggests two sources of disturbances (atomistic: a supply disruption between two firms; holistic: a disruption resulting from a lack of coordination among multiple supply chain members) and two types of disturbances (qualitative: a lack of product/service reliability, quality or precision; quantitative: sources of deviation that lead to stock-outs or back-orders). Statistical analysis of the pattern of disturbances reported by Svensson (2000) indicated that atomistic and quantitative disturbances were most frequently reported (see Table 1).

Table 1: A conceptual framework for the analysis of vulnerability in supply chains and survey results (Svensson 2000)

SCV and Developing Markets
Anecdotal reports of inbound supply chain disturbances from developing markets suggest that this framework may be applicable and provide insights into these markets as well. For example, Polat and Arditi (2005) found that managers in developing economies often employ a Just-In-Case (JIC) approach to purchasing, keeping extra inventory to offset supply chain uncertainties (and prevent quantitative disturbances). Humphrey and Schmitz (1998) report that when foreign buyers enter a developing market they often pit local firms competitively against each other, damaging social bonds and trust within the channel (raising the possibility of holistic disturbances).
Still, it is unknown how well the SCV framework captures supply chain disturbances in developing markets, or whether the pattern of disturbances is comparable to that found in more developed economies. Knowing this would help firms to develop better sourcing plans, given the lack of experience most supply managers have with developing markets (Pedersen 2005).

METHODOLOGY
The SCV framework was used to classify supply disruptions in developing-country channels as depicted in case studies developed by the International Trade Centre (ITC), a United Nations trade development agency, to adapt its standard purchasing and supply chain management (PSCM) curriculum (modular learning system, MLS) to local business conditions.¹ 59 cases from a wide range of developing countries were used in this study (China 3; India 13; Indonesia 5; Malaysia 6; Nepal 7; Philippines 6; Sri Lanka 11; Thailand 8). Cases are approximately 1½ to 3 pages in length and were written and published within the past 5 years.

Data Analysis
Since the MLS is a comprehensive PSCM training program, it also includes general administrative and management issues (e.g., budgeting). Consequently, not all of the cases depicted supply disruptions; those that did not (27 of 59 cases, or 45.7%) were eliminated from the analysis. Each of the remaining cases was analyzed and assigned to one of the four SCV cells. The sources and types of inbound supply disturbances reported in the remaining cases (n = 32) are reported in Table 2.

Table 2: Sources and Categories of Supply Disturbances from Developing Markets

¹ The authors are grateful to the ITC for permission to use its case materials in this publication.

RESULTS
To determine whether the sources and types of inbound supply disturbances varied by market type (developing versus developed), independent percentage chi-square tests (Agresti 1996) were conducted (see Table 3; in each cell, percentages for developed countries are reported first, followed by those for developing countries in parentheses).
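The cell-by-cell comparisons reported below rely on two-proportion z-tests. The following is a minimal sketch of such a test; the counts are hypothetical illustrations (the developed-market sample size is not reported in this excerpt), not the paper's data.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test comparing x1/n1 against x2/n2,
    using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 55 of 100 developed-market cases versus
# 10 of 32 developing-market cases in one disturbance cell.
z, p = two_proportion_z(55, 100, 10, 32)
print(round(z, 3), round(p, 4))
```

With a real contingency table, each of the four SCV cells would be compared in this way, one z-test per cell.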
Results indicated that differences between markets were found for quantitative as opposed to qualitative disturbances. Specifically, more atomistic-quantitative disturbances were evidenced in developed than in developing markets (55.42% versus 31.25%; z = 2.754, p < .006), while fewer holistic-quantitative disturbances were reported (3.21% versus 31.25%; z = 3.390, p < .0007). No differences between developed and developing markets were found for qualitative disturbances (atomistic: 39.36% versus 31.25%; z = .926, p > .355; holistic: 2.01% versus 6.25%; z = .971, p > .332).

Table 3: Comparison of Sources and Categories of Supply Disturbances for Developed (and Developing) Markets

DISCUSSION
The results indicate that different sources of quantitative disruptions are found in developed as opposed to developing markets. Inasmuch as the SCV framework is a descriptive rather than a causal model, it does not explain why such differences might exist. Developed markets may inherently have more atomistic-quantitative disturbances due to the sheer number of components sourced from multiple vendors to produce the more technical and complex products found there than are generally found in developing markets. Indeed, the disturbances reported in the SCV framework for developed markets were collected from the transportation industry (Svensson 2000), while those from developing markets often involved more commodity-like sourced items (e.g., concrete telephone poles, rollers for granaries, ...) employing fewer, less complex supply chains that may in fact be more reliable.

When quantitative disruptions do occur in developing markets, they tend to be holistic rather than atomistic, in contrast to what is found in developed markets. The literature provides two plausible reasons for this. First, foreign buyers often impose higher product standards and delivery terms which no one firm can meet alone.
In response, more collaborative or holistic supply networks are forged by channel members in developing markets to meet these demands (Humphrey and Schmitz 1998). So when disruptions do occur, they are more likely to be holistic, as reported in the present study. Second, as developing countries make infrastructure upgrades (e.g., runway or port expansions) to attract foreign trade, they often discount initiatives to make supply chains more efficient by improving network components (Dobberstein, Neumann and Zils 2005). Sawhney and Sumukadas (2005), for example, report the negative impact on the flow of goods to and from developing countries due to inefficient customs clearance activities. Thus, while access to developing markets may improve with "hard" infrastructure upgrades (helping to prevent atomistic-quantitative disruptions), neglecting network challenges may lead to ongoing holistic challenges in the supply chain.

The frequency of qualitative disturbances, both holistic and atomistic, was similar for developing and developed markets. This may be due in part to the fact that globalization has reduced the disparity between production management issues in developed and developing countries, as noted by Onwubolu, Haupt, De Clercq and Visser (1999).

Limitations, Implications and Research Recommendations
Due to the small number of case reports from developing markets, the reliability of the results is of concern. Hence, the study should be considered exploratory. Its contribution is that it introduces a recognized concept and framework from an important and emerging area of the supply chain literature, SCV, to help guide future research on developing markets in this area. This is of practical importance as more and more firms source from developing markets in order to lower costs and increase customer value and satisfaction (Sparks and Wagner 2003).
Future research may want to consider SCV as a critical risk factor that the firm needs to include in its business continuity planning (Peck 2006). Finally, it is recommended that future research expand on this exploratory work by increasing the number of supply chain relationships analyzed and by adopting a multi-level framework to help analyze the various micro and macro causes of SCV. Doing so may help firms to better anticipate supply chain disruptions and assist policy makers in developing infrastructure and inter-organizational networks to mitigate SCV (Peck 2005).

REFERENCES
Agresti, A. (1996), An Introduction to Categorical Data Analysis, New York: Wiley.
Coates, Douglas J. (2003), "Sourcing in Emerging Markets," World Trade, Jul, Vol. 16 Issue 7, p40.
Dobberstein, Nikolai, Carl-Stefan Neumann and Markus Zils (2005), "Logistics in Emerging Markets," McKinsey Quarterly, Issue 1, p15-18.
Humphrey, John and Hubert Schmitz (1998), "Trust and Inter-firm Relations in Developing and Transition Economies," Journal of Development Studies, Apr, Vol. 34 Issue 4, p32-61.
Mattsson, S. A. (1999), "Effektivisering av Materialflöden i Supply Chains," Acta Wexionensia, Samhällsvetenskap, Nr 2, Växjö Universitet.
Mehta, Rajiv, Trina Larsen, Bert Rosenbloom and Joseph Ganitsky (2006), "The Impact of Cultural Differences in U.S. Business-to-Business Export Marketing Channel Strategic Alliances," Industrial Marketing Management, Feb, Vol. 35 Issue 2, p156-165.
Onwubolu, Godfrey C., Wilhelm Haupt, Gerhard De Clercq and Jan Visser (1999), "Production Management Issues in Developing Nations," Production Planning and Control, Mar, Vol. 10 Issue 2, p110-117.
Peck, Helen (2005), "Drivers of Supply Chain Vulnerability: An Integrated Framework," International Journal of Physical Distribution and Logistics Management, Vol. 35 Issue 4, p210-232.
Peck, Helen (2006), "Reconciling Supply Chain Vulnerability, Risk and Supply Chain Management," International Journal of Logistics: Research and Applications, Jun, Vol. 9 Issue 2, p127-142.
Pedersen, Arvid (2005), "Staffing an International Procurement Office," Purchasing, April 7 (Metals Edition), Vol. 134 Issue 6, p60.
Polat, Gul and David Arditi (2005), "The JIT Materials Management System in Developing Countries," Construction Management and Economics, Sep, Vol. 23 Issue 7, p697-712.
Sawhney, Rajeev and Narendar Sumukadas (2005), "Coping with Customs Clearance Uncertainties in Global Sourcing," International Journal of Physical Distribution and Logistics Management, Vol. 35 Issue 4, p278-295.
Sparks, Leigh and Beverly A. Wagner (2003), "Retail exchanges: a research agenda," Supply Chain Management, Vol. 8 Issue 1, p17-25.
Svensson, Goran (2000), "A Conceptual Framework for the Analysis of Vulnerability in Supply Chains," International Journal of Physical Distribution and Logistics Management, Vol. 30 Issue 9, p731-749.
Svensson, Goran (2002), "Dynamic Vulnerability in Companies' Inbound and Outbound Logistics Flows," International Journal of Logistics: Research and Applications, Apr, Vol. 5 Issue 1, p13-43.
Tesfom, Goitom, Clemens Lutz and Pervez Ghauri (2004), "Comparing Export Marketing Channels: Developed Versus Developing Countries," International Marketing Review, Vol. 21 Issue 4/5, p409-422.
Visual Analysis: Definition

Visual Analysis is the process of deriving information and insight from observing and interpreting visualized data. It can be applied in many fields, including art, design, business, science and the social sciences. This article discusses the definition, applications and importance of Visual Analysis.

Visual Analysis reveals patterns and trends in data through the observation and interpretation of charts, images and other visualization tools. By presenting data visually, it allows people to understand and analyze information more intuitively. Compared with traditional data analysis methods, Visual Analysis is more intuitive, easier to understand, and can convey richer information.
Visual Analysis can be applied across many fields. In art and design, artists and designers can use Visual Analysis to understand and explain their works: by observing and analyzing elements such as color, shape and texture, they can convey particular emotions and meanings. In business, Visual Analysis can help companies understand market trends, consumer behavior and competitors' strategies. By visually analyzing information such as sales data, market share and consumer surveys, companies can better understand market demand, craft marketing strategies and make more informed decisions.
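As a minimal illustration of this idea, with made-up monthly sales figures, even a crude text-based chart can make a trend visible that is harder to spot in a raw list of numbers:

```python
# Toy monthly sales figures (hypothetical data, in units of $1,000)
sales = {"Jan": 12, "Feb": 15, "Mar": 14, "Apr": 21, "May": 26, "Jun": 31}

def ascii_bar_chart(data, width=30):
    """Render a simple horizontal bar chart as a list of text lines,
    scaling the longest bar to `width` characters."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(value / peak * width)
        lines.append(f"{label} {bar} {value}")
    return lines

for line in ascii_bar_chart(sales):
    print(line)
```

In practice the same role is played by a charting library; the point is only that the growth from April onward is immediately apparent from the lengths of the bars.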
In science, Visual Analysis can help scientists understand complex data and models. In astronomy, for example, scientists can use visualization tools to observe and analyze the motion patterns of galaxies, as well as phenomena such as dark matter and black holes. Such visual analysis can help scientists discover new regularities and insights, advancing scientific progress. In the social sciences, Visual Analysis can help researchers understand social phenomena and human behavior. Researchers can use visualization tools to analyze survey data, demographic data, social media data and the like, revealing social trends, patterns of human behavior and social relationships. These insights can help researchers better understand social problems and propose solutions.
The importance of Visual Analysis should not be overlooked. By visualizing data, people can better understand and analyze complex information. It can help them discover patterns and trends hidden in the data, providing insight and decision support.
Generalized Network Design Problems

Corinne Feremans¹,², Martine Labbé¹, Gilbert Laporte³

March 2002

¹ Institut de Statistique et de Recherche Opérationnelle, Service d'Optimisation, CP 210/01, Université Libre de Bruxelles, boulevard du Triomphe, B-1050 Bruxelles, Belgium, e-mail: mlabbe@smg.ulb.ac.be
² Universiteit Maastricht, Faculty of Economics and Business Administration, Department of Quantitative Economics, P.O. Box 616, 6200 MD Maastricht, The Netherlands, e-mail: C.Feremans@KE.unimaas.nl
³ Canada Research Chair in Distribution Management, École des Hautes Études Commerciales, 3000, chemin de la Côte-Sainte-Catherine, Montréal, Canada H3T 2A7, e-mail: gilbert@crt.umontreal.ca

Abstract
Network design problems consist of identifying an optimal subgraph of a graph, subject to side constraints. In generalized network design problems, the vertex set is partitioned into clusters and the feasibility conditions are expressed in terms of the clusters. Several applications of generalized network design problems arise in the fields of telecommunications, transportation and biology. The aim of this review article is to formally define generalized network design problems, to study their properties and to provide some applications.

1 Introduction

Several classical combinatorial optimization problems can be cast as Network Design Problems (NDP). Broadly speaking, an NDP consists of identifying an optimal subgraph F of an undirected graph G subject to feasibility conditions.
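One classical NDP discussed below, the minimum spanning tree problem, is polynomially solvable by greedy methods. A minimal sketch of Kruskal's algorithm on a hypothetical five-vertex instance (the instance and names are illustrative only, not taken from this paper):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: repeatedly pick the cheapest edge that does
    not close a cycle. edges is a list of (cost, i, j) tuples with
    vertices labeled 0..n-1."""
    parent = list(range(n))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree, total = [], 0
    for cost, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri != rj:                  # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
            total += cost
    return tree, total

# Hypothetical instance: 5 vertices, 6 weighted edges
edges = [(4, 0, 1), (2, 0, 2), (5, 1, 2), (7, 1, 3), (1, 2, 3), (3, 3, 4)]
tree, total = kruskal_mst(5, edges)
print(total)
```

The generalized variants surveyed in this paper constrain which vertices the chosen subgraph may contain; most of them, unlike the plain MSTP, are NP-hard.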
Well known NDPs are the Minimum Spanning Tree Problem (MSTP), the Traveling Salesman Problem (TSP) and the Shortest Path Problem (SPP). We are interested here in Generalized NDPs, i.e., in problems where the vertex set of G is partitioned into clusters and the feasibility conditions are expressed in terms of the clusters. For example, one may wish to determine a minimum length tree spanning all the clusters, a Hamiltonian cycle through all the clusters, etc.

Generalized NDPs are important combinatorial optimization problems in their own right, not all of which have received the same degree of attention from operational researchers. In order to solve them, it is useful to understand their structure and to exploit the relationships that link them. These problems also underlie several important application areas, namely in the fields of telecommunications, transportation and biology. Our aim is to formally define generalized NDPs, to study their properties and to provide examples of their applications. We will first define a unified notational framework for these problems. This will be followed by complexity results and by the study of seven generalized NDPs.

2 Definitions and notations

An undirected graph G = (V, E) consists of a finite non-empty vertex set V = {1, ..., n} and an edge set E ⊆ {{i, j} : i, j ∈ V}. Costs c_i and c_ij are assigned to vertices and edges respectively. Unless otherwise specified, c_i = 0 for i ∈ V and c_ij ≥ 0 for {i, j} ∈ E. We denote by E(S) = {{i, j} ∈ E : i, j ∈ S} the subset of edges having their two end vertices in S ⊆ V. A subgraph F of G is denoted by F = (V_F, E_F), V_F ⊆ V, E_F ⊆ E(V_F), and its cost c(F) is the sum of its vertex and edge costs.

It is convenient to define an NDP as a problem P associated with a subset of terminal vertices T ⊆ V. A feasible solution to P is a subgraph F = (V_F, E_F), where T ⊆ V_F, satisfying some side constraints. If T = V, then the NDP is spanning; if T ⊂ V, it is non-spanning. Let G(T) = (T, E(T)) and denote by F_P(T) the subset of feasible solutions to the spanning problem P defined on the graph G(T). Let S ⊆ V be such that S ∩ T = ∅, and denote by F_P(T, S) the set of feasible solutions of the non-spanning problem P on the graph G(S ∪ T) that span T, and possibly some vertices from S. In this framework, feasible NDP solutions correspond to a subset of edges satisfying some constraints.

Natural spanning NDPs are the following.

1. The Minimum Spanning Tree Problem (MSTP) (see e.g., Magnanti and Wolsey [45]). The MSTP is to determine a minimum cost tree on G that includes all the vertices of V. This problem is polynomially solvable.

2. The Traveling Salesman Problem (TSP) (see e.g., Lawler, Lenstra, Rinnooy Kan and Shmoys [42]). The TSP consists of finding a minimum cost cycle that passes through each vertex exactly once. This problem is NP-hard.

3. The Minimum Perfect Matching Problem (MPMP) (see e.g., Cook, Cunningham, Pulleyblank and Schrijver [8]). A matching M ⊆ E is a subset of edges such that each vertex of M is adjacent to at most one edge of M. A perfect matching is a matching that contains all the vertices of G. The problem consists of finding a perfect matching of minimum cost. This problem is polynomial.

4. The Minimum 2-Edge-Connected Spanning Network (M2ECN) (see e.g., Grötschel, Monma and Stoer [26] and Mahjoub [46]). The M2ECN consists of finding a subgraph with minimal total cost for which there exist two edge-disjoint paths between every pair of vertices.

5. The Minimum Clique Problem (MCP). The MCP consists of determining a minimum total cost clique spanning all the vertices. This problem is trivial since the whole graph corresponds to an optimal solution.

We also consider the following two non-spanning NDPs.

1. The Steiner Tree Problem (STP) (see Winter [61] for an overview). The STP is to determine a tree on G that spans a set T of terminal vertices at minimum cost. A Steiner tree may contain vertices other than those of T. These vertices are called the Steiner vertices. This problem is NP-hard.

2. The Shortest Path Problem (SPP) (see e.g., Ahuja, Magnanti and Orlin [1]). Given an origin o and a destination d, o, d ∈ V, the SPP consists of determining a path of minimum cost from o to d. This problem is polynomially solvable. It can be seen as a particular case of the STP where T = {o, d}.

In generalized NDPs, V is partitioned into clusters V_k, k ∈ K. We now formally define spanning and non-spanning generalized NDPs.

Definition 1 ("Exactly" generalization of a spanning problem). Let G = (V, E) be a graph partitioned into clusters V_k, k ∈ K. The "exactly" generalization of a spanning NDP P on G consists of identifying a subgraph F = (V_F, E_F) of G yielding

min{ c(F) : |V_F ∩ V_k| = 1, F ∈ F_P( ∪_{k∈K} (V_F ∩ V_k) ) }.

In other words, F must contain exactly one vertex per cluster. Two different generalizations are considered for non-spanning NDPs.

Definition 2 ("Exactly" generalizations of a non-spanning problem). Let G = (V, E) be a graph partitioned into clusters V_k, k ∈ K, and let {K_T, K_S} be a partition of K. The "exactly" T-generalization of a non-spanning NDP P on G consists of identifying a subgraph F = (V_F, E_F) of G yielding

min{ c(F) : |V_F ∩ V_k| = 1, k ∈ K_T, F ∈ F_P( ∪_{k∈K_T} (V_F ∩ V_k), ∪_{k∈K_S} V_k ) }.

The "exactly" S-generalization of a non-spanning NDP P on G consists of identifying a subgraph F = (V_F, E_F) of G yielding

min{ c(F) : |V_F ∩ V_k| = 1, k ∈ K_S, F ∈ F_P( ∪_{k∈K_T} V_k, ∪_{k∈K_S} (V_F ∩ V_k) ) }.

In other words, in the "exactly" T-generalization, F must contain exactly one vertex per cluster V_k with k ∈ K_T, and possibly other vertices in ∪_{k∈K_S} V_k. In the "exactly" S-generalization, F must contain exactly one vertex per cluster V_k with k ∈ K_S, and all vertices of ∪_{k∈K_T} V_k.

We can replace |V_F ∩ V_k| = 1 in the above definitions by |V_F ∩ V_k| ≥ 1 or |V_F ∩ V_k| ≤ 1, leading to the "at least" version or the "at most" version of the generalization. The "exactly", "at least" and "at most" versions of a generalized NDP P are denoted by E-P, L-P and M-P, respectively. In the "at most" and in the "exactly" versions, intra-cluster edges are neglected. In this case, we call the graph G |K|-partite complete. In the "at least" version the intra-cluster edges are taken into account.

3 Complexity results

We provide in Tables 1 and 2 the complexity of the generalized versions in their three respective forms ("exactly", "at least" and "at most") for the seven NDPs considered. Some of these combinations lead to trivial problems. Obviously, if a classical NDP is NP-hard, its generalization is also NP-hard. The indication "∅ is opt" means that the empty set is feasible and optimal for the corresponding problem. References about complexity results for the classical versions of the seven problems considered can be found in Garey and Johnson [20].

As can be seen from Table 2, two cases of the generalized SPP are NP-hard by reduction from the Hamiltonian Path Problem (see Garey and Johnson [20]). Li, Tsao and Ulular [43] show that the "at most" S-generalization is polynomial if the shrunk graph is series-parallel but provide no complexity result for the general case. A shrunk graph G_S = (V_S, E_S) derived from a graph G partitioned into clusters is defined as follows: V_S contains one vertex for each cluster of G, and there exists an edge in E_S whenever an edge between the two corresponding clusters exists in G. An undirected graph is series-parallel if it is not contractible to K_4, the complete graph on four vertices. A graph G is contractible to another graph H if H can be obtained from G by deleting and contracting edges.
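The shrunk graph just defined can be computed in one pass over the edges of G. The sketch below uses a hypothetical clustered instance; it also records, for each pair of clusters, the cost of the cheapest edge between them, which is the natural edge cost to carry over to G_S.

```python
def shrunk_graph(clusters, edges):
    """Build the shrunk graph G_S of a clustered graph G.
    clusters: dict vertex -> cluster id; edges: dict (i, j) -> cost.
    Each pair of clusters joined by at least one edge of G becomes a
    single edge of G_S, labeled with the cheapest cost over that pair."""
    shrunk = {}
    for (i, j), cost in edges.items():
        ki, kj = clusters[i], clusters[j]
        if ki == kj:
            continue                  # intra-cluster edges are dropped
        key = frozenset((ki, kj))
        shrunk[key] = min(cost, shrunk.get(key, float("inf")))
    return shrunk

# Hypothetical instance: vertices 0..4 grouped into three clusters
clusters = {0: "A", 1: "A", 2: "B", 3: "B", 4: "C"}
edges = {(0, 2): 5, (1, 2): 3, (1, 3): 9, (3, 4): 2, (0, 1): 1}
print(shrunk_graph(clusters, edges))
```

Here clusters A and B end up joined by one shrunk edge of cost 3, and B and C by one of cost 2; the intra-cluster edge (0, 1) disappears.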
Contracting an edge means that its two end vertices are shrunk and the edge is deleted. We now provide a short literature review and applications for each of the seven generalized NDPs considered.

Table 1: Complexity of classical and generalized spanning NDPs

Problem   | MSTP          | TSP     | MPMP       | M2ECN   | MCP
Classical | Polynomial    | NP-hard | Polynomial | NP-hard | Trivial, polynomial
Exactly   | NP-hard [47]  | NP-hard | Polynomial | NP-hard | NP-hard (with vertex cost) [35]
At least  | NP-hard [31]  | NP-hard | Polynomial | NP-hard | Equivalent to exactly
At most   | ∅ is opt      | ∅ is opt | ∅ is opt  | ∅ is opt | ∅ is opt

Table 2: Complexity of classical and generalized non-spanning NDPs

Problem                  | STP      | SPP
Classical                | NP-hard  | Polynomial
Exactly T-generalization | NP-hard  | Polynomial
Exactly S-generalization | NP-hard  | NP-hard
At least T-generalization | NP-hard | Polynomial
At least S-generalization | NP-hard | NP-hard
At most T-generalization | ∅ is opt | ∅ is opt
At most S-generalization | NP-hard  | Polynomial if shrunk graph is series-parallel [43]

4 The generalized minimum spanning tree problem

The Generalized Minimum Spanning Tree Problem (E-GMSTP) is the problem of finding a minimum cost tree including exactly one vertex from each vertex set of the partition (see Figure 1a for a feasible E-GMSTP solution). This problem was introduced by Myung, Lee and Tcha [47]. Several formulations are available for the E-GMSTP (see Feremans, Labbé and Laporte [17]).

The Generalized Minimum Spanning Tree Problem in its "at least" version (L-GMSTP) is the problem of finding a minimum cost tree including at least one vertex from each vertex set of the partition (see Figure 1b for a feasible L-GMSTP solution). This problem was introduced by Ihler, Reich and Widmayer [31] as a particular case of the Generalized Steiner Tree Problem (see Section 9) under the name "Class Tree Problem". Dror, Haouari and Chaouachi [11] show that if the family of clusters covers V without being pairwise disjoint, then the L-GMSTP defined on this family can be transformed into the original L-GMSTP on a graph G′ obtained by substituting each vertex v ∈ ∩_{ℓ∈L} V_ℓ, L ⊆ K, by |L| copies v_ℓ ∈ V_ℓ, ℓ ∈ L, and adding edges of weight zero between each pair of these new vertices (a clique of weight zero between the v_ℓ for ℓ ∈ L). This can be done as long as there is no fixed cost on the vertices, and this transformation does not hold for the "exactly" version of the problem.

Applications modeled by the E-GMSTP are encountered in telecommunications, where metropolitan and regional networks must be interconnected by a tree containing a gateway from each network. For this internetworking, a vertex has to be chosen in each local network as a hub and the hub vertices must be connected via transmission links such as optical fiber (see Myung, Lee and Tcha [47]).

Figure 1: Feasible GMSTP solutions (Figure 1a: E-GMSTP; Figure 1b: L-GMSTP)

The L-GMSTP has been used to model and solve an important irrigation network design problem arising in desert environments, where a set of |K| polygon-shaped parcels share a common source of water. Each parcel is represented by a cluster made up of the polygon vertices. Another cluster corresponds to the water source vertex. The problem consists of designing a minimal length irrigation network connecting at least one vertex from each parcel to the water source. This irrigation problem can be modeled as an L-GMSTP as follows. Edges correspond to the boundary lines of the parcels. The aim is to construct a minimal cost tree such that each parcel has at least one irrigation source (see Dror, Haouari and Chaouachi [11]).

Myung, Lee and Tcha [47] show that the E-GMSTP is strongly NP-hard, using a reduction from the Node Cover Problem (see Garey and Johnson [20]).
These authors also provide four integer linear programming formulations. A branch-and-bound method is developed and tested on instances involving up to 100 vertices. For instances containing between 120 and 200 vertices, the method is stopped before the first branching. The lower bounding procedure is a heuristic method which approximates the linear relaxation associated with the dual of a multicommodity flow formulation for the E-GMSTP. A heuristic algorithm finds a primal feasible solution for the E-GMSTP using the lower bound. The branching strategy performed in this method is described in Noon and Bean [48]. A cluster is first selected and branching is performed on each vertex of this cluster.

In Faigle, Kern, Pop and Still [14], another mixed integer formulation for the E-GMSTP is given. The linear relaxation of this formulation is computed for a set of 12 instances containing up to 120 vertices. This seems to yield an optimal E-GMSTP solution for all but one instance. The authors also use the subpacking formulation from Myung, Lee and Tcha [47] in which the integrality constraints are kept and the subtour constraints are added dynamically. Three instances containing up to 75 vertices are tested.

A branch-and-cut algorithm for the same problem is described in Feremans [15]. Several families of valid inequalities for the E-GMSTP are introduced and some of these are proved to be facet defining. Computational results show that instances involving up to 200 vertices can be solved to optimality using this method. A comparison with the computational results obtained in Myung, Lee and Tcha [47] shows that the gap between the lower bound and the upper bound obtained before branching is reduced by 10% to 20%.

Pop, Kern and Still [51] provide a polynomial approximation algorithm for the E-GMSTP. Its worst-case ratio is bounded by 2ρ if the cluster size is bounded by ρ. This algorithm is derived from the method described in Magnanti and Wolsey [45] for the Vertex Weighted Steiner Tree Problem (see Section 9).

Ihler, Reich and Widmayer [31] show that the decision version of the L-GMSTP is NP-complete even if G is a tree. They also prove that no constant worst-case ratio polynomial-time algorithm for the L-GMSTP exists unless P = NP, even if G is a tree on V with edge lengths 1 and 0. They also develop two polynomial-time heuristics, tested on instances with up to 250 vertices. Finally, Dror, Haouari and Chaouachi [11] provide three integer linear programming formulations for the L-GMSTP, two of which are not valid (see Feremans, Labbé and Laporte [16]). The authors also describe five heuristics including a genetic algorithm. These heuristics are tested on 20 instances with up to 500 vertices. The genetic algorithm performs better than the other four heuristics. An exact method is described in Feremans [15] and compared to the genetic algorithm of Dror, Haouari and Chaouachi [11]. These results show that the genetic algorithm is time consuming compared to the exact approach of Feremans [15]. Moreover, the gap between the upper bound obtained by the genetic algorithm and the optimum value increases as the size of the problem becomes larger.

5 The generalized traveling salesman problem

The Generalized Traveling Salesman Problem, denoted by E-GTSP, consists of finding a least cost cycle passing through each cluster exactly once. The symmetric E-GTSP was introduced by Henry-Labordere [28], Saskena [56] and Srivastava, Kumar, Garg and Sen [60], who proposed dynamic programming formulations. The first integer linear programming formulation is due to Laporte and Nobert [40] and was later enhanced by Fischetti, Salazar and Toth [18], who introduced a number of facet defining valid inequalities for both the E-GTSP and the L-GTSP. In Fischetti, Salazar and Toth [19], a branch-and-cut algorithm is developed, based on the polyhedral results developed in Fischetti, Salazar and Toth [18]. This method is tested on instances whose edge costs satisfy the triangle inequality (for which the E-GTSP and L-GTSP are equivalent). Moreover, heuristics producing feasible E-GTSP solutions are provided.

Noon [50] has proposed several heuristics for the GTSP. The most sophisticated heuristic published to date is due to Renaud and Boctor [53]. It is a generalization of the heuristic proposed in Renaud, Boctor and Laporte [54] for the classical TSP. Snyder and Daskin [59] have developed a genetic algorithm which is compared to the branch-and-cut algorithm of Fischetti, Salazar and Toth [19] and to the heuristics of Noon [50] and of Renaud and Boctor [53]. This genetic algorithm is slightly slower than the other heuristics, but competitive with the CPU times obtained in Fischetti, Salazar and Toth [19] on small instances, and noticeably faster on the larger instances (containing up to 442 vertices).

Approximation algorithms for the GTSP with cost functions satisfying the triangle inequality are described in Slavík [58] and in Garg, Konjevod and Ravi [21]. A non-polynomial-time approximation heuristic derived from the Christofides heuristic for the TSP [7] is presented in Dror and Haouari [10]; it has a worst-case ratio of 2.

Transformations of GTSP instances into TSP instances are studied in Dimitrijević and Saric [9], Laporte and Semet [41], Lien, Ma and Wah [44], and Noon and Bean [49]. According to Laporte and Semet [41], they do not provide any significant advantage over a direct approach since the TSP resulting from the transformation is highly degenerate.

The GTSP arises in several application contexts, several of which are described in Laporte, Asef-Vaziri and Sriskandarajah [38]. These are encountered in post box location (Labbé and Laporte [36]) and in the design of postal delivery routes (Laporte, Chapleau, Landry and Mercure [39]). In the first problem the aim is to select a post box location in each zone of a territory in order to achieve a compromise between user convenience and mail collection costs. In the second application, collection routes must be designed through several post boxes at known locations. Asef-Vaziri, Laporte and Sriskandarajah [3] study the problem of optimally designing a loop-shaped system for material transportation in a factory. The factory is partitioned into |K| rectilinear zones and the loop must be adjacent to at least one side of each zone, which can be formulated as a GTSP. The GTSP can also be used to model a simple case of the stochastic vehicle routing problem with recourse (Dror, Laporte and Louveaux [12]) and some families of arc routing problems (Laporte [37]). In the latter application, a symmetric arc routing problem is transformed into an equivalent vertex routing problem by replacing edges by vertices. Since the distance from edge e1 to edge e2 depends on the traversal direction, each edge is represented by two vertices, only one of which is used in the solution. This gives rise to a GTSP.

6 The generalized minimum perfect matching problem

The E-GMPMP and L-GMPMP are polynomial. Indeed, the E-GMPMP remains a classical MPMP on the shrunk graph, where c_kℓ := min{c_ij : i ∈ V_k, j ∈ V_ℓ} for {k, ℓ} ∈ E_S. Moreover, the L-GMPMP can be reduced to the E-GMPMP.

7 The generalized minimum 2-edge-connected network problem

The Generalized Minimum Cost 2-Edge-Connected Network Problem (E-G2ECN) consists of finding a minimum cost 2-edge-connected subgraph that contains exactly one vertex from each cluster (Figure 2).

Figure 2: A feasible E-G2ECN solution

This problem arises in the context of telecommunications when copper wire is replaced with high capacity optic fiber. Because of its high capacity, this new technology allows for tree-like networks. However, such a network becomes failure-sensitive: if one edge breaks, the whole network is disconnected. To avoid this situation, the network has to be reliable and must fulfill survivability conditions. Since two failures are not likely to occur simultaneously, it seems reasonable to ask for a 2-connected network.

This problem is a generalization of the GMSTP. Local networks have to be interconnected by a global network; in every local network, possible locations for a gate (a location where the global network and local networks can be interconnected) of the global
network are given.This global network has to be connected, survivable and of minimum cost.The E-G2ECNP and the L-G2ECNP are studied in Huygens[29].Even when the edge costs satisfy the triangle inequality,the E-G2ECNP and the L-G2ECNP are not equivalent.These problems are N P-hard.There cannot exist a polynomial-time heuristic with bounded worst-case ratio for E-G2ECNP.In Huy-gens[29],new families of facet-defining inequalities for the polytope associated with L-G2ECNP are provided and heuristic methods are described.8The generalized minimum clique problemIn the Generalized Minimum Clique Problem(GMCP)non-negative costs are associated with vertices and edges and the graph is|K|-partite complete.The GMCP consists offinding a subset of vertices containing exactly one vertex from each cluster such that the cost of the induced subgraph(the cost of the selected vertices plus the cost of the edges in the induced subgraph)is minimized(see Figure3).Figure3:A feasible GMSCP solutionThe GMCP appears in the formulation of particular Frequency Assignment Problems(FAP)(see Koster[34]).Assume that“...we have to assign a frequency to each transceiver in a mobile telephone network,a vertex corresponds to a transceiver.The domain of a vertex is the set of frequencies that can be assigned to that transceiver.An edge indicates that communication from one transceiver may interfere with communication from the other transceiver.The penalty of an11edge reflects the priority with which the interference should be avoided,whereas the penalty of a vertex can be seen as the level of preference for the frequen-cies.”(Koster,Van Hoesel and Kolen[35]).The GMCP can also be used to model the conformations occurring in pro-teins(see Althaus,Kohlbacher,Lenhof and M¨u ller[2]).These conformations can be adequately described by a rather small set of so-called rotamers for each amino-acid.The problem of the prediction of protein complex from the structures of its single components can then be reduced to 
the search of the set of rotamers, one for each side chain of the protein,with minimum energy.This problem is called the Global Minimum Energy Conformation(GMEC).The GMEC can be formulated as follows.Each residue side chain of the protein can take a number of possible rotameric states.To each side chain is associated a cluster.The vertices of this cluster represent the possible rotameric states for this chain.The weight on the vertices is the energy associated with the chain in this rotameric state. The weight on the edges is the energy coming from the combination of rotameric states for different side chains.The GMCP is N P-hard(Koster,Van Hoesel and Kolen[35]).Results of polyhedral study for the GCP were embedded in a cutting plane approach by these authors to solve difficult instances of frequency assignment problems. The structure of the graph in the frequency assignment application is exploited using tree decomposition approach.This method gives good lower bounds for difficult instances.Local search algorithms to solve FAP are also investigated. Two techniques are presented in Althaus,Kohlbacher,Lenhof and M¨u ller[2]to solve the GMEC:a“multi-greedy”heuristic and a branch-and-cut algorithm. Both methods are able to predict the correct complex structure on the instances tested.9The generalized Steiner tree problemThe standard generalization of the STP is the T-Generalized Steiner Tree Prob-lem in its“at least”version(L-GSTP).Let T⊆V be partitioned into clusters. 
The L-GSTP consists offinding a minimum cost tree of G containing at least one vertex from each cluster.This problem is also known as the Group Steiner Tree Problem or the Class Steiner Tree Problem.Figure4depicts a feasible L-GSTP solution.The L-GSTP is a generalization of the L-GMSTP since the L-GSTP defined on a family of clusters describing a partition of V is a L-GMSTP.This problem was introduced by Reich and Widmayer[52].The L-GSTP arises in wire-routing with multi-port terminals in physical Very Large Scale Integration(VLSI)design.The traditional model assuming sin-12Figure4:A feasible L-GSTP solutiongle ports for each of the terminals to be connected in a net of minimum length is a case of the classical STP.When the terminal is a collection of different pos-sible ports,so that the net can be connected to any one of them,we have an L-GSTP:each terminal is a collection of ports and we seek a minimum length net containing at least one port from each terminal group.The multiple port locations for a single terminal may also model different choices of placing a single port by rotating or mirroring the module containing the port in the placement (see Garg,Konjevod and Ravi[21]).More detailed applications of the L-GSTP in VLSI design can be found in Reich and Widmayer[52].The L-GSTP is N P-hard because it is a generalization of an N P-hard problem.When there are no Steiner vertices,the L-GSTP remains N P-hard even if G is a tree(see Section4).This is a major difference from the classical STP(if we assume that either there is no Steiner vertices or that G is a tree,the complexity of STP becomes polynomial).Ihler,Reich and Widmayer[31]show that the graph G can be transformed(in linear time)into a graph G′(without clusters)such that an optimal Steiner tree on G′can be transformed back into an optimal generalized Steiner tree in G.Therefore,any algorithm for the STP yields an algorithm for the L-GSTP.Even if there exist several contributions on polyhedral aspects(see 
among others Goemans[24],Goemans and Myung[23],Chopra and Rao[5],[6])and exact methods(see for instance Koch and Martin[33])for the classical problem, only a few are known,as far as we are aware,for the L-GSTP.Polyhedral aspects are studied in Salazar[55]and a lower bounding procedure is described in Gillard and Yang[22].13A number of heuristics for the L-GSTP have been proposed.Early heuris-tics for the L-GSTP are developed in Ihler[30]with an approximation ratio of |K|−1.Two polynomial-time heuristics are tested on instances up to250vertices in Ihler,Reich and Widmayer[31],while a randomized algorithm with polylog-arithmic approximation guarantee is provided in Garg,Konjevod,Ravi[21].A series of polynomial-time heuristics are described in Helvig,Robins,Zelikovsky [27]with worst-case ratio of O(|K|ǫ)forǫ>0.These are proved to empirically outperform one of the heuristic developed in Ihler,Reich and Widmayer[31].In the Vertex Weighted Steiner Tree Problem(VSTP)introduced by Segev [57],weights are associated with the vertices in V.These weights can be negative, in which case they represent profit gained by selecting the vertex.The problem consists offinding a minimum cost Steiner tree(the sum of the weights of the selected vertices plus the sum of the weights of the selected edges).This problem is a special case of the Directed Steiner Tree Problem(DSP)(see Segev[57]). 
Given a directed graph G=(V,A)with arc weights,afixed vertex and a subset T⊆V,the DSP requires the identification of a minimum weighted directed tree rooted at thefixed vertex and spanning T.The VSTP has been extensively studied(see Duin and Volgenant[13],Gorres[25],Goemans and Myung[23], Klein and Ravi[32]).As far as we know,no Generalized Vertex Weighted Steiner Tree Problem has been addressed.An even more general problem would be the Vertex Weighted Directed Steiner Tree Problem.10The generalized shortest path problemLi,Tsao and Ulular[43]describe an S-generalization of the SPP in its“at most”version(M-GSPP).Let o and d be two vertices of G and assume that V\{o,d}is partitioned into clusters.The M-GSPP consists of determining a shortest path from o to d that contains at most one vertex from each cluster.Note that the T-generalization is of no interest since it reduces to computing the shortest paths between all the pairs of vertices belonging to the two different clusters.In the problem considered by Li,Tsao and Ulular[43],each vertex is as-signed a non-negative weight.The problem consists offinding a minimum cost path from o to d such that the total vertex weight on the path in each traversed cluster does not exceed a non-negative integerℓ(see Figure5).This problem with ℓ=1and vertex weights equal to one for each vertex coincides with the M-GSPP.The problem arises in optimizing the layout of private networks embedded in a larger telecommunication network.A vertex in V\{o,d}represents a digital cross connect center(DCS)that treats the information and insures the transmis-sion.A cluster corresponds to a collection of DCS located at the same location14。
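Several of the problems surveyed above (E-GTSP, E-G2ECN, GMCP) share the same combinatorial core: choose exactly one vertex per cluster. As a toy illustration of that structure, the following sketch solves a small, hypothetical GMCP instance (all data invented) by brute force. Enumeration is exponential in |K|, so this is viable only for a handful of clusters; it is meant to make the objective (selected-vertex cost plus induced-edge cost) concrete, not to compete with the exact and heuristic methods cited above.

```python
from itertools import product

# Hypothetical tiny GMCP instance: three clusters over a complete
# |K|-partite graph, with non-negative vertex and edge costs.
clusters = [["a1", "a2"], ["b1", "b2"], ["c1"]]
vertex_cost = {"a1": 1, "a2": 3, "b1": 2, "b2": 1, "c1": 4}
edge_cost = {
    frozenset(("a1", "b1")): 5, frozenset(("a1", "b2")): 1,
    frozenset(("a2", "b1")): 2, frozenset(("a2", "b2")): 2,
    frozenset(("a1", "c1")): 1, frozenset(("a2", "c1")): 3,
    frozenset(("b1", "c1")): 2, frozenset(("b2", "c1")): 1,
}

def gmcp_brute_force(clusters, vertex_cost, edge_cost):
    """Enumerate one vertex per cluster; return the (cost, selection)
    minimizing selected-vertex cost plus induced-edge cost."""
    best = (float("inf"), None)
    for pick in product(*clusters):
        cost = sum(vertex_cost[v] for v in pick)
        cost += sum(edge_cost[frozenset((u, v))]
                    for i, u in enumerate(pick) for v in pick[i + 1:])
        best = min(best, (cost, pick))
    return best

cost, pick = gmcp_brute_force(clusters, vertex_cost, edge_cost)
print(cost, pick)  # 9 ('a1', 'b2', 'c1')
```

The same enumeration skeleton adapts to the other "one vertex per cluster" problems by swapping the objective evaluated inside the loop.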
Changjun High School (Changsha, Hunan), Class of 2024, Grade 12 Third Mock Examination: English

I. Listening comprehension (multiple choice)

1. Which film does Mary want to see?
A. Ordinary Angels. B. Bob Marley: One Love. C. Kung Fu Panda 4.
2. Where does the conversation probably take place?
A. In an apartment. B. In a restaurant. C. In a shop.
3. Who is the woman probably talking to?
A. Her friend. B. A travel agent. C. A hotel receptionist.
4. What is the weather like now?
A. Cloudy. B. Sunny. C. Rainy.
5. What happens to Sarah?
A. She eats too much. B. She has a toothache. C. She needs an operation.

Listen to the following longer conversation and answer the questions below.

6. What does the woman plan to do next?
A. Drive home. B. Pick Jack up. C. See her husband.
7. What is Jack doing?
A. Watching TV. B. Practicing football. C. Walking with Tim.

Listen to the following longer conversation and answer the questions below.

8. Why does Alice want to meet David?
A. To seek advice. B. To borrow some books. C. To invite him to a game.
9. How does Ethan sound in the end?
A. Humble. B. Proud. C. Satisfied.

Listen to the following longer conversation and answer the questions below.
The Wikipedia Retrieval Formula (latest edition)

Contents
1. Background and overview of Wikipedia
2. The method behind the retrieval formula
3. Advantages and applications of the formula
4. Summary

I. Background and overview

Wikipedia is a free, open online encyclopedia built on wiki technology that collects knowledge and information from around the world. Because its content is rich, free, and easy to access, Wikipedia has become one of the largest knowledge bases in the world and is widely used in academic research, everyday study, and information lookup.

II. The method behind the retrieval formula

In Wikipedia, users retrieve relevant entries and information by entering keywords. This process can be viewed as the evaluation of a formula, which can be written simply as:

retrieval result = relevance × readability × credibility

Here, relevance is the degree of association between a retrieved result and the keyword; readability is how easy the result's text is to understand; and credibility is the reliability and authority of the result. This formula helps users obtain the information they need more accurately.

III. Advantages and applications

The strength of the formula is that it weighs relevance, readability, and credibility together, improving the quality of retrieval results. This approach effectively helps users filter the content that best fits their needs out of a large volume of information. In practice, it helps users obtain needed knowledge quickly and accurately in academic research, everyday study, and information lookup. For example, a student who wants to learn about "quantum mechanics" can enter that keyword, and the system will use the formula to surface the related entries and information for reference.

IV. Summary

As a free, open online encyclopedia, Wikipedia's wealth of knowledge and information has made it one of the world's largest knowledge bases. By jointly considering relevance, readability, and credibility, the retrieval formula provides users with more accurate, higher-quality retrieval results.
2025 Simulation Set 02: 2025 National College Entrance Examination (English)

Notes:
1. Before answering, write your name and examination number in the designated places on the answer sheet and this paper.
2. For multiple-choice questions, after choosing an answer, use a pencil to blacken the corresponding box on the answer sheet. To change an answer, erase it cleanly and then mark another. For non-multiple-choice questions, write the answers on the answer sheet; answers written on this paper are invalid.
3. After the examination, hand in this paper together with the answer sheet.

Listening audio file: 英语听力 高三模拟 第2025-02套.mp4

Part I: Listening (two sections, 30 points)

Mark your answers on this paper first; when the recording ends, you will have two minutes to transfer them to the answer sheet.

Section A (5 questions; 1.5 points each, 7.5 points in total)
Listen to the following 5 conversations. Each is followed by one question; choose the best answer from the three options A, B, and C. After each conversation you have 10 seconds to answer the question and read the next one. Each conversation is played only once.

Example: How much is the shirt?
A. £19.15. B. £9.18. C. £9.15.
The answer is C.

1. Where does the conversation probably take place?
A. In a supermarket. B. In the post office. C. In the street.
2. What did Carl do?
A. He designed a medal. B. He fixed a TV set. C. He took a test.
3. What does the man do?
A. He's a tailor. B. He's a waiter. C. He's a shop assistant.
4. When will the flight arrive?
A. At 18:20. B. At 18:35. C. At 18:50.
5. How can the man improve his article?
A. By deleting unnecessary words. B. By adding a couple of points. C. By correcting grammar mistakes.

Section B (15 questions; 1.5 points each, 22.5 points in total)
Listen to the following 5 conversations or monologues.
Package 'robvis'

October 14, 2022

Title Visualize the Results of Risk-of-Bias (ROB) Assessments
Version 0.3.0
Description Helps users in quickly visualizing risk-of-bias assessments performed as part of a systematic review. It allows users to create weighted bar-plots of the distribution of risk-of-bias judgments within each bias domain, in addition to traffic-light plots of the specific domain-level judgments for each study. The resulting figures are of publication quality and are formatted according to the risk-of-bias assessment tool used to perform the assessments. Currently, the supported tools are ROB2.0 (for randomized controlled trials; Sterne et al (2019) <doi:10.1136/bmj.l4898>), ROBINS-I (for non-randomised studies of interventions; Sterne et al (2016) <doi:10.1136/bmj.i4919>), and QUADAS-2 (for diagnostic accuracy studies; Whiting et al (2011) <doi:10.7326/0003-4819-155-8-201110180-00009>).
License MIT + file LICENSE
Encoding UTF-8
LazyData true
RoxygenNote 6.1.1
Depends R (>= 2.10)
Imports ggplot2, tidyr, scales
Suggests knitr, rmarkdown, covr, testthat
VignetteBuilder knitr, rmarkdown
BugReports https://github.com/mcguinlu/robvis
URL https://github.com/mcguinlu/robvis
NeedsCompilation no
Author Luke McGuinness [aut, cre], Emily Kothe [ctb]
Maintainer Luke McGuinness <**************************.uk>
Repository CRAN
Date/Publication 2019-11-22 15:00:02 UTC

R topics documented: data_quadas, data_rob1, data_rob2, data_robins, robvis, rob_summary, rob_tools, rob_traffic_light

data_quadas: Example QUADAS-2 assessment

Description
A data frame: Study (study details), D1-D4 (Domains 1-4), Overall (overall risk of bias), Weight (weight measure for each study).

Usage
data_quadas

Format
An object of class data.frame with 12 rows and 7 columns.

Source
Created for this package

data_rob1: Example ROB1 assessment

Description
A data frame: Study (study details), D1-D7 (Domains 1-7), Overall (overall risk of bias), Weight (weight measure for each study).

Usage
data_rob1

Format
An object of class data.frame with 9 rows and 10 columns.

Source
Created for this package

data_rob2: Example ROB2.0 assessment

Description
A data frame: Study (study details), D1-D5 (Domains 1-5), Overall (overall risk of bias), Weight (weight measure for each study).

Usage
data_rob2

Format
An object of class data.frame with 9 rows and 8 columns.

Source
Created for this package

data_robins: Example ROBINS-I assessment

Description
A data frame: Study (study details), D1-D7 (Domains 1-7), Overall (overall risk of bias), Weight (weight measure for each study).

Usage
data_robins

Format
An object of class data.frame with 12 rows and 10 columns.

Source
Created for this package

robvis: A package for producing risk-of-bias assessment figures

Description
The robvis package is designed to help users produce publication-quality risk-of-bias assessment figures.

rob_summary: Produce summary weighted barplots of risk-of-bias assessments

Description
A function to convert standard risk-of-bias output to tidy data and plot a summary barplot.

Usage
rob_summary(data, tool, overall = FALSE, weighted = TRUE, colour = "cochrane", quiet = FALSE)

Arguments
data: A dataframe containing summary (domain) level risk-of-bias assessments, with the first column containing the study details, the second column containing the first domain of your assessments, and the final column containing a weight to assign to each study. The function assumes that the data includes a column for overall risk-of-bias. For example, a ROB2.0 dataset would have 8 columns (1 for study details, 5 for domain-level judgments, 1 for overall judgements, and 1 for weights, in that order).
tool: The risk-of-bias assessment tool used. RoB2.0 (tool = 'ROB2'), ROBINS-I (tool = 'ROBINS-I'), and QUADAS-2 (tool = 'QUADAS-2') are currently supported.
overall: An option to include a bar for overall risk-of-bias in the figure. Default is FALSE.
weighted: An option to specify whether weights should be used in the barplot. Default is TRUE, in line with current Cochrane Collaboration guidance.
colour: An argument to specify the colour scheme for the plot. Default is 'cochrane', which uses the ubiquitous Cochrane colours, while a preset option for a colour-blind-friendly palette is also available (colour = 'colourblind').
quiet: An option to quietly produce the plot without displaying it.

Value
Risk-of-bias assessment barplot figure.

Examples

data <- data.frame(stringsAsFactors = FALSE,
                   Study = c("Study 1", "Study 2"),
                   D1 = c("Low", "Some concerns"),
                   D2 = c("Low", "Low"),
                   D3 = c("Low", "Low"),
                   D4 = c("Low", "Low"),
                   D5 = c("Low", "Low"),
                   Overall = c("Low", "Low"),
                   Weight = c(33.33333333, 33.33333333))
rob_summary(data, "ROB2")

rob_tools: List tools covered by rob_summary()

Description
rob_tools() will list the tools that can currently be plotted using the rob_summary() function.

Usage
rob_tools()

Examples
rob_tools()

rob_traffic_light: Produce traffic-light plots of risk-of-bias assessments

Description
A function to take a summary table of risk-of-bias assessments and produce a traffic-light plot from it.

Usage
rob_traffic_light(data, tool, colour = "cochrane", psize = 20, quiet = FALSE)

Arguments
data: A dataframe containing summary (domain) level risk-of-bias assessments, with the first column containing the study details, the second column containing the first domain of your assessments, and the final column containing a weight to assign to each study. The function assumes that the data includes a column for overall risk-of-bias. For example, a ROB2.0 dataset would have 8 columns (1 for study details, 5 for domain-level judgments, 1 for overall judgements, and 1 for weights, in that order).
tool: The risk-of-bias assessment tool used. RoB2.0 (tool = 'ROB2'), ROBINS-I (tool = 'ROBINS-I'), and QUADAS-2 (tool = 'QUADAS-2') are currently supported.
colour: An argument to specify the colour scheme for the plot. Default is 'cochrane', which uses the ubiquitous Cochrane colours, while a preset option for a colour-blind-friendly palette is also available (colour = 'colourblind').
psize: Control the size of the traffic lights. Default is 20.
quiet: An option to quietly produce the plot without displaying it.

Value
Risk-of-bias assessment traffic light plot (ggplot2 object)

Examples

data <- data.frame(stringsAsFactors = FALSE,
                   Study = c("Study 1", "Study 2"),
                   D1 = c("Low", "Some concerns"),
                   D2 = c("Low", "Low"),
                   D3 = c("Low", "Low"),
                   D4 = c("Low", "Low"),
                   D5 = c("Low", "Low"),
                   Overall = c("Low", "Low"),
                   Weight = c(33.33333333, 33.33333333))
rob_traffic_light(data, "ROB2")

Index
datasets: data_quadas, data_rob1, data_rob2, data_robins
data_quadas, data_rob1, data_rob2, data_robins, rob_summary, rob_tools, rob_traffic_light, robvis, robvis-package (robvis)
Vulnerability Analysis of the Financial Network

Abstract

This paper presents a vulnerability analysis of the financial network. With the development of network and computer technology, the financial network has become a main platform for financial business; however, its security is not guaranteed. This paper analyzes the threats to the financial network, including network security threats, application security threats, and data security threats. It also introduces means of preventing these threats, such as network isolation, authentication technologies, encryption technologies, and access control systems. Finally, recommendations are given on how to enhance the security of the financial network.

Introduction

Financial networks play an important role in economic development. They create opportunities for many financial services and operations, such as online banking, online payments, and stock trading. However, the development of the financial network also brings potential security threats, and the vulnerabilities of financial networks have become a major concern for financial institutions, governments, and customers. It is therefore important for financial institutions to understand and analyze the threats to the financial network in order to ensure its security and stability.

Network Security Threats

Network security threats target the infrastructure of the financial network and can cause a great deal of damage, from data theft to system failure. The main types are:

1. Malicious software: viruses, worms, Trojans, and spyware can cause serious damage to the data and systems in the financial network.
2. Denial of service (DoS): a DoS attack aims to exhaust the resources of the targeted system, making it unavailable for legitimate use. DoS attacks are especially dangerous for financial networks, as they can disrupt essential services.
3. Network sniffers: tools that intercept and record data passing through the network. Attackers can use them to steal financial data or disrupt systems.

Application Security Threats

Application security threats target the applications used in the financial network and can cause serious damage, such as data loss, data corruption, and system failure. Common types include:

1. SQL injection: an attack technique that injects malicious code into web applications. The injected code can be used to reach the financial system and steal financial data.
2. Buffer overflow: occurs when an application receives more data than it can handle. The excess data can overwrite the application's memory, which can lead to the execution of malicious code or the disruption of the application.
3. Application vulnerability exploitation: an attack that exploits vulnerabilities in the applications used in the financial network to gain access to the financial system and steal financial data.

Data Security Threats

Data security threats target the data stored in the financial network and can cause serious damage, such as data loss and data breaches. Common types include:

1. Unauthorized data access: an attack that aims to access financial data without authorization, allowing the attacker to reach and steal it.
2. Data theft: stealing confidential financial data from the system for malicious purposes.
3. Data corruption: corrupting the data stored in the financial network to destroy financial data and disrupt the system.

Preventive Measures

To counter these threats, financial institutions should implement the following measures:

1. Network isolation: an architecture that separates the internal network from the external network, preventing attackers from reaching the financial system from outside.
2. Authentication technologies: biometrics and strong authentication can verify the identity of users and prevent unauthorized access to the financial network.
3. Encryption technologies: cryptographic algorithms and digital signatures can protect the data stored in the financial network.
4. Access control systems: firewalls and intrusion detection systems can restrict access to the financial network and detect security breaches.

Conclusion

The security of the financial network is vitally important, and understanding and analyzing the threats to it is the first step toward securing it. By implementing the preventive measures above, financial institutions can protect themselves from these threats and enhance the security of the financial network.
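The SQL injection threat and its standard mitigation can be made concrete. Below is a minimal sketch, assuming a hypothetical in-memory SQLite "accounts" table invented for illustration, of how parameterized queries keep attacker-supplied input from being interpreted as SQL:

```python
import sqlite3

# Hypothetical "accounts" table, created only for this demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

def get_balance(conn, username):
    # The username is passed as a bound parameter, never concatenated
    # into the SQL string, so input like "' OR '1'='1" stays inert data.
    row = conn.execute(
        "SELECT balance FROM accounts WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

print(get_balance(conn, "alice"))          # 100.0
print(get_balance(conn, "' OR '1'='1"))    # None -- the injection attempt fails
```

Had the query been built by string concatenation, the second call would have matched every row; with a bound parameter it matches none, which is exactly the behavior an application should want.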
ASP Campus Network System Vulnerability Analysis

V3.0 vulnerability analysis for the "Star" campus network system. This article is original; please credit the source when reprinting.

The package under review is marketed as a complete "special school site" program: the site, forum and blog all look solid at first glance. The version offered for download on the official site differs from the deployed source, and only a paid trial of the ASP package is available. The program relies on cookies in many of its integrated modules, which greatly weakens its security. The forum is a stock BBS (v7.x), so the well-known upload vulnerability does not apply here; the blog appears to be custom-written.

1. SQL injection through the cookie

The vulnerability lies in GuestDel.asp, the file the message board uses to let users delete their own messages. It is awkward to exploit. The code:

<!--#include file="conn.asp"-->
<!--#include file="connuser.asp"-->
<!--#include file="config.asp"-->
<!--#include file="char.inc"-->
<!--#include file="chkurl.asp"-->
<%
' char.inc defines validation functions, but none of them is called here;
' chkurl.asp only blocks raw (e.g., netcat) submissions.
If Request.Cookies(ForcastSN)("key") <> "" And Request.Cookies(ForcastSN)("key") <> "super" Then
    aaas = 0
    Set urs = Server.CreateObject("ADODB.Recordset")
    sql = "select * from " & db_User_Table & " where " & db_User_Name & "='" & _
          Request.Cookies(ForcastSN)("username") & "'"
    urs.Open sql, ConnUser, 1, 3
    If urs.BOF Or urs.EOF Then
        aaas = 0            ' Level 1: the injected SQL statement is evaluated here
    End If
    If Request.Cookies(ForcastSN)("passwd") <> urs(db_User_Password) Then
        aaas = 0            ' Level 2: password check
    End If
    If urs(db_User_bookgl) = "0" Then
        aaas = 1            ' The message belongs to this user, so deletion is allowed
    End If
    urs.Close
    Set urs = Nothing
End If
If Request.Cookies(ForcastSN)("key") = "super" Or aaas = 1 Then
    reviewid = CInt(Request("reviewid"))
    Conn.Execute "delete from " & db_Review_Table & " where reviewid=" & reviewid
    Conn.Close
    Set Conn = Nothing
    Response.Redirect "GuestBook.asp"   ' Result 1: message deleted successfully
Else
    ShowErr "You have no right to delete this message!<br><br><a href='javascript:history.back()'>return</a>"
    Response.End                        ' Result 2: "You have no right to delete this message!"
End If
%>

To exploit this, first register a user (the whole site, from the front page through the forum to the blog, shares a single account ID). Log in with a cookie-editing tool such as MyBrowser, post a message, then locate the USERNAME value in the cookie and replace it with USERNAME plus an injected clause. Now delete your own message: if the injected condition is true, the cookie passes the level-1 user lookup, the password check succeeds against your own account, and result 1 (successful deletion) confirms the statement was true. If the condition is false, the level-1 lookup fails and result 2 ("You have no right to delete this message!") is returned. This true/false oracle allows the database contents to be guessed out, though the process is slow.

2. SQL injection in password retrieval

In the first step of password retrieval (getpwd2.asp), the value submitted through the login box is filtered laxly, so SQL can be injected from the login box. The code:

<%
Dim rs, sql
Dim username
username = checkstr(Request.Form("username"))
If username = "" Then
%>
<script language="JavaScript">
alert("Please don't try illegal operations!");
location.href = "javascript:history.back()";
</script>
<%
Response.End
End If
Set rs = Server.CreateObject("ADODB.Recordset")
sql = "select " & db_User_Name & "," & db_User_Question & " from " & db_User_Table & _
      " where " & db_User_Name & "='" & username & "'"
rs.Open sql, ConnUser, 1, 1
If rs.EOF Then
%>
<script language="JavaScript">
alert("This user hasn't been registered yet. Please register at the front page!");
location.href = "javascript:history.back()";
</script>
<%
Response.End
End If
%>

The submitted username goes into the SQL statement without any filtering. The char.inc file mentioned above defines a custom function that validates submissions and filters single quotes, but unfortunately it is not called here, even though char.inc is included in this file (along with conn.asp, connuser.asp, config.asp and chkuser.asp). Since the injection point is a login box, the usual injection tools can be pointed straight at it.

3. Cookie spoofing in the voting module to achieve SQL injection

Again, cookies are fragile: they are data stored on the client, so on an attacker's own machine they can be changed at will. The cause of the vulnerability is clear from the code, which the source excerpt truncates mid-statement:

<!--#include file="ChkManage.asp"-->
<%
If Not (Request.Cookies(ForcastSN)("ManageKEY") = "super" _
        Or Request.Cookies(ForcastSN)("ManageKEY") = "bigmaster" _
        Or Request.Cookies(ForcastSN)("ManageKEY") = "check" Or Request.
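The common thread in the three holes above is authorization derived from client-controlled cookie values. A minimal Python sketch (function and variable names invented for illustration) contrasts that pattern with authorization looked up from server-side session state:

```python
# Sketch of the flaw described above: trusting a client-controlled cookie
# value for authorization (as GuestDel.asp does with "key") versus deriving
# the role from server-side session state. All names are hypothetical.

def can_delete_insecure(cookies):
    # Mirrors the ASP logic: any client that sets key=super passes.
    return cookies.get("key") == "super"

SERVER_SESSIONS = {"sess-1": {"user": "alice", "role": "member"}}

def can_delete_secure(cookies):
    # Only an opaque session id comes from the client; the role is
    # looked up server-side and cannot be forged by editing cookies.
    session = SERVER_SESSIONS.get(cookies.get("session_id"), {})
    return session.get("role") == "admin"

forged = {"key": "super", "session_id": "sess-1"}
print(can_delete_insecure(forged))  # True  -- cookie forgery succeeds
print(can_delete_secure(forged))    # False -- the server-side role wins
```

The fix for all three vulnerabilities is the same in spirit: treat every cookie field as attacker input, bind usernames into SQL as parameters rather than string fragments, and keep privilege flags out of the client entirely.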
An Analysis of CVSS Version 2 Vulnerability Scoring 11Official contribution of the National Institute of Standards and Technology; not subject to copyright in the United States.Karen ScarfoneNational Institute of Standards andTechnology (NIST) karen.scarfone@Peter MellNational Institute of Standards andTechnology (NIST) mell@AbstractThe Common Vulnerability Scoring System (CVSS) is a specification for measuring the relative severity of software vulnerabilities. Finalized in 2007, CVSS version 2 was designed to address deficiencies found during analysis and use of the original CVSS version. This paper analyzes how effectively CVSS version 2 addresses these deficiencies and what new deficiencies it may have. This analysis is based primarily on an experiment that applied both version 1 and version 2 scoring to a large set of recent vulnerabilities. Theoretical characteristics of version 1 and version 2 scores were also examined. The results show that the goals for the changes were met, but that some changes had a negligible effect on scoring while complicating the scoring process. The changes also had unintended effects on organizations that prioritize vulnerability remediation based primarily on CVSS scores.1. IntroductionThe Common Vulnerability Scoring System (CVSS) is a specification for documenting the major characteristics of vulnerabilities and measuring the potential impact of vulnerability exploitation [3]. The motivation for developing CVSS was to provide standardized information for organizations to use to prioritize vulnerability mitigation. CVSS is developed and maintained by the CVSS Special Interest Group (CVSS-SIG) working under the auspices of the Forum for Incident Response and Security Teams (FIRST). CVSS has been widely adopted by the information technology community. CVSS is mandated for use in evaluating the security of payment card systems worldwide [9]. The U.S. 
Federal government uses it for its National Vulnerability Database [7] and mandates its use by products in the Security Content Automation Protocol (SCAP) validation program [6]. CVSS has also been adopted by dozens of software vendors and service providers [1].

There are many proprietary schemes for scoring software flaw vulnerabilities, most created by software vendors, but CVSS is the only known open specification. CVSS is also distinguished from other scoring systems in that CVSS was designed to be quantitative so that analysts would not have to perform qualitative evaluations of vulnerability severity. Significant effort has been put into developing the specification for CVSS so that any two vulnerability analysts should produce identical CVSS scores for the same vulnerability. In addition, CVSS is designed to provide visibility into how a score was calculated. Each CVSS score is provided with a CVSS vector. This vector includes metrics that categorize several characteristics of a vulnerability. The vector provides details on the nature of the vulnerability that help CVSS users understand why a vulnerability received a particular score. These two attributes of CVSS, quantitative analysis and transparency through vectors, lend the specification to research and analysis. Large publicly available CVSS data sets from the National Vulnerability Database [7] further enable this research.

The initial CVSS specification was developed by the National Infrastructure Advisory Council [5] and published in October 2004. As [10] explains, the original specification did not undergo widespread peer review, and adopters raised several concerns about it. The CVSS-SIG worked from April 2005 to June 2007 on identifying problems with version 1 and determining how best to solve them, which ultimately led to the release of version 2. The goal for this paper is to determine how effectively version 2 (v2) has addressed the version 1 (v1) problems.
The analysis is based on a review of the v2 specification and the results of an experiment scoring 11,012 recent vulnerabilities using both v1 and v2. Section 2 provides background on CVSS, and Section 3 discusses the v1 problems and the methodology used to create v2. Section 4 provides an overview of the analysis process, and Sections 5 through 8 present the results of the analysis in four categories: base scores, subscores, vulnerability characteristics, and severity rankings. Section 9 provides conclusions for the work.

2. Background

CVSS uses three groups of metrics to calculate vulnerability scores. Base metrics are vulnerability attributes that are constant over time and across all implementations and environments. Temporal metrics are vulnerability attributes that change over time but which apply to all instances of a vulnerability in all environments (e.g., the public availability of exploit code or a remediation technique). A temporal score for a vulnerability is calculated with an equation that uses both the base score and temporal metric values as parameters. Environmental metrics are vulnerability attributes that are organization- and implementation-specific, such as how prevalent a target is within an organization. An environmental score is calculated with an equation that uses both the temporal score and the environmental metric values as parameters.

The focus of our research is base metrics. An equation is applied to their values to calculate a vulnerability's base score. There are six base metrics in CVSS v2. The first three metrics relate to exploitability. AccessVector measures the range of exploitation (e.g., can it be launched over the network or only locally). Authentication measures the level to which an attacker must authenticate to the target before exploiting the vulnerability. AccessComplexity measures how difficult it is to exploit the vulnerability once the target is accessed.
These three metrics, which collectively measure how readily an attacker can attempt to exploit a vulnerability, comprise an exploitability subvector from which an exploitability subscore can be calculated.

In addition to the three exploitability metrics, v2 also has three base metrics related to impact. ConfImpact measures the level to which vulnerability exploitation can impact the target's confidentiality, and IntegImpact and AvailImpact capture the same information for integrity and availability, respectively. The impact metrics collectively measure the extent to which an attacker can compromise a computer's security by exploiting a particular vulnerability. The three impact metrics form the impact subvector, from which an impact subscore can be determined.

Table 1 lists the possible values for each metric in CVSS v1 and v2, along with the abbreviations (in parentheses) for each metric and metric value. CVSS v1 had three additional base metrics, called impact bias metrics, that set the relative importance of the three impact metrics (ConfImpact, IntegImpact, and AvailImpact). The impact bias metrics were converted from base metrics to environmental metrics in v2.

The equation for calculating the base score in v1 is round_to_1_decimal(10 * AccessVector * AccessComplexity * Authentication * ((ConfImpact * ConfImpactBias) + (IntegImpact * IntegImpactBias) + (AvailImpact * AvailImpactBias))). The base score ranges between 0.0 and 10.0. To calculate the v2 base score, the three exploitability metrics are combined into an exploitability subscore using the equation (20 * AccessVector * AccessComplexity * Authentication). The three impact metrics are combined into an impact subscore using the equation (10.41 * (1 - (1 - ConfImpact) * (1 - IntegImpact) * (1 - AvailImpact))).
The base score is calculated from the subscores using the following equation: round_to_1_decimal(((0.6 * Impact) + (0.4 * Exploitability) - 1.5) * f(Impact)), where f(Impact) = 0 if Impact = 0, and 1.176 otherwise.

Table 1. Possible values for base metrics

  AccessVector (AV), v1:     Remotely (R): 1.0; Requires local authentication or physical access (L): 0.7
  AccessVector (AV), v2:     Network (N): 1.0; Adjacent network (A): 0.646; Requires local access (L): 0.395
  AccessComplexity (AC), v1: Low (L): 1.0; High (H): 0.8
  AccessComplexity (AC), v2: Low (L): 0.71; Medium (M): 0.61; High (H): 0.35
  Authentication (Au), v1:   Not required (NR): 1.0; Required (R): 0.6
  Authentication (Au), v2:   Not required (N): 0.704; Single instance (S): 0.56; Multiple instances (M): 0.45
  ConfImpact (C), IntegImpact (I), AvailImpact (A), v1: Complete (C): 1.0; Partial (P): 0.7; None (N): 0.0
  ConfImpact (C), IntegImpact (I), AvailImpact (A), v2: Complete (C): 0.660; Partial (P): 0.275; None (N): 0.0

3. CVSS v2 design methodology

The designers of CVSS v1 postulated an equation that appeared reasonable and then assigned metric values for the equation elements using trial and error. This resulted in a useful scoring specification, but adopters of v1 noted several deficiencies [2, 10]. These included base scores that did not properly reflect the true severity of vulnerabilities, and less diversity in scores than expected (i.e., too many vulnerabilities having the same score). To address this, the goals for CVSS v2 were to correct the errant scores, which would also produce a higher average score, and to improve score accuracy and diversity by making the metrics more granular and adjusting the metric values and equations. (For CVSS, "accuracy" refers to relative accuracy. CVSS scores are intended to provide a relative comparison of vulnerability severity, not exact measurements.)
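The v1 and v2 base-score equations can be expressed directly in code. The sketch below is ours, not the CVSS-SIG's reference implementation; it assumes Python's default rounding for round_to_1_decimal and, for v1, the "Normal" impact bias (a weight of 0.333 per impact metric), since the bias metrics are disregarded later in this paper. Metric values come from Table 1.

```python
def cvss1_base(av, ac, au, c, i, a, cb=0.333, ib=0.333, ab=0.333):
    """CVSS v1 base score; cb/ib/ab are the impact bias weights
    (0.333 each under the 'Normal' bias assumed here)."""
    return round(10 * av * ac * au * (c * cb + i * ib + a * ab), 1)

def cvss2_base(av, ac, au, c, i, a):
    """CVSS v2 base score, built from the exploitability and impact subscores."""
    exploitability = 20 * av * ac * au
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N with partial impact to C, I, and A, using Table 1 values
print(cvss2_base(1.0, 0.71, 0.704, 0.275, 0.275, 0.275))  # 7.5
print(cvss1_base(1.0, 1.0, 1.0, 0.7, 0.7, 0.7))           # 7.0
```

Note the f(Impact) factor: a vector with no impact at all scores 0.0 regardless of how exploitable it is, which is consistent with the paper's later observation that all-None impact vectors are excluded from the theoretical distributions.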
Another goal was to ensure that analysts would produce consistent and accurate v2 scores, which meant that CVSS should not be made more complicated than necessary.

CVSS v2 was designed using a more rigorous process than v1, which did not undergo extensive peer review. The first step in designing v2 was to evaluate the v1 metrics and propose changes that would enable users to better characterize the security-relevant aspects of a vulnerability. The impact bias metrics were removed from the base metrics (and moved to the environmental metrics), and more granularity was added to the AccessVector, AccessComplexity, and Authentication metrics.

Once the v2 metrics were defined, opinions were gathered from the CVSS-SIG members and their organizations on what score each type of vulnerability should have. There were six v2 metrics, each of which had three possible values, resulting in 729 possible vulnerability types. It was not possible to rank, much less score, 729 vulnerability types in a justifiable manner. The problem was simplified by placing the base metrics into two groups, impact and exploitability, and generating approximated subscores for each group. Each group contained three metrics with three possible values, so only 27 vulnerability types per group had to be ranked and scored. The CVSS-SIG members reached consensus on the approximated rankings and scorings, resulting in the creation of lookup tables for exploitability and impact. To create a CVSS score from these two subscores, the researchers performed a weighted average of exploitability and impact, with exploitability having a weight of 0.4 and impact having a weight of 0.6. These weights were a simplified version of the weights effectively employed by v1, 0.428 and 0.572 [10].

At this point the CVSS-SIG could have used the lookup tables for v2 scoring. However, the CVSS community desired an equation instead of lookup tables. Therefore, mathematicians proposed equations that approximated the lookup tables.
The resulting equations underwent beta testing, and a number of small scoring inconsistencies were encountered. To address these, modifications were made to particular metric input values, and then further beta testing was performed to ensure that the scores were as expected. The final change was to the equation itself; experts felt that the entire scoring distribution had been shifted too high. To lessen this, the researchers subtracted 1.5 from the base score and then multiplied the result by 1.176. This caused the desired shift downwards while maintaining the score range of 0.0 to 10.0 and keeping the scores for the different types of vulnerabilities in the same order. This concluded the design of v2, and it was finalized in June 2007.

CVSS v2 has already been determined to meet the accuracy goals of the CVSS-SIG, because the CVSS-SIG extensively examined many test cases when designing v2 to confirm score accuracy. There was also a small experiment conducted [2] during v2's development that involved calculating scores for 1,156 vulnerabilities using both v1 and a pre-final version of v2. That experiment focused on analyzing the average of scores and score diversity, and it found improvements in both from v1. This paper has goals similar to those of the [2] experiment, but it uses the final version of v2, examines a much larger data set, and performs a far more detailed and thorough analysis of the experimental data.

4. Analysis overview

We performed a theoretical analysis of the CVSS v2 base score equation and metrics. We generated theoretical scoring distributions for v1 and v2 by considering all the possible sets of metric values and calculating the corresponding scores and the frequency of each score. First, we counted the number of possible combinations of metric values: for v1 there are 864, and for v2 there are 729.
However, vulnerabilities with all impact metrics set to None are not possible in practice because each vulnerability must have some impact, so we subtracted those and had final counts of 832 for v1 and 702 for v2. Next, we calculated the score for each combination and counted the frequencies of each of the 101 possible score values (0.0-10.0). We then used this as the basis for analyzing the theoretical scoring distribution. We also generated theoretical scoring distributions for the impact and exploitability subscores using a similar process.We also performed an experimental analysis of CVSS scores. In the experiment, we calculated v2 base scores for 11,012 vulnerabilities listed in the Common Vulnerabilities and Exposures (CVE) dictionary [4]. This encompassed all valid CVE entries published between June 20, 2007 and April 30, 2009. The scoringwas performed by the National Vulnerability Database (NVD) [7] in accordance with the v2 specification [3].For the experiment, we mapped the v2 metrics assigned to each CVE entry back to their v1 equivalents. Three base metrics had no changes in options, so no mapping was needed. Three other base metrics had more granular options in v2 that could be mapped to broader v1 options. The mappings are shown in Table 6. The impact bias metrics from v1 were dropped from the base metrics for v2, and the [2] study indicated that in v1 they affected scoring less than 1% of the time, so we chose to disregard them.A small percentage of the experimental data is assumed to have scoring errors. Errors can occur from research sources, such as incorrect or incomplete information in vulnerability announcements, or analyst misinterpretation of vulnerability information. Errors can also occur by analysts misunderstanding the CVSS scoring guidelines or having differing assumptions, such as the default privileges under which a vulnerable application is typically run. 
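The theoretical-distribution counts described in Section 4 can be reproduced by brute-force enumeration. The sketch below is our own (variable names are illustrative); it uses the v2 metric values from Table 1 and the v2 base equation, and excludes the impossible all-None impact vectors as described above.

```python
from itertools import product
from collections import Counter

# CVSS v2 metric values from Table 1
AV = [1.0, 0.646, 0.395]      # Network, Adjacent network, Local
AC = [0.71, 0.61, 0.35]       # Low, Medium, High
AU = [0.704, 0.56, 0.45]      # None, Single, Multiple
IMPACT = [0.66, 0.275, 0.0]   # Complete, Partial, None (each of C, I, A)

def v2_base(av, ac, au, c, i, a):
    exploit = 20 * av * ac * au
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploit - 1.5) * f, 1)

# Enumerate all metric combinations, dropping the all-impact-None vectors
# (each vulnerability must have some impact): 729 - 27 = 702 vectors.
vectors = [v for v in product(AV, AC, AU, IMPACT, IMPACT, IMPACT)
           if v[3:] != (0.0, 0.0, 0.0)]
dist = Counter(v2_base(*v) for v in vectors)
print(len(vectors))  # 702 possible v2 vectors

# Theoretical subscore distributions, built the same way
exp_scores = {round(20 * av * ac * au, 1)
              for av, ac, au in product(AV, AC, AU)}
imp_scores = {round(10.41 * (1 - (1 - c) * (1 - i) * (1 - a)), 1)
              for c, i, a in product(IMPACT, IMPACT, IMPACT)
              if (c, i, a) != (0.0, 0.0, 0.0)}
print(len(exp_scores), len(imp_scores))  # 23 and 9 distinct subscores
```

The distinct subscore counts (23 exploitability, 9 impact) match the figures the paper reports in Section 6.1.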
One of the CVSS-SIG's goals in developing v2 was to make the scoring process clearer for analysts to improve score consistency [10]. However, the scoring process is sufficiently complex that some misinterpretations likely still occur and cause occasional scoring discrepancies. The true error rate in the experimental data cannot be readily quantified because there is no authoritative source of CVSS scores, but there are extensive quality assurance efforts in place, with the analysts checking each other's work and the researchers providing guidance whenever the analysts are unsure of the proper scoring. The analysts are knowledgeable about general security and have been specifically trained on vulnerability characteristics and CVSS scoring. Also, the scoring interface does not have default settings, so there should not be a default bias. The error rate should be sufficiently small so as not to affect the results of this experiment.

5. Base score analysis

This section describes our theoretical and experimental analysis of the v1 and v2 base scores.

5.1. Theoretical score distribution

We examined the theoretical distributions of v1 and v2 scores. For v2, the mean of the theoretical scores is 5.4, the median is 5.6, the standard deviation is 1.82, and the skew is -0.34. This is a significant change from v1, which had a mean of 3.6, a median of 3.3, a standard deviation of 1.91, and a skew of 0.81. This shift from v1's characteristics is consistent with the CVSS-SIG's goal to have higher scores, with the majority of scores being over 5.0. Figures 1 and 2 show the frequency of each possible score in the theoretical distributions for v1 and v2 scores, respectively.

Figure 1. Theoretical distribution of v1 scores
Figure 2. Theoretical distribution of v2 scores

We also mapped all the possible v2 vectors back to their v1 counterparts and looked at the score differences for each of the 702 vectors.
This is different from the v1 mean and median described above, which were based on the 832 possible vectors for v1: this is a strict one-to-one comparison of the v1 and v2 scores for all the v2 vectors. From v1 to v2, scores increased an average of 2.1, with a median change of +2.3 and a standard deviation of 1.23. Of the 702 vectors, 664 (94.6%) had higher v2 scores, 31 (4.4%) had higher v1 scores, and 7 (1.0%) had the same v1 and v2 scores. This indicates that v2 should generally produce higher scores than v1.

5.2. Experimental score distribution

We analyzed the base scores for the experimental data. Figure 3 shows how many vulnerabilities had each possible v1 score. The mean was 5.1 and the median 5.6. This was an increase of 1.5 in the mean and 2.3 in the median from the theoretical data. The standard deviation was 2.62 and the skew 0.11. Approximately 45% of the scores were below 5.0 and the other 55% above 5.0, with none at exactly 5.0.

Figure 3. Experimental v1 scores

For the v2 experimental scores, shown in Figure 4, the mean was 6.6, the median 6.8, the standard deviation 1.91, and the skew -0.05. This was an increase of 1.2 in the mean and 1.2 in the median from the theoretical data. Of the scores, approximately 25% were below 5.0, 10% were at 5.0, and 65% were above 5.0. This is consistent with the CVSS-SIG's goal to have the majority of scores above 5.0. Both the v1 and v2 results show that their experimental scores are significantly higher than their theoretical scores, with the differences being more pronounced for v1.

Figure 4. Experimental v2 scores

The means and medians indicate that actual v2 scores are significantly higher than v1 scores. To investigate this further, we compared each v2 experimental score to the score achieved by mapping the v2 vector back to v1. Of the 11,012 vulnerabilities, 10,072 (91.5%) had higher v2 scores, 743 (6.7%) had the same v1 and v2 scores, and 197 (1.8%) had higher v1 scores.
This is further confirmation that v2 has met the CVSS-SIG's goal of increasing base scores.

5.3. Score diversity

Score diversity refers to the relative variety of scores. To look at score diversity, we started by reviewing theoretical v1 and v2 scores for all the possible vectors. There are more possible vectors than scores, but this does not necessarily indicate that every score has a corresponding vector. We confirmed the finding from [2] that not all base scores can occur: in v1, 66 of the 101 scores are possible, and in v2, 75 of the 101 scores are possible. Having more scores possible in v2 than v1 helps to support greater score diversity, although it does not ensure it.

We also looked at the diversity of the experimental data. The 11,012 vulnerabilities in the data set produced 35 distinct v1 scores (53% of the 66 possible scores) and 51 distinct v2 scores (68% of the 75 possible scores), again showing the increased diversity of v2 over v1. We also looked to see how diverse the vectors in the experimental data were, and of the 702 possible v2 base vectors, only 143 (20%) were represented. The 10 most common vectors, listed in Table 2, comprised over 77% of all vulnerabilities. Table 2 also shows the v1 and v2 scores for the most common vectors.

Table 2. Most common v2 vectors in experiment

  Freq count (%)   AV/AC/Au   C/I/A   v1     v2
  2662 (24.2)      N L N      P P P    7.0    7.5
  1527 (13.9)      N M N      N P N    1.9    4.3
  999 (9.1)        N M N      P P P    5.6    6.8
  896 (8.1)        N M N      C C C    8.0    9.3
  743 (6.7)        N L N      C C C   10.0   10.0
  577 (5.2)        N L N      P N N    2.3    5.0
  443 (4.0)        N L N      N N P    2.3    5.0
  251 (2.3)        L L N      C C C    7.0    7.2
  240 (2.2)        N L N      N N C    3.3    7.8
  217 (2.0)        L M N      C C C    5.6    6.9

These results differ significantly from the theoretical score distribution, and an analysis of similar results from v1 experimental data in 2006 [2] had found this to be caused by certain types of vulnerabilities occurring much more often than others. Analysis of our experimental data, as shown in Table 2, reaches the same conclusion.
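Vectors like those in Table 2 are conventionally written as strings such as AV:N/AC:L/Au:N/C:P/I:P/A:P. The parser-and-scorer below is our own sketch (not NVD's tooling); it assumes the Table 1 metric values and Python's default rounding for round_to_1_decimal.

```python
# Metric-value lookup tables from Table 1 (CVSS v2)
V2_VALUES = {
    "AV": {"N": 1.0, "A": 0.646, "L": 0.395},
    "AC": {"L": 0.71, "M": 0.61, "H": 0.35},
    "Au": {"N": 0.704, "S": 0.56, "M": 0.45},
    "C":  {"C": 0.66, "P": 0.275, "N": 0.0},
    "I":  {"C": 0.66, "P": 0.275, "N": 0.0},
    "A":  {"C": 0.66, "P": 0.275, "N": 0.0},
}

def score_vector(vector: str) -> float:
    """Score a v2 base vector such as 'AV:N/AC:L/Au:N/C:P/I:P/A:P'."""
    m = dict(part.split(":") for part in vector.split("/"))
    exploit = (20 * V2_VALUES["AV"][m["AV"]]
                  * V2_VALUES["AC"][m["AC"]]
                  * V2_VALUES["Au"][m["Au"]])
    impact = 10.41 * (1 - (1 - V2_VALUES["C"][m["C"]])
                        * (1 - V2_VALUES["I"][m["I"]])
                        * (1 - V2_VALUES["A"][m["A"]]))
    f = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploit - 1.5) * f, 1)

# The two most common vectors from Table 2
print(score_vector("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5
print(score_vector("AV:N/AC:M/Au:N/C:N/I:P/A:N"))  # 4.3
```

Running this over the rows of Table 2 reproduces the listed v2 scores, which is a useful sanity check on both the table reconstruction and the equation.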
Also, because CVSS treats confidentiality, integrity, and availability as equally important, vectors that are identical except for which of these attributes are impacted have the same base scores. For example, vectors 6 and 7 in Table 2 are the same except that one has a partial impact to confidentiality and the other a partial impact to availability.

Table 3 presents the ten most commonly occurring scores in the v1 and v2 data. The two most frequent scores encompassed 44.5% of v1 vulnerabilities and 41.2% of v2 vulnerabilities. The ten most frequently occurring scores encompassed 90.1% of v1 vulnerabilities and 83.8% of v2 vulnerabilities. These are additional indications of the improved diversity of v2 scores over v1.

Table 3. Most common scores in experiment

  v1 score   v1 freq count (%)   v2 score   v2 freq count (%)
  7.0        2916 (26.5)          7.5       2662 (24.2)
  1.9        1979 (18.0)          4.3       1872 (17.0)
  2.3        1293 (11.7)          5.0       1153 (10.5)
  5.6        1291 (11.7)          6.8       1038 (9.4)
  8.0         948 (8.6)           9.3        896 (8.1)
  10.0        745 (6.8)          10.0        743 (6.7)
  3.3         331 (3.0)           7.8        321 (2.9)
  4.2         183 (1.7)           7.2        251 (2.3)
  4.7         163 (1.5)           6.9        217 (2.0)
  2.7         140 (1.3)           6.5        167 (1.5)

Most of the scores that appear in Table 3 also appear in Table 2, and their frequencies are similar. For example, in Table 2 the first vector has a v2 score of 7.5 and occurs 2662 times, and in Table 3 the most common v2 score is 7.5 and it also occurs 2662 times. So every instance of a 7.5 score in v2 has the same vector. This particular vector corresponds to vulnerabilities that can be exploited remotely, with low complexity and no authentication. The impact of exploiting this vector is a partial impact to confidentiality, integrity, and availability, which in most cases means that the attacker can gain user-level access. The next most common vector involves a partial impact to integrity through network access, medium attack complexity, and no authentication. This most often corresponds to cross-site scripting vulnerabilities, which have been quite prevalent in the past few years.

6. Subscore analysis

To better understand the composition of the v1 and v2 scores, we performed theoretical and experimental analysis of the v2 impact and exploitability subscores.

6.1. Theoretical score distribution

There are 27 possible exploitability vectors, which map to 23 exploitability subscores. There are 26 possible impact vectors, but they map to only 9 impact subscores. So from a theoretical viewpoint, impact subscores have much less diversity than exploitability subscores. Figure 5 shows the frequency of each possible score in the theoretical distribution for the impact vectors. Approximately 23% of the impact vectors had subscores below 5.0. The mean was 7.3 and the median 7.8, the range was 2.9 to 10.0, the standard deviation was 2.12, and the skew was -0.88, indicating that most of the impact subscores are high values. Since an impact subscore is 60% of a base score, this is likely why the theoretical base scores have higher-than-expected values.

Figure 5. Theoretical distribution of v2 impact subscores

Next, we analyzed the theoretical distribution of exploitation subscores. As shown in Figure 6, two-thirds of the exploitation vectors had subscores below 5.0. The mean for the subscores was 4.3 and the median 3.9, the range was 1.2 to 10.0, the standard deviation was 2.20, and the skew was 0.83. This indicates that the exploitation subscores are somewhat low values, although not as far from the midpoint as the impact subscores.

Figure 6. Theoretical distribution of v2 exploitation subscores

6.2. Experimental score distribution

We analyzed the subscores from the experimental data to gain a better understanding of the differences between the theoretical and experimental scores. For the impact subscores, the mean was 6.1 and the median 6.4, the range 2.9 to 10.0, the standard deviation 2.57, and the skew 0.18. For the exploitability subscores, the mean was 8.6 and the median 8.6, the range 1.5 to 10.0, the standard deviation 1.94, and the skew -1.75.
These results differed substantially from the theoretical analysis, which indicated means of 7.3 for impact subscores and 4.3 for exploitation subscores. As with the base scores, we also looked at the subscores based on an assumption of an ideal mean of 5. For the impact subscores, 35% were below 5 and 65% were above 5; for the exploitability subscores, 12% were below 5 and 88% were above 5. Figures 7 and 8 show the experimental distribution for impact and exploitation subscores, respectively.

Figure 7. Experimental distribution of v2 impact subscores
Figure 8. Experimental distribution of v2 exploitation subscores

The deviations from the theoretical distributions indicate that some subvectors are occurring more often than others. To further investigate this, we looked at the most common subvectors and their scores.

6.3. Experimental subscore diversity

We examined the diversity of the experimental subscores. The 11,012 vulnerabilities in the data set produced 21 distinct exploitability subvectors (of 27 possible). The two most common exploitability subvectors comprised 82% of the vulnerabilities, and the ten most common (shown in Table 4) comprised over 99%. The two most common vectors have high scores, 10.0 and 8.6. Since these comprise over 82% of the vulnerabilities, the prevalence of these two vectors is likely the main cause of the experimental subscores being higher than the theoretical subscores.

Table 4. Common exploitability subvectors in experiment

  Freq count (%)   AV   AC   Au   Subscore
  5045 (45.8)      N    L    N    10.0
  4005 (36.4)      N    M    N     8.6
  624 (5.7)        L    L    N     3.9
  408 (3.7)        N    L    S     8.0
  356 (3.2)        L    M    N     3.4
  231 (2.1)        N    M    S     6.8
  186 (1.7)        N    H    N     4.9
  41 (0.4)         L    L    S     3.1
  35 (0.3)         L    H    N     1.9
  28 (0.3)         N    H    S     3.9

Next, we looked at the impact subscores. The experimental data had 21 distinct impact subvectors (of 26 possible). The two most common subvectors comprised over 58% of the data, and the ten most common comprised over 99%. Table 5 shows the experimental results for the ten most common impact subvectors.
The most common has a mid-range score (6.4) and the next two have scores at the high and low ends of the range. This helps explain why impact subscores do not have a strong bias. Table 5 also shows examples of subvectors mapping to the same score.

Table 5. Common impact subvectors in experiment

  Freq count (%)   C   I   A   Subscore
  4108 (37.3)      P   P   P    6.4
  2347 (21.3)      C   C   C   10.0
  1827 (16.6)      N   P   N    2.9
  870 (7.9)        P   N   N    2.9
  747 (6.8)        N   N   P    2.9
  522 (4.7)        N   N   C    6.9
  208 (1.9)        P   P   N    4.9
  139 (1.3)        C   N   N    6.9
  132 (1.2)        N   P   P    4.9
  24 (0.2)         N   C   C    9.2

We were surprised that the impact subvectors showed more diversity than the exploitability subvectors. From our experience with CVSS scoring,