Characterizing Secure Dynamic Web Applications Scalability
What is [miniapi authorizeattribute], and what role does it play in software development?

In software development, data security and access control are critically important. To protect sensitive data and ensure that only authorized users can reach the relevant resources, developers need to put security measures in place. One commonly used technique is [miniapi authorizeattribute]. This article walks through what [miniapi authorizeattribute] does, how it is implemented, and how it is applied in software development.

First, what is [miniapi authorizeattribute]? It is an attribute commonly seen in ASP.NET, used to enforce authorization on Web API or MVC controllers and actions. It lets developers easily add authentication and authorization to an API or MVC controller. By applying [miniapi authorizeattribute], we can specify which conditions a user must satisfy to access an API or MVC controller: being logged in, holding a role, having a permission, and so on.

Concretely, [miniapi authorizeattribute] is a class derived from [System.Web.Http.AuthorizeAttribute]. In MVC, authorization is enforced by applying [miniapi authorizeattribute] to a controller or to an action method. For example, applying it at the class level of a controller ensures that only logged-in users can access any of that controller's action methods.

Next, consider its applications in software development. A typical scenario is protecting access to sensitive data. Many web applications store and process sensitive user data, such as personal identity information or bank account details. To keep this data away from unauthorized users, developers can use [miniapi authorizeattribute] to restrict access to authenticated users only.
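The declarative, attribute-style authorization described above can be sketched in Java with a custom annotation plus a reflective check at dispatch time. This is an illustrative analog, not ASP.NET's actual AuthorizeAttribute; the `RequiresAuth` annotation, `User` record, and `AccountController` class are hypothetical names invented for the sketch.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Set;

public class AuthDemo {
    // Hypothetical analog of an authorize attribute: marks a handler
    // as requiring an authenticated caller with one of the given roles.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface RequiresAuth {
        String[] roles() default {};
    }

    // A minimal caller identity: name, login state, and roles held.
    public record User(String name, boolean loggedIn, Set<String> roles) {}

    public static class AccountController {
        @RequiresAuth(roles = {"admin"})
        public String deleteAccount() { return "deleted"; }

        public String publicInfo() { return "info"; }
    }

    // Dispatcher-side check: before invoking a handler, verify the
    // annotation's requirements against the current user.
    public static boolean isAuthorized(Method handler, User user) {
        RequiresAuth auth = handler.getAnnotation(RequiresAuth.class);
        if (auth == null) return true;               // no annotation: open access
        if (!user.loggedIn()) return false;          // must be authenticated
        if (auth.roles().length == 0) return true;   // any authenticated user
        for (String role : auth.roles())
            if (user.roles().contains(role)) return true;
        return false;
    }

    // Convenience lookup by method name, avoiding checked exceptions.
    public static boolean check(String methodName, User user) {
        try {
            return isAuthorized(AccountController.class.getMethod(methodName), user);
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        User guest = new User("guest", false, Set.of());
        User admin = new User("alice", true, Set.of("admin"));
        System.out.println(check("deleteAccount", guest)); // false
        System.out.println(check("deleteAccount", admin)); // true
    }
}
```

The point mirrored here is the one the article makes: the handler itself stays free of security code, and the access policy is declared once, next to the method it protects.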
WebSecurityCustomizer and fuzzy matching

In the web security field, WebSecurityCustomizer is a tool for customizing a web application's security protections; its fuzzy-matching capability can noticeably improve an application's security. Fuzzy matching here means that, when validating user input, the matcher tolerates a degree of imprecision, avoiding the false positives and false negatives that overly strict matching rules cause. This matters in specific scenarios such as handling user search keywords, checking file types on upload, or validating URL parameters. Through fuzzy matching, WebSecurityCustomizer can validate input data more flexibly, improving both security and the user experience.

To implement fuzzy matching, WebSecurityCustomizer is typically combined with fuzzy-matching techniques such as wildcards, regular expressions, or edit distance. These techniques help developers quickly tailor protection rules, making the application both more secure and more flexible. Used judiciously, fuzzy matching lets developers strike a better balance between security and user experience, raising the application's overall security level.
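Of the matching techniques just listed, edit distance is the easiest to make concrete. The sketch below implements the classic Levenshtein distance and a threshold-based `fuzzyMatch` helper; the helper is a hypothetical illustration of a fuzzy rule check, not an actual WebSecurityCustomizer API.

```java
public class FuzzyMatch {
    // Classic dynamic-programming Levenshtein edit distance: the minimum
    // number of single-character insertions, deletions, or substitutions
    // needed to turn string a into string b.
    public static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = (a.charAt(i - 1) == b.charAt(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Hypothetical rule check: treat the input as matching a blocked
    // keyword if it is within maxEdits edits of it, so a near-miss like
    // "scrlpt" still matches the keyword "script".
    public static boolean fuzzyMatch(String input, String keyword, int maxEdits) {
        return levenshtein(input.toLowerCase(), keyword.toLowerCase()) <= maxEdits;
    }

    public static void main(String[] args) {
        System.out.println(levenshtein("kitten", "sitting")); // 3
        System.out.println(fuzzyMatch("scrlpt", "script", 1)); // true
    }
}
```

The threshold is the trade-off the article describes: a larger `maxEdits` catches more evasive variants (fewer false negatives) at the cost of more false positives.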
Beyond fuzzy matching, WebSecurityCustomizer offers other important features, such as custom rules, real-time monitoring, logging, and security reports. Developers can tailor security rules to their actual needs, monitor the application's security state in real time, and generate detailed security reports that help discover and fix vulnerabilities promptly.

In summary, WebSecurityCustomizer is a powerful tool for customizing web security. Through fuzzy matching and its other features, it can effectively improve a web application's security and flexibility, helping developers meet a variety of security challenges. With the web security landscape growing ever more severe, WebSecurityCustomizer offers strong support for protecting web applications and is worth studying and applying in depth.
Question 1: For web content in the Internet zone of Internet Explorer, specify security settings that disable initializing and scripting of ActiveX controls not marked as safe, and disable downloading of unsigned ActiveX controls.

Answer:
1. Open Internet Explorer.
2. Choose the "Internet Options" command on the "Tools" menu to open the "Internet Properties" dialog box.
3. Select the "Security" tab and click the "Internet" icon.
4. Click the "Custom Level" button to open the "Security Settings" dialog box.
5. Under "Initialize and script ActiveX controls not marked as safe", select the "Disable" option.
6. Under "Download unsigned ActiveX controls", select the "Disable" option.
7. Click the "OK" button to return to the "Internet Properties" dialog box.
8. Click the OK button of the "Internet Properties" dialog box. The operation is complete.
Question 2: In Internet Explorer, add a site to the trusted sites, so that ActiveX controls and plug-ins from trusted sites run only with administrator approval.

Answer:
1. Choose the "Control Panel" command from the "Settings" submenu of the "Start" menu to open the "Control Panel" window.
2. In the "Control Panel" window, click "Network and Internet Connections" to open the "Network and Internet Connections" window.
3. In the "Network and Internet Connections" window, click "Internet Options" to open the "Internet Properties" dialog box.
4. Open the "Security" tab and click the "Trusted sites" icon.
5. Click the "Sites" button to open the "Trusted sites" dialog box.
6. In the text box, enter the site's https:// address and click the "Add" button.
7. Click the "OK" button to return to the "Internet Properties" dialog box.
8. In the "Internet Properties" dialog box, click the "Custom Level" button to open the "Security Settings" dialog box.
9. Under "Run ActiveX controls and plug-ins", select the "Administrator approved" option, then click "OK" to return to the "Internet Properties" dialog box.
10. Click the "OK" button in the "Internet Properties" dialog box. The operation is complete.
How to change the Dynamic Web Module version in Eclipse

Method 1: change the project properties. Right-click the project in Eclipse, choose Properties, open Project Facets, select Dynamic Web Module, and change the version from 3.0 to 2.5. Eclipse may refuse with a message like:

Cannot change version of project facet Dynamic Web Module to 2.5

If so, the version cannot be changed this way; use the second method.

Method 2: edit the configuration file. Locate the project's directory on disk and open its .settings folder. Inside it, find the file org.eclipse.wst.common.project.facet.core.xml, whose contents look roughly like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<faceted-project>
  <runtime name="Apache Tomcat v5.5"/>
  <fixed facet="jst.web"/>
  <fixed facet="jst.java"/>
  <installed facet="jst.java" version="5.0"/>
  <installed facet="jst.web" version="2.5"/>
  <installed facet="wst.jsdt.web" version="1.0"/>
</faceted-project>
```

Simply edit the version attribute of the jst.web facet by hand. Finally, restart Tomcat and the project will work normally.
An upcoming project will use WebSphere, so I set out to familiarize myself with it ahead of time. I installed a small Struts2 project on WebSphere 7.0 to test it. Under Tomcat the project runs without any problem, but after installing the WAR packaged from Tomcat into WebSphere, only the HTML pages were reachable; the Struts2 requests and the JSP pages all failed with "You are not authorized to view this page." The log file shows the following error at startup:

```
[10-8-18 15:17:41:079 CST] 00000010 webapp E com.ibm.ws.webcontainer.webapp.WebApp
    initializeExtensionProcessors SRVE0280E: Extension processor failed to initialize
    in factory [com.ibm.ws.jsp.webcontainerext.ws.WASJSPExtensionFactory@2bec2bec]:
    java.lang.ClassCastException: com.sun.faces.application.WebappLifecycleListener
    incompatible with java.util.EventListener
  at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionProcessor.<init>(AbstractJSPExtensionProcessor.java:250)
  at com.ibm.ws.jsp.webcontainerext.ws.WASJSPExtensionProcessor.<init>(WASJSPExtensionProcessor.java:70)
  at com.ibm.ws.jsp.webcontainerext.ws.WASJSPExtensionFactory.createProcessor(WASJSPExtensionFactory.java:251)
  at com.ibm.ws.jsp.webcontainerext.AbstractJSPExtensionFactory.createExtensionProcessor(AbstractJSPExtensionFactory.java:75)
  at com.ibm.ws.webcontainer.webapp.WebApp.initializeExtensionProcessors(WebApp.java:1313)
  ...
[10-8-18 15:17:41:562 CST] 00000010 config I Initializing Sun's JavaServer Faces
    implementation (1.2_07-b03-FCS) for context '/cc'
[10-8-18 15:17:44:579 CST] 00000010 webapp W com.ibm.ws.webcontainer.webapp.WebApp
    initializeTargetMappings SRVE0269W: No extension processor found to handle JSPs.
```
Restoring a Cisco ASA firewall after its flash has been wiped

With its flash erased, the ASA firewall reboots endlessly, showing:

Use BREAK or ESC to interrupt boot.
Use SPACE to begin boot immediately.

Press ESC to enter monitor (ROMMON) mode. The monitor-mode display is much the same as on a switch or router, and the command format differs little, so with a small adjustment recovery is not hard:

```
rommon #1> ?
Variables: Use "sync" to store in NVRAM
ADDRESS=     local IP address
CONFIG=      config file path/name
GATEWAY=     gateway IP address
IMAGE=       image file path/name
LINKTIMEOUT= Link UP timeout (seconds)
PKTTIMEOUT=  packet timeout (seconds)
PORT=        ethernet interface port
RETRY=       Packet Retry Count (Ping/TFTP)
SERVER=      server IP address
VLAN=        enable/disable DOT1Q tagging on the selected port
rommon #2> ADDRESS=192.168.0.2     (the firewall acts as the TFTP client)
rommon #3> GATEWAY=192.168.0.1     (the gateway)
rommon #4> IMAGE=asa802-k8.bin     (name of the IOS image to load)
rommon #5> SERVER=192.168.0.1      (the TFTP server, i.e. your PC)
rommon #6> sync                    (save)
Updating NVRAM Parameters...
rommon #7> ping 192.168.0.1
Sending 20, 100-byte ICMP Echoes to 192.168.0.1, timeout is 4 seconds:
Success rate is 95 percent (19/20)
```

The ping confirms the link is up; then start the TFTP server software. (For the record, I tested on an ASA 5505, connected via port E0/0.)
Understanding and applying SECURITY_ATTRIBUTES

In computer programming, SECURITY_ATTRIBUTES is an important concept, particularly in the Windows operating system environment. It is used mainly to control the security characteristics of objects such as files, processes, and threads, helping keep the system secure and stable. The following walks through its use step by step.

1. Understanding SECURITY_ATTRIBUTES

SECURITY_ATTRIBUTES is a structure used widely across the Windows API. It contains three main members: nLength, bInheritHandle, and lpSecurityDescriptor.

1. nLength: an integer variable that stores the size of the SECURITY_ATTRIBUTES structure, so that API functions can process the structure correctly.
2. bInheritHandle: a boolean variable that decides whether child processes may inherit the handle. If set to TRUE, child processes can inherit it; if set to FALSE, they cannot.
3. lpSecurityDescriptor: a pointer to a SECURITY_DESCRIPTOR structure. The SECURITY_DESCRIPTOR structure defines the object's security characteristics, including its owner, group, access control list (ACL), and system access control list (SACL).

2. Creating a SECURITY_ATTRIBUTES

Before using SECURITY_ATTRIBUTES, first create and fill in the structure. A simple example:

```cpp
SECURITY_ATTRIBUTES sa;
sa.nLength = sizeof(SECURITY_ATTRIBUTES);
sa.bInheritHandle = TRUE;          // allow child processes to inherit the handle
sa.lpSecurityDescriptor = NULL;    // use the default security descriptor
```

In this example, we create a SECURITY_ATTRIBUTES structure and set its nLength member to sizeof(SECURITY_ATTRIBUTES).
The Q400EM is a high-performance, research-grade thermomechanical analyzer (TMA), with unmatched flexibility in operating modes, test probes, fixtures, and available signals. For standard TMA applications, the Q400 delivers the same performance and reliability. It is ideal for research, teaching, and quality control applications, with performance equivalent to competitive research models.

Specification                           Q400EM                               Q400
Temperature Range (max)                 -150 to 1,000°C                      -150 to 1,000°C
Temperature Precision                   +/- 1°C                              +/- 1°C
Furnace Cool Down Time                  <10 min from 600°C to 50°C           <10 min from 600°C to 50°C (air cooling)
Maximum Sample Size - solid             26 mm (L) x 10 mm (D)                26 mm (L) x 10 mm (D)
Maximum Sample Size - film/fiber        26 mm (L) x 0.5 mm (T) x 4.7 mm (W)  26 mm (L) x 0.5 mm (T) x 4.7 mm (W)
Measurement Precision                   +/- 0.1 %                            +/- 0.1 %
Sensitivity                             15 nm                                15 nm
Dynamic Baseline Drift                  <1 µm (-100 to 500°C)                <1 µm (-100 to 500°C)
Force Range                             0.001 to 1 N                         0.001 to 1 N
Force Resolution                        0.001 N                              0.001 N
Frequency                               0.01 to 2 Hz                         Not Available
Mass Flow Control                       Optional                             Optional
Atmosphere (static or controlled flow)  Inert, Oxidizing, or Reactive Gases  Inert, Oxidizing, or Reactive Gases

Operational Modes                       Q400EM                               Q400
Standard                                Included                             Included
Stress/Strain                           Included                             Not Available
Creep                                   Included                             Not Available
Stress Relaxation                       Included                             Not Available
Dynamic TMA (DTMA)                      Included                             Not Available
Modulated TMA™ (MTMA™)                  Included                             Not Available

Note: The Q400 can be field upgraded to the Q400EM.

1. Furnace
The Q400 features a rugged and reliable furnace. Its customized electronics provide excellent heating rate control and rapid response over a wide temperature range. Furnace raising and lowering is software controlled. The design ensures long life and performance consistency. The excellent heating rate control provides for superior baseline stability and improved sensitivity, while the rapid response permits Modulated TMA™ operation.
Furnace movement provides operational convenience and easy access to the sample chamber.

2. Sample Chamber
Located in the furnace core, the easily accessed chamber provides complete temperature and atmosphere control for sample analysis. Purge gas regulation is provided by an optional digital mass flow controller. The open design simplifies installation of available probes (see Modes of Deformation), sample mounting, and thermocouple placement. Benefits include enhanced data quality, ease of use, and productivity; data precision is enhanced by mass flow control of the purge gas.

3. Force Motor
A non-contact motor provides a precisely controlled, friction-free, calibrated force to the sample via the measurement probe or fixture. The force is programmable from 0.001 to 1 N, and can be increased to 2 N by addition of weights to a special tray. A precision sine wave generator provides a set of ten individual frequencies for use in dynamic experiments. Benefits: the motor smoothly generates the accurate and precise static, ramped, or oscillatory dynamic force necessary for quality measurements in all modes of operation. The choice of frequencies allows optimization of dynamic TMA (DTMA) experiments in compression, 3-point bending, or tension modes of deformation.

4. Linear Variable Differential Transducer
The heart of the Q400 TMA sample measurement system is the precision, moveable-core, linear variable differential transducer (LVDT). Benefits: it generates an accurate output signal that is directly proportional to a sample dimension change. Its precise and reliable response over a wide temperature range (-150 to 1,000°C) makes for reproducible TMA results.
Its location below the furnace protects it from unwanted temperature effects and ensures stable baseline performance.

Modes of Deformation

Expansion
Expansion measurements determine a material's coefficient of thermal expansion (CTE), glass transition temperature (Tg), and compression modulus. A flat-tipped standard expansion probe (Figure 1) is placed on the sample (a small static force may be applied), and the sample is subjected to a temperature program. Probe movement records sample expansion or contraction. This mode is used with most solid samples. The larger surface area of the macro-expansion probe (Figure 2) better facilitates analysis of soft or irregular samples, powders, and films.

Penetration
Penetration measurements use an extended tip probe to focus the drive force on a small area of the sample surface (Figure 3). This provides precise measurement of Tg, softening, and melting behavior. It is valuable for characterizing coatings without their removal from a substrate. The probe operates like the expansion probe, but under a larger applied force. The hemispherical probe (Figure 4) is an alternate penetration probe for softening point measurements in solids.

Compression
In this mode, the sample is subjected to either a static, linear ramp, or dynamic oscillatory force, while under a defined temperature program and atmosphere. Sample displacement (strain) is recorded in either expansion / penetration experiments to measure intrinsic material properties, or dynamic tests to determine viscoelastic parameters (DTMA), to detect thermal events, and to separate overlapping transitions (MTMA™).

3-Point Bending
In this bending deformation (also known as flexure), the sample is supported at both ends on a two-point, quartz anvil atop the stage (Figure 7). A fixed static force is applied vertically to the sample at its center, via a wedge-shaped, quartz probe. Material properties are determined from the force and the measured probe deflection.
This mode is considered to represent "pure" deformation, since clamping effects are eliminated. It is primarily used to determine bending properties of stiff materials (e.g., composites), and for distortion temperature measurements. Dynamic (DTMA) measurements are also available with the Q400EM, where a special low-friction metallic anvil replaces the quartz version.

Tension
Tension studies of the stress/strain properties of films and fibers are performed using a Film/Fiber probe assembly (Figure 5). An alignment fixture (Figure 6) permits secure and reproducible sample positioning in the clamps. The clamped sample is placed in tension between the fixed and moveable sections of the probe assembly. Application of a fixed force is used to generate stress/strain and modulus information. Additional measurements include Tg, softening temperatures, cure, and cross-link density. Dynamic tests (e.g., DTMA, MTMA™) in tension can be performed to determine viscoelastic parameters (e.g., E', E'', tan δ), and to separate overlapping transitions.

Specialty Probe / Fixture Kits
Additional sample measurement probes and fixtures are available for use with both the Q400 and Q400EM in specialty TMA applications. These include:
- Dilatometer Probe Kit - for use in volume expansion coefficient measurements
- Parallel Plate Rheometer - for the measurement of low shear viscosity of materials (10 to 10^7 Pa.s range) under a fixed static force

The expansion, macro-expansion, and penetration probes are supplied with the Q400. These probes, plus the flexure probe and the low-friction bending fixture, are included with the Q400EM module. Data analysis programs relevant to each of the measurements described are provided in our Thermal Advantage™ for Q Series™ software.

TMA measures material deformation under controlled conditions of force, atmosphere, time, and temperature. Force can be applied in compression, flexure, or tension modes using probes previously described.
TMA measures intrinsic material properties (e.g., expansion coefficient, glass transition temperature, Young's modulus), plus processing / product performance parameters (e.g., softening points). These measurements have wide applicability, and can be performed by the Q400/Q400EM.

TMA can also measure polymer viscoelastic properties using transient (e.g., creep, stress relaxation) or dynamic tests. These require the Q400EM module. In creep, a known stress is applied to the sample, and its deformation is monitored. After a period, the stress is removed, and strain recovery is recorded. In stress relaxation, a fixed strain is applied, and stress decay is monitored.

In Dynamic TMA (DTMA), a known sinusoidal stress and linear temperature ramp are applied to the sample, and the resulting sinusoidal strain and sine wave phase difference (δ) are measured. From this data, storage modulus (E'), loss modulus (E''), and tan δ (E''/E') are calculated as functions of temperature, time, or stress.

In Modulated TMA™ (MTMA™), the sample experiences the combined effects of a linear ramp and a sinusoidal temperature of fixed amplitude and period. The net signals, after Fourier transformation of the raw data, are total displacement and change in thermal expansion coefficient. Both can be resolved into their reversing and non-reversing component signals. The reversing signals contain events attributable to dimension changes, and are useful in detecting related events (e.g., Tg).
The non-reversing signals contain events that relate to time-dependent kinetic processes (e.g., stress relaxation).

The Q400 and Q400EM operating modes permit multiple material property measurements. The Q400 features the Standard mode, while the Q400EM additionally offers Stress/Strain, Creep, Stress Relaxation, Dynamic TMA, and Modulated TMA™ modes.

Standard Mode (Q400/Q400EM)
Force is constant, and displacement is monitored under a linear temperature ramp; this provides intrinsic property measurements. Alternatively, strain is constant, and the force required to maintain it is monitored under a temperature ramp, permitting assessment of shrinkage forces in films/fibers. Force may also be ramped, and strain measured at constant temperature, to generate force/displacement plots and modulus information.

Stress/Strain Mode (Q400EM)
Stress or strain is ramped, and the resulting strain or stress is measured at constant temperature. Both provide stress/strain plots and related modulus information.

Creep / Stress Relaxation Modes (Q400EM)
In Creep, stress is held constant, and strain is monitored. In Stress Relaxation, strain is held constant, and stress decay is monitored. Both are transient tests used to assess material deformation and recovery properties.

Dynamic TMA Mode (Q400EM)
A sinusoidal force (stress) is applied during a temperature ramp. Analysis of the resulting strain and phase data provides viscoelastic property parameters (e.g., E', E'', tan δ).

Modulated TMA Mode (Q400EM)
Temperature is programmed linearly, and simultaneously modulated at constant stress, to generate signals relating to total displacement, CTE, and their reversing and non-reversing components.
These permit detection of thermal transitions, and separation of overlapping events (e.g., Tg and stress relaxation).

Intrinsic and Product Property Measurements
Expansion and penetration probe measurements of the Tg and softening point of a synthetic rubber, using a temperature ramp at constant force, illustrate these modes. The large CTE changes in the expansion plot indicate the transition temperatures. In penetration, they may be detected by the sharp movement of the loaded probe into the changing material structure.

Accurate Coefficient of Thermal Expansion (CTE) Measurements
Figure 11 demonstrates the use of the expansion probe to accurately measure small CTE changes in an aluminum sample over a 200°C temperature range (45.0 to 230.0°C at 5°C/min under N2; at 127.3°C the at-point method gives α = 25.8 µm/m·°C, the point-to-point method α = 27.6 µm/m·°C, and the average method α = 26.8 µm/m·°C). Advantage™ software permits analysis of the curve slope using an "at point", "straight line", or "best fit" method to compute the CTE (α) at a selected temperature, or over a range.

Material Performance and Selection
An example is a 3-point bending mode (flexure probe) experiment on a polyvinyl chloride (PVC) sample, using ASTM International Test Method E2092 to determine the distortion temperature (here 71.24°C, at a deflection of -17.48 µm under a 78.48 mN force). This test specifies the temperature at which a sample of defined dimensions produces a certain deflection under a given force.
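The point-to-point CTE analysis shown in Figure 11 reduces to α = (ΔL / L0) / ΔT. A small numeric sketch, using illustrative values rather than the brochure's raw data:

```java
public class CteDemo {
    // Point-to-point CTE: alpha = (dL / L0) / dT, reported in
    // micrometers per meter per degree C (the brochure's units).
    // dLmicron: dimension change in um over the interval;
    // L0mm: initial sample length in mm; t1, t2: interval in degrees C.
    public static double ctePointToPoint(double dLmicron, double L0mm,
                                         double t1, double t2) {
        double L0micron = L0mm * 1000.0;                  // mm -> um
        double strainPerDeg = (dLmicron / L0micron) / (t2 - t1);
        return strainPerDeg * 1.0e6;                      // per-unit -> um/m
    }

    public static void main(String[] args) {
        // Illustrative: a 10 mm sample expanding 50 um between 45 and 230 C
        // yields roughly 27 um/m per degree C, the order of magnitude the
        // brochure reports for aluminum.
        System.out.println(ctePointToPoint(50.0, 10.0, 45.0, 230.0));
    }
}
```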
It has long been used for predicting material performance.

Multilayer Film Analysis
Figure 13 shows a compression mode analysis, using a penetration probe, of a double-layer PE / PET film sample supported on a metal substrate. The sample temperature was linearly ramped from ambient to 275°C at 5°C/min. The plot shows probe penetrations of the PE layer (93.22 µm) at 102°C, and the PET layer (14.78 µm) at 257°C, respectively.

Film Property Testing
A classic isostrain experiment, in the tension mode, on a food wrapping film: the film was strained to 20% at room temperature for 5 minutes, cooled to -50°C and held for 5 more minutes, then heated at 5°C/min to 40°C. The plot shows the force variation required to maintain a set strain in the film. The test simulates its use from the freezer to the microwave.

Film Tensile Testing
Figure 15 displays a strain ramp experiment, at a constant temperature, on a proprietary film in tension. The plot shows an extensive region where stress and strain are linearly related, and over which a tensile modulus can be directly determined. The results show the ability of the Q400EM to function as a mini tensile tester for films and fibers.

Fiber Stress/Strain Measurements
Stress/strain measurements are widely used to assess and compare materials.
The different regions of stress/strain behavior appear in a polyamide fiber (25 µm) in tension when subjected to a force ramp at a constant temperature. The fiber undergoes instantaneous deformation, retardation, linear stress/strain response, and yield elongation. Other parameters (e.g., yield stress; Young's modulus) can be determined.

Thermal Stress Analysis of Fibers
A tension mode experiment, using a temperature ramp at a constant strain (1%), performs a stress analysis on a polyolefin fiber, as received and after cold drawing. The plot shows the forces needed to maintain the set strain as a function of temperature. The data has been correlated with key fiber industry processing parameters, such as shrink force, draw temperature, draw ratio, elongation at break, and knot strength.

Creep Analysis
Creep tests help in materials selection for end-uses where stress changes are anticipated. Figure 18 illustrates an ambient-temperature creep study on a polyethylene film in tension. It reveals the instantaneous deformation, retardation, and linear regions of strain response to the set stress, plus its recovery with time on stress removal. The data can also be plotted as compliance, and recoverable compliance, versus time.

Stress Relaxation Analysis
A stress relaxation test in tension on the same polyolefin film used for the creep study: a known strain is applied to the film and maintained, while its change in stress is monitored. The plot shows a typical decay in the stress relaxation modulus.
Such tests also help engineers design materials for end uses where changes in deformation can be expected.

Viscoelastic Property Determination - Dynamic TMA
In a dynamic test, a semi-crystalline polyethylene terephthalate (PET) film in tension is subjected to a fixed sinusoidal stress during a linear temperature ramp. The resulting strain and phase data are used to calculate the material's viscoelastic properties (E', E'', and tan δ). The plotted data shows dramatic modulus changes as the film is heated through its glass transition temperature.

Separating Overlapping Transitions - Modulated™ TMA
Figure 21 shows an MTMA™ study to determine the Tg of a printed circuit board (PCB). The signals plotted are the total dimension change, plus its reversing and non-reversing components. The total signal is identical to that from standard TMA, but does not uniquely define the Tg. The component signals, however, clearly separate the actual Tg from the stress relaxation event induced by non-optimum processing of the PCB.

Instrument control software:
- conducts experiments and simultaneously analyzes data
- operates up to 8 modules simultaneously
- Wizards - guides and prompts in setting up experiments
- provides a real-time display of the progress of the experiment
- Autoqueuing - permits pre-programmed set-up of planned experiments
- Autoanalysis - permits pre-programmed data analysis of planned experiments
- provides extensive, context-sensitive assistance
- terminates a test upon attaining a specified value (e.g., CTE)

Universal Analysis data analysis:
- analyzes data from all TA Instruments modules
- provides easy one-plot analysis of large and small events
- analyzes data "as it arrives"
- reporting within UA 2000 using Microsoft Word™ & Excel™ templates
- quick retrieval of previously analyzed data files

Innovative Engineering
TA Instruments is the recognized leader for supplying innovative technology, investing twice the industry average in research and development.
Our new Q Series™ Thermal Analysis modules are the industry standard. The Q400 TMA provides innovative technology suitable for research as well as QC laboratories. The Q400EM includes Dynamic TMA and also Modulated TMA™, a technique unavailable from other manufacturers.

Technical Support
Customers prefer TA Instruments because of our reputation for after-sales support. Our worldwide technical support staff is the largest and most experienced in the industry. They are accessible daily by telephone, email, or via our website. Multiple training opportunities are available, including on-site training, seminars in our application labs around the world, and convenient web-based courses.

Sales and Service
We pride ourselves in the technical competence and professionalism of our sales force, whose only business is thermal analysis and rheology. TA Instruments is recognized worldwide for its prompt, courteous, and knowledgeable service staff. Their specialized knowledge and experience are major reasons why current customers increasingly endorse our company and products to their worldwide colleagues.

Quality Products
All thermal analyzers and rheometers are manufactured to ISO 9002 procedures in our New Castle, DE (USA) or our Leatherhead, UK facilities. Innovative flow manufacturing procedures and a motivated, highly skilled work force ensure high quality products with industry-leading delivery times.
Checkmarx CxSuite User Guide
January 2015

Contents
1 The CxSuite Web Interface
  1.1 Accessing the Web Interface
  1.2 Understanding the Web Interface
    1.2.1 Menu Items
    1.2.2 Management Tables
2 Creating and Managing Projects
  2.1 Creating and Configuring a Project
  2.2 Advanced Actions
    2.2.1 Configuring an Email Action
    2.2.2 Configuring an Executable Action
  2.3 Viewing Project Details
  2.4 Managing Queries
    2.4.1 Importing and Exporting Queries
    2.4.2 Managing Query Rule Sets
  2.5 Managing Projects and Running Scans
3 Queue
  3.1 Viewing the Queue
  3.2 Queue Actions
4 Scan Results
  4.1 Viewing the Scan Results Table
  4.2 Scan Result Actions
  4.3 Understanding and Navigating Scan Results
  4.4 Scan Result Examples
  4.5 Generating Scan Result Reports
  4.6 Comparing Scan Result Sets
5 User Management
  5.1 Roles and Permissions Overview
  5.2 Creating and Managing Users
    5.2.1 Creating User Accounts in the Web Interface
    5.2.2 Creating User Accounts via User Registration
    5.2.3 Managing Existing Users
  5.3 Managing the Organizational Hierarchy
6 Management and Analytics
  6.1 System Dashboard
  6.2 Data Analysis
7 CxConsole: the CxSuite CLI
  7.1 Overview
  7.2 Running a Scan Command

1 The CxSuite Web Interface

CxSuite provides an intuitive web interface for managing and analyzing code-scanning projects and the CxSuite system.
This section covers:
• Accessing the Web Interface
• Understanding the Web Interface

1.1 Accessing the Web Interface

Use one of the following two methods to access the CxSuite web interface:
• To access CxSuite locally (from the server host), simply use the link in the Checkmarx folder of the Windows Start menu.
• To access CxSuite from any other computer, make sure that the routing and firewall configuration in your organization allows the client computer to reach the CxSuite server.
Characterizing Secure Dynamic Web Applications Scalability

Jordi Guitart, Vicenç Beltran, David Carrera, Jordi Torres and Eduard Ayguadé
European Center for Parallelism of Barcelona (CEPBA)
Computer Architecture Department - Technical University of Catalonia
C/ Jordi Girona 1-3, Campus Nord UPC, Mòdul C6, E-08034 Barcelona (Spain)
{jguitart, vbeltran, dcarrera, torres, eduard}@

Abstract

Security in the access to web contents and the interaction with web sites is becoming one of the most important issues on the Internet. Servers need to provide certain levels of security so that users feel comfortable when running the applications that provide the services they require. HTTP over SSL is the most used solution, providing mutual authentication between the two interacting parties. The SSL protocol does not introduce complexity in web applications, but it does increase the computational demand on the server, reducing its capacity to serve a large number of clients and increasing the time to serve them. In order to compensate for the degradation in the quality of service, the server needs to be upgraded with additional resources, mainly processors and memory.

In this paper we analyze the scalability of servers that run secure dynamic web applications. We analyze how the server behaves when it is stressed with different numbers of clients and how the quality of service degrades. We perform a detailed analysis of the server behavior and analyze the impact of adding more processors to the system that runs the server. The analysis is done using a fine-grained analysis framework that considers all levels in the application server execution (i.e. application, server, JVM and OS kernel). The RUBiS auction site benchmark is used to stress a Tomcat application server running on a commodity 4-way multiprocessor Intel platform with Linux.

1. Introduction

Current web sites have to face two issues that directly affect site scalability.
First, the web community is growing day after day, exponentially increasing the load that sites must support to satisfy all clients' requests. Second, dynamic web content is becoming popular on current sites. At the same time, all information that is confidential or has market value must be carefully protected when transmitted over the open Internet. Security between network nodes over the Internet is traditionally provided using HTTPS [32]. With HTTPS, which is based on using HTTP over SSL (Secure Socket Layer [15]), you can perform mutual authentication of both the sender and receiver of messages and ensure message confidentiality. This process involves X.509 certificates that are configured on both sides of the connection. This widespread diffusion of dynamic web content and SSL increases the performance demand on the application servers that host the sites. Due to these two facts, the scalability of these application servers has become a crucial issue in order to support the maximum number of concurrent clients demanding secure dynamic web content.

Characterizing application server scalability is more complex than measuring the application server performance with different numbers of clients and determining the load that saturates the server. A complete characterization must also supply the causes of this saturation, giving the server administrator the opportunity and the information to improve server scalability by avoiding its saturation. For this reason, this characterization requires powerful analysis tools that allow an in-depth analysis of the application server behavior and its interaction with the other system elements (including distributed clients, a database server, etc.).
These tools must support and consider all the levels involved in the execution of web applications (operating system, Java Virtual Machine, application server and application) in order to provide meaningful performance information to administrators, because the origin of performance problems can reside in any of these levels or in their interaction.

A complete scalability characterization must also consider another important issue: the scalability relative to the resources. The analysis performed to determine the causes of server saturation can reveal that some resource is a bottleneck for server scalability. In this case, a good option can be to add more resources of this type and evaluate the effect of this addition on server behavior, in order to determine the new causes of server saturation. On the other hand, even if no resource has been detected as a bottleneck for server scalability, the analysis of server behavior when adding more resources can still be performed to verify whether the server saturation problem remains unresolved.

In this paper we present a characterization of secure dynamic web applications scalability divided into two parts. First, we measure the vertical scalability of the server when running with different numbers of processors, determining the impact of adding more processors on server saturation. Second, we perform a detailed analysis of the server behavior using a performance analysis framework, in order to determine the causes of the server saturation when running with different numbers of processors. This framework considers all levels involved in the application server execution, allowing a fine-grained analysis of dynamic web applications.

The rest of the paper is organized as follows: Section 2 introduces dynamic web applications. Section 3 introduces the SSL protocol used to provide security capabilities when accessing web content. Section 4 describes our proposal for analyzing the scalability of secure dynamic web applications.
Section 5 describes the experimental environment used in our evaluation. Section 6 presents our evaluation of secure dynamic web applications scalability. Section 7 presents the related work and, finally, Section 8 presents the conclusions of this paper.

2. Dynamic web applications

Dynamic web applications are a case of multi-tier application and are mainly composed of a client tier and a server tier, which in turn usually consists of a front-end web server, an application server and a back-end database. The client tier is responsible for interacting with application users and for generating the requests to be attended by the server. The server tier implements the logic of the application and is responsible for serving the user-generated requests.

When the client sends the web server an HTTP request for dynamic content, the web server forwards the request to the application server (as understood in this paper, a web server only serves static content), which is the dynamic content server. The application server executes the corresponding code, which may need to access the database to generate the response. The application server formats and assembles the results into an HTML page, which is returned as an HTTP response to the client.

The implementation of the application logic in the application server may take various forms, including PHP [31], Microsoft Active Server Pages [26], Java Servlets [34] and Enterprise Java Beans (EJB) [33]. This study focuses on Java Servlets, but the same methodology can be applied to the other mechanisms for generating dynamic web content in order to characterize their scalability.

A servlet is a Java class used to extend the capabilities of servers that host applications accessed via a request-response programming model. Although servlets can respond to any type of request, they are commonly used to extend the applications hosted by web servers.
For such applications, Java Servlet technology defines HTTP-specific servlet classes. Servlets access the database explicitly, using the standard JDBC interface, which is supported by all major databases. Servlets can use all the features of Java. In particular, they can use the Java built-in synchronization mechanisms to perform locking operations.

3. SSL protocol

The SSL protocol provides communications privacy over the Internet. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. To achieve these objectives it uses a combination of public-key and private-key cryptography algorithms and digital certificates (X.509).

The SSL protocol does not introduce a new degree of complexity in the structure of web applications because it works almost transparently on top of the socket layer. However, SSL remarkably increases the computation time necessary to serve a connection, due to the use of cryptography to achieve its objectives. This increment has a noticeable impact on server performance, which can be appreciated in Figure 1. This figure compares the throughput obtained by the Tomcat application server, configured as described in Section 5, using secure connections versus using normal connections. Notice that the maximum throughput obtained when using SSL connections is 72 replies/s and the server scales only up to 200 clients. On the other hand, when using normal connections the throughput is considerably higher (550 replies/s) and the server can scale up to 1700 clients. Finally, notice also that when the server is saturated, if it is attending normal connections it can maintain the throughput as new clients arrive, but if it is attending SSL connections it cannot maintain the throughput and the performance is degraded.

Figure 1. Tomcat scalability when serving secure vs. non-secure connections
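A quick sanity check on the throughput gap just described (a sketch using only the figures quoted in the text, not recomputed measurements):

```java
// Back-of-the-envelope comparison of the throughput figures quoted
// in the discussion of Figure 1 (values taken from the text above).
public class SslOverhead {
    // Ratio between plain-HTTP and HTTPS peak throughput.
    public static double throughputRatio(double plainRps, double sslRps) {
        return plainRps / sslRps;
    }

    public static void main(String[] args) {
        double ratio = throughputRatio(550.0, 72.0);   // replies/s
        double clientRatio = 1700.0 / 200.0;           // clients before saturation
        System.out.printf("throughput ratio: %.1fx, client ratio: %.1fx%n",
                          ratio, clientRatio);
    }
}
```

With the reported numbers, SSL cuts the peak throughput by roughly 7.6x and the number of clients the server scales to by 8.5x, which anticipates the handshake cost analysis that follows.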
More information about the impact of using SSL on server performance can be found in [9].

The SSL protocol fundamentally has two phases of operation: the SSL handshake and the SSL record protocol. We will give an overview of the SSL handshake phase, which is responsible for most of the computation time required when using SSL. A detailed description of the whole protocol can be found in RFC 2246 [14].

The SSL handshake allows the server to authenticate itself to the client using public-key techniques such as RSA, and then allows the client and the server to cooperate in the creation of the symmetric keys used for rapid encryption, decryption, and tamper detection during the session that follows. Optionally, the handshake also allows the client to authenticate itself to the server.

Two different SSL handshake types can be distinguished: the full SSL handshake and the resumed SSL handshake. The full SSL handshake is negotiated when a client establishes a new SSL connection with the server, and requires the complete negotiation of the SSL handshake, which includes parts that need a considerable amount of computation time to be accomplished. We have measured the computational demand of a full SSL handshake on a 1.4 GHz Xeon machine to be around 175 ms.

The resumed SSL handshake is negotiated when a client establishes a new HTTP connection with the server but reuses an existing SSL connection. As the SSL session ID is reused, part of the SSL handshake negotiation can be avoided, considerably reducing the computational demand of the handshake. We have measured the computational demand of a resumed SSL handshake on a 1.4 GHz Xeon machine to be around 2 ms. Notice the big difference between negotiating a full SSL handshake and a resumed one (175 ms versus 2 ms).

4. Servers scalability

The scalability of an application server is defined as the ability to maintain a site's availability, reliability, and performance as the amount of simultaneous web traffic, or load, hitting the application server increases [19].

Given this definition, the scalability of an application server can be represented by measuring the performance of the application server while the load increases. With this representation, the load that provokes the saturation of the server can be detected. We consider that the application server is saturated when it is unable to maintain the site's availability, reliability, and performance (i.e. the server does not scale). As derived from the definition, when the server is saturated, the performance degrades (lower throughput and higher response time) and the number of refused client requests increases.

At this point, two questions should occur to the reader (and of course, to the application server administrator). First, the load that provokes the saturation of the server has been detected, but why does this load cause the server performance to degrade? In other words, in which parts of the system (CPU, database, network, etc.) will a request be spending most of its execution time at the saturation points? In order to answer this question, we propose to analyze the application server behavior using a performance analysis framework that considers all levels involved in the application server execution (operating system, JVM, application server and application), allowing a fine-grained analysis of dynamic web applications.

Second, the application server scalability with given resources has been measured, but how would the addition of more resources affect the application server scalability? This adds a new dimension to application server scalability: the measurement of the scalability relative to the resources.
This scaling can be done in two different ways: vertically and horizontally.

Vertical scalability (also called scaling up) is achieved by adding capacity (memory, processors, etc.) to an existing application server and requires few to no changes to the architecture of the system. Vertical scaling increases the performance (in theory) and the manageability of the system, but decreases its reliability and availability (a single failure is more likely to lead to system failure). We consider this kind of scalability relative to the resources in this paper.

Horizontal scalability (also called scaling out) is achieved by adding new application servers to the system, increasing the complexity of the system. Horizontal scaling increases the reliability, the availability and the performance (depending on the load balancing), but decreases the manageability (there are more elements in the system).

The analysis of the application server behavior will provide us with hints to answer the question of how the addition of more resources would affect the application server scalability. If we detect that some resource is a bottleneck for the application server performance, this encourages the addition of new resources of this type (vertical scaling), the measurement of the scalability with this new configuration and the analysis of the application server behavior with the performance analysis framework, to determine the improvement in the server scalability and the new causes of server saturation.

On the other hand, if we upgrade a resource that is not a bottleneck for the application server performance, we can verify with the performance analysis framework that the scalability is not improved and the causes of server performance degradation remain unresolved. This observation explains why vertical scaling improves performance only in theory, depending on whether the added resource is a bottleneck for server performance or not.
This observation also motivates the analysis of the application server behavior in order to detect the causes of saturation before adding new resources.

5. Experimental environment

5.1 Tomcat servlet container

We use Tomcat v5.0.19 [22] as the application server. Tomcat is an open-source servlet container developed under the Apache license. Its primary goal is to serve as a reference implementation of the Sun Servlet and JSP specifications, and to be a quality production servlet container too. Tomcat can work as a standalone server (serving both static and dynamic web content) or as a helper for a web server (serving only dynamic web content). In this paper we use Tomcat as a standalone server.

Tomcat follows a connection service schema where, at a given time, one thread (an HttpProcessor) is responsible for accepting a new incoming connection on the server listening port and assigning a socket structure to it. From this point, this HttpProcessor will be responsible for attending and serving the requests received through the persistent connection established with the client, while another HttpProcessor will continue accepting new connections. HttpProcessors are commonly chosen from a pool of threads in order to avoid thread creation overheads.

Persistent connections are a feature of HTTP 1.1 that allows serving different requests over the same connection, saving a lot of work and time for the web server, the client and the network, considering that establishing and tearing down HTTP connections is an expensive operation.

The pattern of a persistent connection in Tomcat is shown in Figure 2. In this example, three different requests are served through the same connection. The rest of the time (connection (no request)) the server keeps the connection open waiting for another client request. A connection timeout is programmed to close the connection if no more requests are received.
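The HttpProcessor pool scheme described above can be approximated with a plain java.util.concurrent sketch (hypothetical names, not Tomcat's actual code): a fixed pool of worker threads, each one serving all requests of one persistent connection, mirroring the one-HttpProcessor-per-connection model.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of Tomcat's HttpProcessor pool: a fixed pool of
// worker threads, each serving every request of one (persistent)
// connection. Connection objects here are just integers standing in
// for accepted sockets; this is an illustration, not Tomcat code.
public class ProcessorPool {
    static final AtomicInteger served = new AtomicInteger();

    // One "HttpProcessor": serves every request arriving on its connection.
    static void handleConnection(int connectionId, int requests) {
        for (int r = 0; r < requests; r++) {
            served.incrementAndGet(); // stands in for a servlet invocation
        }
    }

    public static int run(int connections, int poolSize, int requestsPerConn)
            throws InterruptedException {
        served.set(0);
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        for (int c = 0; c < connections; c++) {
            final int id = c;
            pool.submit(() -> handleConnection(id, requestsPerConn));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return served.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 workers, matching the paper's maximum-HttpProcessors setting.
        System.out.println(run(200, 100, 3)); // prints 600
    }
}
```

A pool avoids the per-connection thread creation overhead the paper mentions, but it also caps concurrency: once all 100 workers are bound to open persistent connections, new connections must wait, which is relevant to the saturation behavior analyzed in Section 6.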
Notice that within every request the service (the execution of the servlet implementing the demanded request) is distinguished from the request (no service), which is the pre- and post-processing that Tomcat requires to invoke the servlet that implements the demanded request.

Figure 2. Tomcat persistent connection pattern

Figure 3 shows the pattern of a secure persistent connection in Tomcat. Notice that when using SSL the pattern of the HTTP persistent connection is maintained, but the underlying SSL connection supporting this persistent HTTP connection must be established first by negotiating an SSL handshake, which can be full or resumed depending on whether an SSL session ID is reused. For instance, if a client must establish a new HTTP connection because its current HTTP connection has been closed by the server due to the expiration of the connection persistence timeout, it negotiates a resumed SSL handshake because it reuses the underlying SSL connection.

Figure 3. Tomcat secure persistent connection pattern

We have configured Tomcat setting the maximum number of HttpProcessors to 100 and the connection persistence timeout to 10 seconds.

5.2 Auction site benchmark (RUBiS)

The experimental environment also includes a deployment of the RUBiS (Rice University Bidding System) [1] benchmark servlets version 1.4 on Tomcat. RUBiS implements the core functionality of an auction site: selling, browsing and bidding. RUBiS defines 27 interactions. Among the most important ones are browsing items by category or region, bidding, buying or selling items and leaving comments on other users. 5 of the 27 interactions are implemented using static HTML pages. The remaining 22 interactions require data to be generated dynamically. RUBiS supplies implementations using several mechanisms for generating dynamic web content, such as PHP, Servlets and several kinds of EJB.

The client workload for the experiments was generated using Httperf [27], a workload generator and web performance measurement tool.
This tool, which supports both the HTTP and HTTPS protocols, allows the creation of a continuous flow of HTTP/S requests issued from one or more client machines and processed by one server machine: the SUT (System Under Test). The configuration parameters of the benchmarking tool used for the experiments presented in this paper were set to create a realistic workload, with non-uniform reply sizes, and to sustain a continuous load on the server. One of the parameters of the tool represents the number of concurrent clients interacting with the server. Each emulated client opens a session with the server. The session remains alive for a period of time, called the session time, at the end of which the connection is closed. Each session is a persistent HTTP/S connection with the server. Using this connection, the client repeatedly makes a request (the client can also pipeline some requests), parses the server response to the request, and follows a link embedded in the response. The workload distribution generated by Httperf was extracted from the RUBiS client emulator, which uses a Markov model to determine which link from the response to follow next. Each emulated client waits for an amount of time, called the think time, before initiating the next interaction. This emulates the "thinking" period of a real client, who takes some time before clicking on the next request. The think time is generated from a negative exponential distribution with a mean of 7 seconds. Httperf also allows configuring a client timeout. If this timeout elapses and no reply has been received from the server, the current persistent connection with the server is discarded and a new emulated client is initiated. We have configured Httperf setting the client timeout value to 10 seconds.
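The negative exponential think time used by the workload generator can be reproduced with inverse-transform sampling (a sketch; Httperf's own generator may differ in detail):

```java
import java.util.Random;

// Samples client think times from a negative exponential distribution,
// as used by the Httperf-driven workload (mean = 7 seconds in the paper).
public class ThinkTime {
    // Inverse-transform sampling: -mean * ln(U), with U uniform in (0,1].
    public static double sample(Random rng, double meanSeconds) {
        double u = 1.0 - rng.nextDouble(); // shift to (0,1] to avoid ln(0)
        return -meanSeconds * Math.log(u);
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed for reproducibility
        double sum = 0.0;
        int n = 100_000;
        for (int i = 0; i < n; i++) sum += sample(rng, 7.0);
        System.out.printf("empirical mean: %.2f s%n", sum / n);
    }
}
```

With enough samples the empirical mean converges to the configured 7 seconds; the heavy right tail of the exponential also means occasional very long pauses, which keeps the emulated load bursty rather than uniform.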
RUBiS defines two workload mixes: a browsing mix made up of only read-only interactions and a bidding mix that includes 15% read-write interactions.

5.3 Performance analysis framework

In order to determine the causes of the server saturation, we propose to analyze the application server behavior using a performance analysis framework. This framework, which consists of an instrumentation tool called the Java Instrumentation Suite (JIS [10]) and a visualization and analysis tool called Paraver [30], considers all levels involved in the application server execution (operating system, JVM, application server and application), allowing a fine-grained analysis of dynamic web applications. For example, the framework can provide detailed information about thread status, system calls (I/O, sockets, memory & thread management, etc.), monitors, services, connections, etc. Further information about the implementation of the performance analysis framework and its use for the analysis of dynamic web applications can be found in [10] and [17].

5.4 Hardware & software platform

Tomcat runs on a 4-way Intel XEON 1.4 GHz with 2 GB RAM. We use MySQL v4.0.18 [29] as our database server with the MM.MySQL v3.0.8 JDBC driver. MySQL runs on a 2-way Intel XEON 2.4 GHz with 2 GB RAM. We also have a 2-way Intel XEON 2.4 GHz machine with 2 GB RAM running the workload generator (Httperf 0.8). Each client emulation machine emulates the configured number of clients performing requests to the server during 10 minutes using the browsing mix (read-only interactions). All the machines run the 2.6.2 Linux kernel. The server machine is connected to the client machine through a 1 Gbps Ethernet interface. The database and server machines are directly connected through a 100 Mbps Fast Ethernet crossed link.
For our experiments we use the Sun JVM 1.4.2 for Linux, using the server JVM instead of the client JVM and setting both the initial and the maximum Java heap size to 1024 MB, which we have proven to be enough to prevent memory from being a bottleneck for performance.

All the tests are performed with the common RSA-3DES-SHA cipher suite. The handshake is performed with a 1024-bit RSA key. The record protocol uses Triple DES to encrypt all application data. Finally, the SHA digest algorithm provides the Message Authentication Code (MAC).

6. Evaluation

In this section we present the scalability characterization of the Tomcat application server when running the RUBiS benchmark using SSL. The evaluation is divided into two parts. First, we evaluate the vertical scalability of the server when running with different numbers of processors, determining the impact of adding more processors on server saturation (can the server support more clients before saturating?). Second, we perform a detailed analysis of the server behavior using a performance analysis framework, in order to determine the causes of the server saturation when running with different numbers of processors.

6.1 Tomcat vertical scalability

Figure 4 shows the Tomcat scalability when running with different numbers of processors, representing the server throughput as a function of the number of clients. Notice that for a given number of processors, the server throughput increases linearly with respect to the input load (the server scales) until a certain number of clients hit the server. At this point, the throughput reaches its maximum value. Table 1 shows the number of clients that saturates the server and the maximum throughput achieved before saturating when running with one, two and four processors.
Notice that running with more processors allows the server to handle more clients before saturating, so the maximum achieved throughput is higher. Notice also that, as shown in Figure 1, the same throughput can be achieved with a single processor when SSL is not used. This means that when using secure connections, the computing capacity provided by the additional processors is spent on supporting the SSL protocol.

Table 1. Number of clients that saturate the server and maximum achieved throughput before saturating

number of processors   number of clients   throughput (replies/s)
1                      250                 90
2                      500                 172
4                      950                 279

When the number of clients that saturates the server has been reached, the server throughput degrades to approximately 30% of the maximum achievable throughput, as shown in Table 2. This table shows the average throughput obtained when the server is saturated when running with one, two and four processors. Notice that, although the throughput obtained has degraded in all cases when the server reaches a saturated state, running with more processors still improves the throughput (when the number of processors is doubled, the throughput almost doubles too).

Table 2. Average server throughput when saturated

number of processors   throughput (replies/s)
1                      25
2                      50
4                      90

6.2 Tomcat scalability analysis

In order to perform a detailed analysis of the server, we have selected four different loads: 200, 400, 800 and 1400 clients, each one corresponding to one of the zones observed in Figure 4. These zones group the loads for which the server behaves similarly.
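The roughly-30% degradation figure can be checked directly against Tables 1 and 2 with a simple recomputation of the reported values:

```java
// Recomputes the saturation degradation from the values reported in
// Tables 1 and 2: saturated throughput / peak throughput per CPU count.
public class Degradation {
    public static double ratio(double saturated, double peak) {
        return saturated / peak;
    }

    public static void main(String[] args) {
        double[] peak      = { 90, 172, 279 }; // Table 1 (1, 2, 4 CPUs)
        double[] saturated = { 25,  50,  90 }; // Table 2
        for (int i = 0; i < peak.length; i++) {
            System.out.printf("%d%%%n",
                Math.round(100 * ratio(saturated[i], peak[i])));
        }
        // prints 28%, 29%, 32%
    }
}
```

The three configurations degrade to 28%, 29% and 32% of their respective peaks, consistent with the "approximately 30%" claim above.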
In order to conduct this analysis, we use the performance analysis framework described in Section 5.3.

Figure 4. Tomcat scalability with different number of processors

The analysis methodology consists of comparing the server behavior when it is saturated (400 clients when running with one processor, 800 clients when running with two processors and 1400 clients when running with four processors) with its behavior when it is not (200 clients when running with one processor, 400 clients when running with two processors and 800 clients when running with four processors). We calculate a series of metrics representing the server behavior, and determine which of them are affected when the number of clients increases. From these metrics, an in-depth analysis is performed looking for the causes of their dependence on the server load.

In order to detect the causes of server saturation we calculate, using the performance analysis framework, the average time spent by the server processing a persistent client connection, distinguishing the time devoted to each phase of the connection (the connection phases have been described in Section 5.1) when running with different numbers of processors. This information is displayed in Figure 5. As shown in this figure, running with more processors decreases the average time required to process a connection. Notice that when the server is saturated, the average time required to handle a connection increases considerably.

Figure 5. Average time spent by the server processing a persistent client connection
Going into detail on the connection phases, the time spent in the SSL handshake phase of the connection increases from 28 ms to 1389 ms when running with one processor, from 4 ms to 2003 ms when running with two processors and from 4 ms to 857 ms when running with four processors, becoming the phase in which the server spends the major part of the time when processing a connection.

To determine the causes of this large increase of the time spent in the SSL handshake phase of the connection, we calculate the percentage of connections that perform a resumed SSL handshake (reusing the SSL session ID) versus the percentage of connections that perform a full SSL handshake when running with different numbers of processors. This information is shown in Figure 6. Notice that when running with one processor and 200 clients, 97% of the SSL handshakes can reuse the SSL connection, but with 400 clients, only 27% can reuse it. The rest must negotiate a full SSL handshake, saturating the server because it cannot supply the computational demand of these full SSL handshakes. Remember the big difference between the computational demand of a resumed SSL handshake (2 ms) and a full SSL handshake (175 ms). The same situation occurs when running with two processors (the percentage of full SSL handshakes increases from 0.25% to 68%) and when running with four processors (from 0.2% to 63%).

We have determined that, for any number of processors, the server saturates when most of the incoming client connections must negotiate a full SSL handshake instead of resuming an existing SSL connection, requiring a computing capacity that the available processors are unable to supply. Nevertheless, why does this occur from a given number of clients onward? In other words, why do incoming connections negotiate a full SSL handshake instead of a resumed SSL handshake when a given number of clients is reached? Remember that we have configured the client with a timeout of 10 seconds.
This means that if no reply is received within this time (because the server is heavily loaded and unable to supply it), the client is discarded and a new one is initiated. Remember that the initiation of a new client requires the establishment of a new SSL connection, and therefore the negotiation of a full SSL handshake.

Therefore, if the server is loaded and cannot handle the incoming requests before the client timeouts expire, this provokes the arrival of a great number of new client connections that need to negotiate a full SSL handshake, causing the server performance degradation. This assertion is supported by the information in Figure 7. This figure shows the number of client timeouts that occurred when running with different numbers of processors. Notice that from a given number of clients onward, the number of client timeouts increases considerably, because the server is unable to respond to the clients before their timeouts expire. The comparison of this figure with Figure 4 reveals that this number of clients matches the saturation load of the server.

Figure 6. Incoming server connections classification depending on SSL handshake type performed

Figure 7. Client timeouts with different number of processors
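The saturation mechanism described in this section can be summarized with a small cost model (a sketch built from the paper's measured figures: 175 ms per full handshake and 2 ms per resumed one, as reported in Section 3):

```java
// Average CPU cost of one SSL handshake as a function of the fraction
// of connections that must negotiate a full handshake, using the
// per-handshake costs measured in the paper (Section 3).
public class HandshakeCost {
    static final double FULL_MS = 175.0, RESUMED_MS = 2.0;

    public static double averageMs(double fullFraction) {
        return fullFraction * FULL_MS + (1.0 - fullFraction) * RESUMED_MS;
    }

    public static void main(String[] args) {
        // One processor: 3% full handshakes at 200 clients,
        // 73% full handshakes at 400 clients (Figure 6).
        System.out.printf("200 clients: %.2f ms%n", averageMs(0.03));
        System.out.printf("400 clients: %.2f ms%n", averageMs(0.73));
    }
}
```

With the Figure 6 percentages, the average handshake cost jumps from about 7 ms to about 128 ms per connection, an increase of roughly 18x that the available processors cannot absorb. This illustrates the feedback loop described above: timeouts create new clients, new clients force full handshakes, and full handshakes make further timeouts more likely.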