Sudakov Form Factors with an Effective Theory of Particle Worldlines
simufact.forming 10.0

Overview
- Improved preprocessing: new function menus, more flexible definitions
- Improved postprocessing
- Improved parallel computing
- Improved die stress analysis
- Improved sheet metal forming module
- Improved kinematics module: ring rolling machine motion, cogging (open-die forging) machine motion
- Improved meshing
- GP-GUI interface
- Help: improved worked examples, new user help manual

General remarks
- simufact.forming 10.0 is based on the MSC solvers Marc 2010 and Dytran 2010
- The Windows interface supports more functions
- The GP-GUI is based on Mentat 2010
- CAD import is based on CADfix 8.0 Service Pack 1 and supports the latest CAD software versions
- Completely new graphical interface
- New installation and file structure
- Supported platforms: Windows 32-bit and 64-bit (XP, Windows 7), Linux 64-bit (no Forming GUI)

Installation
- All settings are stored in a single INI file: ~USER/AppData\Roaming\Simufact\simufact.forming_10.0.ini
  - This allows the installation to be copied to another computer
  - Users can share the same settings
  - The INI file is written the first time the GUI is started
- The INI file can be copied directly for other users
- The whole installation can be copied or mirrored to other computers without a new setup

Default unit system
- The default unit system after installation is SI-mm; all examples are based on this system
- The unit system can be reset later

CADFix interface
- STL import added for CAD preview/import, with CAD repair functionality based on CADFix
- The latest version, CADFix 8.0 Service Pack 1, is implemented
- Intelligent model quality diagnostics and repair, automatic defeaturing, automatic repair, assembly support
- Native interfaces: CATIA V5/R20, CATIA V4 (4.1.x, 4.2.x), Pro/Engineer Wildfire 4, SolidWorks 2009, Unigraphics NX7, Inventor 2010
- Neutral interfaces: ACIS R20, IGES 5.3, STEP AP203 & AP214, Parasolid V22, VDAFS 2.0, STL

Improved view capabilities
- Improved predefined view concept: three different view and zoom settings can be stored
  1. Globally, for all GUIs opened by the user
  2. For the current project
  3. For each process
  This allows the view and zoom factor to be defined more flexibly, for easier postprocessing and for process variations.
- New global setting for the edge angle used in the outline view (for example, 5 deg versus 30 deg)
- New option to control the graphics cache, to support huge models and reduce out-of-memory problems
  - OFF: results are not held in memory; ON: all results are held in memory
  - Current: memory currently in use; Clear: clears all result information from memory
  - Clear cache between animation steps: results are removed from memory after each animation step
- New option to activate filters in the process and inventory windows, so the user can easily show or hide objects and processes and concentrate on the task at hand, e.g. hide all used geometries
- The number of entries in the "open project list" can be defined (default: 4)
- New cutting function with an option to decide which parts should be cut:
  1. All components (default)
  2. All except the workpiece
  3. Only the workpiece

Improved preprocessing capabilities
- A comment can be stored for the project
- Improved support for default settings per process type; settings are stored in ~InstallDir/sfForming/setting/processtype.ini. The following parameters can be predefined by the user:
  - Default solver, dimension, and cold/hot setting for the process
  - Predefined tool names, depending on solver and hot/cold
  - Number of tools used for the process, depending on solver and hot/cold
  - Element size, depending on solver
  - Number of output results
  - New: convergence control for FE
  - Control parameter for forming control (FE/FV)
  - Mesher type
  - Ambient temperature
- Names for most objects are now based on their type, including the process type and solver information (FE/FV); the temperature of the die and workpiece is added to the name by default
- New icon to open the model view window
- New interface to Thermoprof from ABP Induction: import of the temperature field from an inductive heating calculation in Thermoprof
- New interface to the ProCast casting simulation software: import of geometry and blow hole distribution
- The basic geometry body "cylinder" is created symmetric to its axis
- Geometry can be enlarged in one direction to close gaps: small gaps that are not useful for the simulation can be closed easily with the enlarge option, substituting the geometry with the enlarged version (the enlarged file name is *-E)
- Redesigned positioner menu: the user can choose directly between the standard positioner (with gravity) and translation only; the menu is also available via right mouse click
- New function to rotate a part based on the coordinate system (right mouse click on the part): 1. relative to the current position, 2. absolute
- New interface to JMatPro: JMatPro provides material data based on chemical composition for a wide range of materials, allowing more material-sensitive simulations; much of the information (TTT diagrams etc.) is also available for later use; JMatPro will offer an export function to simufact at the end of the year

Presses
- Units for press definitions are taken from the predefined settings (default: mm and mm/sec)
- All presses available for the FV solver are also available for the FE solver with full functionality: screw press, counter-blow hammer, scotch yoke drive
- Hammer and screw presses are improved so that the elastic effect of the dies can be taken into account; they support FE and FV with all functionalities, counter-blow operation, an efficiency factor (constant, variable, or variable with clutch), and a new feature for the spring effect of the dies (advanced settings)
- Scotch yoke drive press: supported for FE and FV
- New: velocity table based on the diameter of a ring, giving full flexibility for ring rolling applications
- New: force-velocity controlled press: defined by a table of force versus velocity; the velocity direction is controlled in the forming menu; the stroke has to be defined as well; it can be used much like a hydraulic press
- New: radial press with table-driven velocity of the upper die (FE solver only): the rotation path is defined from a circle or rosette path, and it can be combined with other presses
- Presses can be mixed for the FE solver (e.g. crank press + table)

Dies, friction, and contact
- A friction or heat object can be moved onto the process; all unassigned objects then receive this friction/heat definition
- The die types are improved for use with the FE solver to make them more flexible. Available now are:
  - Die spring: the stiffness and/or force can be defined as a function of time or displacement
  - Die insert: redefined to be easier to use and fully flexible; its movement can be free, fixed, coupled with a press (table based), or coupled with a generic spring
  - Generic spring: can be defined in all three translation directions (global or local coordinate system) and in the rotation direction as a torsion spring; stiffness or force can be defined as a function of displacement or force; the generic spring should be used together with a die insert
- Contact table, new: initial stress-free projection (the contact is calculated at the beginning without a stress calculation); this is needed when there is initial penetration due to the discretization and the parts should be brought into contact, and it is often used when parts are glued together
- Particle tracking: most element variables can be selected for a path plot

Forming control
- The forming dialog has been rewritten based on the new concept, with short descriptions; the direction can be defined via arrows; the stroke or time (depending on the press type) can be set independently of the time in the table, to simulate only part of the process first
- New termination criteria for FE: maximum force for the press as a sum (all die forces of the press are accumulated) and/or maximum force for each body in one direction
- New features in the sub-stage dialog to support more complex processes easily
- A tool that is not needed for calculating the positioning of the workpiece in the first step can be deactivated, e.g. a blank holder that is a fixed tool has to be deactivated for the positioning of the workpiece
- Trimming during the forming process with additional trimming tools: you can trim after different strokes with different tools; if the tools are not defined with a press, they are used only for the trimming operation. This allows forming-trimming-forming in one run, for example forming until 60 mm, trimming with Cutter-1, then forming up to 70 mm
- A forward and backward motion can be simulated with the same press to cover a whole cycle; a movement of a predefined press can be added for the backward movement only, and the kinematics of the counter-punch can be deactivated for the forward forming part. This is helpful for cold forming and sheet forming applications, where deformations in the backward motion also have to be taken into account

Postprocessing and solvers
- A maximum thickness can be defined for postprocessing so that the legend is scaled automatically (it can be changed later)
- A maximum distance can be defined for the "Distance to Die" post variable (as for FV), acting as a threshold
- For 3D axisymmetric problems, radial and tangential results can be defined, so vectors and tensors are also available in a cylindrical coordinate system
- User-defined nodal and/or element variables can be output via subroutines, with individually defined names
- New step size control based on the maximum displacement; this can also be set as the solver default and makes simulations more robust, but generally requires more steps
- Improved solver support: multithreading (parallel solving) is supported by the multifrontal sparse solver, the CASI solver (very fast), and the Pardiso solver; new: MUMPS solver; some solvers have a new option to speed up the solution in the interfaces when using the DDM parallel option
- User subroutines are supported: users can select their own subroutines and build their own version; this requires a valid Fortran license (Intel Fortran Compiler 10.1, which in turn requires Microsoft Visual Studio 2005 Service Pack 1 and the Microsoft Platform SDK for Windows Server 2003 SP1)
- The parallel menu is redesigned and supports all parallel options:
  - Workpiece only: multiple domains with remeshing of the workpiece; only the workpiece can be a meshed body
  - Multiple bodies without remeshing: multiple domains for all meshed bodies, without remeshing
  - Multiple bodies with remeshing: each meshed body is assigned to one domain, and remeshing is possible
- Starting with simufact.forming 10.0, an XML report about the process can be generated; it includes all information about the process and the simulation parameters, the information is linked, and a preview is included in the GUI
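The XML process report is written by the software itself; purely as a hypothetical illustration of what a small, linked report of this kind might look like, the following Python sketch assembles one with the standard library. All element and attribute names here are invented for the example and are not simufact.forming's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical report structure -- element/attribute names are illustrative only,
# not the actual simufact.forming 10.0 XML schema.
report = ET.Element("process_report", {"version": "1.0"})
process = ET.SubElement(report, "process", {"name": "hot_forging", "solver": "FE"})
ET.SubElement(process, "press", {"type": "crank", "stroke_mm": "70"})
ET.SubElement(process, "workpiece", {"material": "C45", "temperature_C": "1200"})

# Cross-link simulation parameters to results, mirroring the "linked information" idea.
params = ET.SubElement(report, "simulation_parameters", {"id": "params-1"})
ET.SubElement(params, "element_size_mm").text = "2.0"
results = ET.SubElement(report, "results", {"parameters_ref": "params-1"})
ET.SubElement(results, "max_forming_force_kN").text = "950"

ET.ElementTree(report).write("process_report.xml", encoding="utf-8", xml_declaration=True)
```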
2019 TOEFL Writing: TPO42 Integrated Writing Reading Passage

TPO42 Integrated Writing passage text

Glass is a favored building material for modern architecture, yet it is also very dangerous for wild birds. Because they often cannot distinguish between glass and open air, millions of birds are harmed every year when they try to fly through glass windows. There are, however, several solutions that responsible businesses can use to prevent injuries to birds.

One-Way Glass
One solution is to replace the regular, clear glass with one-way glass that is transparent in only one direction. The occupants of the building can see out, but birds and others cannot see in. If birds cannot see through a window, they will understand that the glass forms a solid barrier and will not try to fly through it.

Colorful Designs
A second solution is to paint colorful lines or other designs on regular window glass. For example, a window could have a design of thin stripes painted over the glass. People would still be able to see through the openings in the design where there is no paint, while birds would see the stripes and thus avoid trying to fly through the glass. Architects can be encouraged to include colorful painted patterns on glass as part of the general design of buildings.

Magnetic Field
The third solution is to create an artificial magnetic field to guide birds away from buildings. Humans use an instrument called a magnetic compass to determine directions—either north, south, east, or west. Bird research has shown that birds have a natural ability to sense Earth's magnetic fields; this ability works just like a compass, and it helps birds navigate in the right direction when they fly. A building in a bird flight path can be equipped with powerful electromagnets that emit magnetic signals that steer birds in a direction away from the building.

TPO42 Integrated Writing Directions
You have 20 minutes to plan and write your response. Your response will be judged on the basis of the quality of your writing and on how well your response presents the points in the lecture and their relationship to the reading passage. Typically, an effective response will be 150 to 225 words.

TPO42 Integrated Writing Essay Topic
Summarize the points made in the lecture, being sure to explain how they cast doubt on the specific solutions proposed in the reading passage. You must finish your answer in 20 minutes.
MICRO SWITCH Miniature Toggle Switches
MT Series

DESCRIPTION
Honeywell MICRO SWITCH MT Series miniature toggle switches are designed to meet the need for a rugged, cost-effective toggle switch. Quality construction features include a seal between the toggle lever and bushing, and between the cover and case. These switches are designed for use in many demanding outdoor environments, where the panels are subjected to vibration from equipment, temperature extremes, dust, splashing water, and/or hose-directed water.
They are capable of withstanding exposure to the heavy accumulations of early-morning dew that may condense on the control panel in cabs of vehicles left outdoors overnight. The MT toggle switches with metal or plastic levers are well suited for gloved-hand operation.
The panel stand-off with O-ring, available on some listings, eliminates the need for behind-the-panel hardware, provides a uniform panel height, and forms a panel-to-cover seal.

VALUE TO CUSTOMERS
• Spring-loaded mechanism provides enhanced tactile feedback for the toggle switch lever
• High sealing level and long electrical life enable more uptime in field installations, helping to keep machines working longer with less downtime
• Small size allows a smaller control box/panel, reducing weight and operator fatigue

FEATURES
• Miniature behind-panel size (case) enables a smaller overall control box or panel
• IEC 60529-2001, IP67, IP68 (except terminal parts) sealing for harsh-duty applications
• Electrical life of up to 60,000 cycles
• Available with 8 circuitry options
• 2- or 3-position maintained and momentary action to meet circuit and actuator requirements
• UL recognized, CE certified for global use

POTENTIAL APPLICATIONS
Remote control boxes for:
• Concrete pumping
• Cranes
• Material handling
• Boom trucks
• Any application with small panel constraints

DIFFERENTIATION
• 60,000-cycle electrical life offers an enhanced application life, keeping maintenance, replacement, and refurbishment to a minimum
• High seal rating (IP68) protects the toggle from water ingress and supports more equipment uptime

PORTFOLIO
Honeywell offers an extensive line of MICRO SWITCH toggle switches, including the TL, NT, TW, TS, AT, and ET Series.

Note from the switch position listing: positions marked * are momentary; all others are maintained.

Figure 1. Dimensions, mm [in]: 1-pole and 2-pole switch outlines.
Figure 2. Panel cutout.

WARNING - PERSONAL INJURY
DO NOT USE these products as safety or emergency stop devices or in any other application where failure of the product could result in personal injury. Failure to comply with these instructions could result in death or serious injury.

WARNING - MISUSE OF DOCUMENTATION
• The information presented in this product sheet is for reference only. Do not use this document as a product installation guide.
• Complete installation, operation, and maintenance information is provided in the instructions supplied with each product.
Failure to comply with these instructions could result in death or serious injury.

Find out more
Honeywell serves its customers through a worldwide network of sales offices, representatives, and distributors. For application assistance, current specifications, pricing, or the name of the nearest Authorized Distributor, contact your local sales office. To learn more about Honeywell's sensing and switching products, call +1-815-235-6847 or 1-800-537-6945, visit the Honeywell web site, or e-mail inquiries to *********************

ADDITIONAL MATERIALS
The following associated literature is available on the Honeywell web site:
• Product installation instructions
• Product range guide
• Product application-specific information

Honeywell Sensing and Internet of Things
9680 Old Bailes Road
Fort Mill, SC 29707

Warranty/Remedy
Honeywell warrants goods of its manufacture as being free of defective materials and faulty workmanship during the applicable warranty period. Honeywell's standard product warranty applies unless agreed to otherwise by Honeywell in writing; please refer to your order acknowledgement or consult your local sales office for specific warranty details. If warranted goods are returned to Honeywell during the period of coverage, Honeywell will repair or replace, at its option, without charge those items that Honeywell, in its sole discretion, finds defective. The foregoing is buyer's sole remedy and is in lieu of all other warranties, expressed or implied, including those of merchantability and fitness for a particular purpose. In no event shall Honeywell be liable for consequential, special, or indirect damages.
While Honeywell may provide application assistance personally, through our literature and the Honeywell web site, it is the buyer's sole responsibility to determine the suitability of the product in the application.
Specifications may change without notice. The information we supply is believed to be accurate and reliable as of this writing. However, Honeywell assumes no responsibility for its use.

005437-1-EN IL50 GLO, December 2016. © 2016 Honeywell International Inc. All rights reserved.
Grade 8 English Reading Comprehension on Frontier Technology (25 questions)

Passage 1 (background article)

Artificial intelligence (AI) has been making remarkable strides in the medical field in recent years. AI-powered systems are being increasingly utilized in various aspects of healthcare, bringing about significant improvements and new possibilities.

One of the most prominent applications of AI in medicine is in disease diagnosis. AI algorithms can analyze vast amounts of medical data, such as patient symptoms, medical histories, and test results. For example, deep-learning algorithms can scan X-rays, CT scans, and MRIs to detect early signs of diseases like cancer, pneumonia, or heart disease. These algorithms can often spot minute details that might be overlooked by human doctors, thus enabling earlier and more accurate diagnoses.

In the realm of drug development, AI also plays a crucial role. It can accelerate the process by predicting how different molecules will interact with the human body. AI-based models can sift through thousands of potential drug candidates in a short time, identifying those with the highest probability of success. This not only saves time but also reduces the cost associated with traditional trial-and-error methods in drug research.

Medical robots are another area where AI is making an impact. Surgical robots, for instance, can be guided by AI systems to perform complex surgeries with greater precision. These robots can filter out the natural tremors of a surgeon's hand, allowing for more delicate and accurate incisions. Additionally, there are robots designed to assist in patient care, such as those that can help patients with limited mobility to move around or perform simple tasks.

However, the application of AI in medicine also faces some challenges. Issues like data privacy, algorithmic bias, and the need for regulatory approval are important considerations. But overall, the potential of AI to transform the medical field is vast and holds great promise for the future of healthcare.

1. What is one of the main applications of AI in the medical field according to the article?
A. Designing hospital buildings.
B. Disease diagnosis.
C. Training medical students.
D. Managing hospital finances.
Answer: B.
EMG AmplitudeEstimation ToolboxFor Use with MATLAB®(Version 5 for the PC)User's Guide Edward (Ted) A. ClancyAlpha Version 0.02a Worcester Polytechnic InstituteMay, 1999Copyright © 1999–2000, Edward A. Clancy.CreditsMATLAB is a registered trademark of:The MathWorks, Inc.24 Prime Park WayNatick, MA 01760Tel: 508-647-7000Fax: 508-647-7001E-mail: info@WWW: For more information, to report bugs (in detail, hopefully!), or to provide comments/constructive critique, please contact:Edward (Ted) A. ClancyDepartment of Electrical and Computer EngineeringWPI100 Institute RoadWorcester, MA 01609Phone: (508) 831-5778Fax: (508) 831-5491Email: ted@Various people have contributed to the analytic development and the analysis techniques in this Toolbox, as well as to the actual software development. Many thanks to these people, who include: Stéphane Bouchard, Kristin Farry, Neville Hogan, Denis Rancourt and Yves St-Amant.Forward to Version 0.02Welcome to the EMG Amplitude Estimation Toolbox! And thank you for helping me develop these tools. My eventual goal with this toolbox is to have a simple, efficient tool for implementing existing EMG processing algorithms and for developing new EMG processing algorithms. A frequent limitation in developing tools which are signal processing intensive is the time and effort that goes into software development and documentation. Hopefully, this toolbox will begin to allow all of us to share this effort, thereby minimizing the time that each of us must spend attending to the many details.This "alpha" version of the toolbox is likely to be incomplete in certain respects. I expect that more explanation, description and examples are needed in several areas. Also, I implore all users to independently test all modules that they use. While I have done my best to test everything in this toolbox, (1) my testing this time around has been a bit rushed---I found a few bugs during final testing (inevitably a few more must remain)--and (2) my philosophy is that any general-purpose software requires independent evaluation and testing. Different users will interact with the toolbox functions indifferent ways, thereby challenging the software in distinct ways. The most complete testing, and therefore the most robust software, is software which has been used (successfully) by many different users---such as you.I fully expect, and hope, that all of the user's of this version of the software will do so in collaboration with myself. I want the opportunity to fill in any gaps that remain in the documentation and make the toolbox most beneficial to each user. If you will agree to keep me involved in your use of the toolbox, I will do my best to improve the toolbox for your EMG needs. In the end, I want the toolbox to be a useful tool. For your part, I hope you will exercise some patience with any limitations in this initial version. As mentioned above, I hope you consider yourself as a co-developer. I also hope you will provide mature, written feedback (constructive criticism) on the toolbox. When doing so, please keep in mind that I would like to solve your problem in the context of a general tool which I hope to be useful for everyone.I hope that by working together, we can all improve our EMG processing work and do so in an efficient manner. Thank you for your help.Comments on Upward CompatibilityBecause EMG processing is an evolving field and this is a first "alpha" version of the toolbox, upward compatibility is not being assured. 
The plans at this time are that the default processing methods will be updated in future versions to reflect the best state of the art available. Thus, certain amounts of upward compatibility may be better served if explicit selection of all options is made. If upward compatibility is important, an alternative is to install the toolbox using a modified toolbox name. In particular, the installation instructions describe installing the toolbox using the name "emg". Forupward compatibility, the toolbox can be installed using the name "emg0_02". Thereafter, future versions could be installed using appropriate changes in the toolbox name. The desired toolbox can then be used by specifying the appropriate toolbox path to MATLAB. As the toolbox is used and feedback is received, the intent is to provide for upward compatibility at some point starting in the future.Comments on RAM UsageThe general scheme of MATLAB is to process an entire signal at one time. In addition, processing of the signal frequently requires replication of the signal while performing various filter operations. (E.g., most filtering operations are not performed "in place", thus the input vector and the output vector must exist simultaneously in RAM.) For signals of modest size, this scheme can require large amounts of RAM. There do exist certain programming styles which help to reduce RAM usage. Some effort has been made to reduce RAM requirements when it was easy to do so, however many operations were not optimized (i.e., minimized) for RAM usage. Some additional attention may be given to minimizing RAM usage in future versions of the toolbox. However, given the trend for cheaper and cheaper RAM modules in PCs (and other computers), efforts to minimize RAM requirements may prove unnecessary. For the near term, however, users should be aware that some EMG processing algorithms may be RAM intensive. Comments on DependenciesSome of the EMG functions are dependent upon other MATLAB toolboxes and/or MATLAB MEX files. For the toolboxes, the primary dependence is the Signal Processing Toolbox. Since most users of this EMG toolbox will rely heavily on signal processing algorithms for their research, this dependence will likely be permanent. Other dependencies (e.g. the routine e_uband relies on one function from the Statistics Toolbox) may eventually be removed. The dependency on MEX files is for implementation of limited, numerically intensive portions of certain algorithms. Thusfar, an alternative non-MEX file method had been made available. However, the non-MEX file methods will be quite slow. Thus, so long as MATLAB is running on a PC, the MEX files are recommended for use. If the toolbox is installed on other machines, it might be best to attempt to make MEX files for these machines. The MEX file source code (in the C programming language) is supplied with the toolbox.Table of ContentsFront PageCreditsForwardTable of ContentsInstalling and Uninstalling the EMG Amplitude Estimation Toolbox Introduction to EMG Processing and the EMG Amplitude Estimation Toolbox A Review of Methods for Estimating the EMG AmplitudeEMG Processing with Basic Functions of the EMG Amplitude Estimation ToolboxEMG Processing with Global Functions of the EMG Amplitude Estimation ToolboxEMG Amplitude Estimation Toolbox ReferenceInstalling and Uninstalling the EMG Amplitude Estimation Toolbox NOTE: These instructions are for a PC. No instructions yet exist for other machines (portions of the Toolbox will not function on other machines). 
InstallingAt present, there is no automated method for installing the Toolbox. Thus, the Toolbox must be installed using the steps and instructions which follow. The Toolbox exists onthe distribution media (e.g., floppy disk, CD ROM, Web archive, etc.) within the directory 'emgxxxxx', where xxxxx denotes the Toolbox version. For example, Toolbox version 1.01 would be contained within the directory named 'emg1_01'. Denote this directory as '$emg$'.Denote the directory on the PC in which MATLAB is installed as '$MATLAB$'. The default value of '$MATLAB$' is 'c:\MATLAB'.To install the toolbox, do the following:1. Uninstall Any Previous Versions: If any previous Toolbox versions are installed,they should first be uninstalled.2. Install M-Files:1. Copy the directory '$emg$\m_files' on the distribution media to directory'$MATLAB$\toolbox' on the PC.2. Rename directory '$MATLAB$\toolbox\m_files' on the PC to'$MATLAB$\toolbox\emg'.3. Add M-Files to Default Path:1. If file '$MATLAB$\bin\startup.m' does not exist, create it as an empty file.(*** For MATLAB 5.3, replace '$MATLAB$\bin' with'$MATLAB$\toolbox\local' for all occurrences on this page.***)2. Edit file '$MATLAB$\bin\startup.m', adding the line'$MATLAB$\toolbox\emg'.4. Install Help Files:1. Copy the directory '$emg$\html' on the distribution media to directory'$MATLAB$\help\toolbox' on the PC.2. Rename directory '$MATLAB$\help\toolbox\html' on the PC to'$MATLAB$\help\toolbox\emg'.UninstallingAgain, there is no automated method at present for uninstalling the Toolbox. The manual steps for uninstalling the Toolbox are:1. Delete Help Files: Delete the directory '$MATLAB$\help\toolbox\emg'.2. Remove M-Files from Default Path: Delete the line '$MATLAB$\toolbox\emg'from the file '$MATLAB$\bin\startup.m'.3. Delete M-Files: Delete the directory '$MATLAB$\toolbox\emg'.Introduction to EMG Processing and the EMG Amplitude Estimation Toolbox The EMG Amplitude Estimation Toolbox is intended to be a convenient tool for implementing a class of EMG amplitude estimators in MATLAB. This class of EMG processors has six general, sequential stages shown in the figure below. The six stagesare (1) noise rejection/filtering, (2) whitening, (3) multiple-channel combination(including gain scaling), (4) demodulation, (5) smoothing and (6) relinearization. Noise rejection/filtering generally consists of high pass filtering to suppress motion artifact inthe raw EMG signal. High pass filtering also removes the offset in the EMG signalwhich arises from offsets in the recording electronics and A/D converter. Whitening increases the statistical bandwidth [Bendat and Piersol, 1971] of the raw EMG. By temporally uncorrelating the EMG signal, the detection algorithm can operate on each whitened sample separately. Multiple-channel combination is used to combine the information from several electrode recordings made over the same muscle.Demodulation rectifies the whitened EMG and then raises the result to a power. Once demodulated, the information content of the signal changes from the signal standard deviation (prior to demodulation) to the signal mean (raised to a power - after demodulation). Smoothing filters the signal, increasing the signal-to-noise-ratio (albeit atthe expense of adding bias error to the estimate). 
Finally, relinearization inverts theeffect of the power law applied during the demodulation stage, returning the signal tounits of EMG amplitude.Users who are developing EMG processing algorithms or who wish to have detailed control over the EMG processing steps can do so by using the basic functions supplied in the EMG Amplitude Estimation Toolbox. These functions are described in the section titled "EMG Processing with Basic Functions of the EMG Amplitude Estimation Toolbox". Most user's, however, can control all of the EMG processing functions via two commands described in the section titled "EMG Processing with Global Functions of the EMG Amplitude Estimation Toolbox". The first command, e_cal(), is used to select all processing options and to perform calibration for an EMG processor. The second command, e_amp(), then processes all EMG amplitude estimates for that processor. Note that users must be familiar with the basic functions of the toolbox prior to using the two global functions.Three sections follow this introduction. The first section is a general review of EMG amplitude estimation processing. This section is not specific to the Toolbox. Rather, it serves as a background to the problem of amplitude estimation. The following sections then describe the functions of the toolbox; first the basic functions, then the global functions.Other EMG FunctionsThe toolbox includes one other function of use to EMG processing. Often it is useful to be able to generate simulated EMGs. One method for doing so is to generate a randomprocess, temporally and spatially filter the process, then modulate the output by thesimulated EMG amplitude. Toolbox function e_fsim is intended to provide a simple mechanism for performing this genre of simulation.The toolbox also includes some sample data for EMG processing. These data are described in the reference section titled e_data.ReferencesBendat, J. S. and Piersol, A. G. Random Data: Analysis and Measurement Procedures. New York: Wiley, 1971.A Review of Methods for Estimating theEMG AmplitudeThe amplitude of the surface electromyogram (EMG) is frequently used as the controlinput to myoelectric prostheses, as a measure of muscular effort, and has also been investigated as an indicator of muscle force. This paper will review the methods whichare used to estimate the EMG amplitude from recordings of the EMG. (Note that this review does not include the related area of EMG-to-force processing.) Historically,Inman et al. [1952] are credited with the first continuous EMG amplitude estimator.They implemented a full-wave rectifier followed by a simple resistor-capacitor (RC) low pass filter. Early investigators studied the type of non-linear detector which should be applied to the waveform. This work led to the routine use of analog rectify and smooth (low pass filter) processing and root-mean-square (RMS) processing of the EMG waveform to form an amplitude estimate. Ensuing investigation has shown the promiseof whitening individual EMG waveform channels, combining multiple waveformchannels into a single EMG amplitude estimate and adaptively tuning the smoothingwindow length. None of these techniques have been routinely incorporated into EMG amplitude estimators.Emerging from this work is a standard cascade of six sequential processing stages whichcan be used to form most any EMG processor. 
The six stages are (1) noiserejection/filtering, (2) whitening, (3) multiple-channel combination (including gain scaling), (4) demodulation, (5) smoothing and (6) relinearization. Noiserejection/filtering generally consists of high pass filtering to suppress motion artifact inthe raw EMG signal. High pass filtering also removes the offset in the EMG signalwhich arises from offsets in the recording electronics and A/D converter. Whitening increases the statistical bandwidth [Bendat and Piersol, 1971] of the raw EMG. By temporally uncorrelating the EMG signal, the whitening filter orthogonalizes the data samples, allowing the detection algorithm to operate on each whitened sample independently. Multiple-channel combination is used to combine the information from several electrode recordings made over the same muscle. Demodulation rectifies the whitened EMG and then raises the result to a power. Once demodulated, the information content of the signal changes from the signal standard deviation (prior to demodulation)to the signal mean (raised to a power - after demodulation). Smoothing filters the signal, increasing the signal-to-noise-ratio (albeit at the expense of adding bias error to the estimate). Finally, relinearization inverts the effect of the power law applied during the demodulation stage, returning the signal to units of EMG amplitude.Noise Rejection/FilteringHigh pass filtering serves to remove artifacts from the EMG, either during the recordingof the signal or during post-processing. High pass filtering rejects at least three established sources of artifact: offsets due to the recording apparatus, motion artifacts and electrocardiographic (ECG) artifacts. The recording apparatus - typically electrode-amplifiers, signal conditioning (hardware high and low pass filters, and individual channel gains), and analog-to-digital conversion - adds offsets to the normally zero-mean EMG. Thus, the signal mean is generally removed; a high pass filtering operation, albeit at a very low cut-off frequency (0 Hz).High pass filtering is also used to remove motion artifacts from the EMG. The frequency content of changes in joint angle and force fall below 5-10 Hz, while nearly all of the frequency content of the EMG is well above this range. Hence, high pass filters can be used to eliminate the frequencies associated with motion, with limited loss to EMG signal content. In general, initial high pass filtering is accomplished in hardware, either in the electrode-amplifiers, the signal conditioners, or both. A frequent strategy is for these filters to be of low order (2-4), but sufficient to prevent motion artifacts from causing the signal to saturate the recording apparatus. The lower order filters will roll-off slowly. Typical filter cut-off frequencies are between 10 and 30 Hz. With certain highmotion/vibration applications, higher order (6-10) analog filters are necessary. Thereafter, higher order high pass filtering can be performed during post-processing. For motion artifact rejection, high order high pass filters (e.g., tenth order) at 15-20 Hz may be sufficient.Finally, recordings over abdominal, chest and neck muscles, in particular, are frequently corrupted by ECG artifact. Redfern et al. [1993] found empirically that high pass filtering such signals, with a cut-off frequency at 30 Hz, greatly reduced the ECG artifact. This technique is likely acceptable if the filtered signal is used for amplitude estimation. 
However, if the shape of the power spectral density of the signal is analyzed during fatiguing contractions, then a cut-off frequency at 30 Hz may interfere with effected frequencies. In such cases, the cut-off frequency may need to be no higher than 20 Hz, if possible.WhiteningSeveral investigators have found that applying a whitening filter prior to demodulation and smoothing improves the amplitude estimate. A whitening filter outputs a constant-valued, or "whitened," power spectrum in response to an EMG input. If EMG is modeled as a discretely sampled Gaussian process, whitening orthogonalizes the data samples, allowing the detection algorithm to operate on each output sample independently. In addition, Zhang et al. [1990] discussed the advantages of whitening based on a model of EMG as the superposition of simulated motor unit action potentials. Kaiser and Peterson [1974] found that the shape of the whitening filter should be a function of the contraction level. They suggested that measurement noise, which is a larger portion of the entire signal from weaker contractions than from stronger contractions, may be a major determinant of the whitening filter's shape. They designed an adaptive analog filter to achieve the whitening that they desired. Hogan and Mann [1980a], [1980b] found thatreducing the outer edge spacing of a pair of rectangular electrodes from 20 mm to 10 mm whitened the EMG. During a constant 25% maximum voluntary contraction (MVC) trial, this electrode spacing gave a 35% signal-to-noise-ratio (SNR) improvement. Harba and Lynn [1981] used autoregressive (AR) modeling of the EMG power spectrum to form a whitening filter. Using a sixth-order AR model, they found only small contraction-level dependent changes in the whitening filter's shape. Whitening approximately doubled the probability of correctly differentiating between one of four discrete contraction levels.D'Alessio [1984] and Filligoi and Mandarini [1984] discussed whitening in functional mathematical models of the EMG. ("Functional" models are based on the observed signal rather than the underlying physiological process.) D'Alessio et al. [1987] proposed updating an AR model of the EMG PSD on a sample-by-sample basis. They used a forgetting factor so that the resulting PSD estimate represented only the most recent EMG signal. An adaptive whitening filter is then formed from the dynamic AR coefficients. With this technique, any change in the PSD structure which can be represented by an AR model can be accommodated. They implemented this technique using a sequential algorithm with a forgetting factor.Clancy and Hogan [1994] used moving-average digital whitening filters for constant-force contractions. They found that fourth-order whitening filters, calibrated from a short segment (<=5 s) of data, improved the SNR by 63% for 10%, 25%, 50% and 75% MVC's. These whitening filters, however, performed poorly for contractions less than 10% MVC, where additive measurement noise dominated the output of the whitening filters. Like Kaiser and Peterson [1974], Clancy and Hogan [1997], [1991] implemented an adaptive whitening filter which maintained the SNR performance improvement for contractions above 10% MVC, but reverted to unwhitened processing for lower contraction levels. Their implementation was a simple first-order adaptive filter. 
In quasi-constant-force, constant-angle, nonfatiguing EMG-to-torque estimation about the elbow, this simple adaptive whitening filter provided an error reduction of about 10% from the unwhitened technique.This result and the large performance improvements in EMG amplitude processing demonstrated for contractions above 10% MVC suggested that more formal approaches to adaptive whitening were well worth pursuing. Clancy and Farry [unpublished] recently developed a more formal adaptive whitening technique by modeling additive background noise in the measured EMG signal. Their adaptive whitener consists of cascading a non-adaptive whitening filter, an adaptive Wiener filter, and an adaptive gain correction. These stages can be calibrated from two, five second duration, constant-angle, constant-force contractions, one at a reference level [e.g., 50% maximum voluntary contraction (MVC)] and one at 0% MVC. In experimental studies of this technique, subjects tracked a randomly-moving target (projected on a computer screen) with real-time EMG amplitude estimates. With a 0.25 Hz bandwidth target, either adaptive whitening or multiple-channel processing reduced the tracking error to roughly half that of the error achieved using force feedback. At a 1.00 Hz bandwidth, all of the EMG processors had errors equivalent to that found with force feedback, reflecting that errors in this task were dominated by subjects’ inability to track targets at thisbandwidth. Increases in the additive noise level, smoothing window length, and tracking bandwidth diminished the advantages of whitening.Compared to the adaptive whitening technique of Clancy and Farry [unpublished], the technique of D'Alessio et al. [1987] is much less restrictive. It could provide better whitening if, or to the extent that, the true PSD shape changes with the EMG amplitude (or with localized muscle fatigue). However, since each PSD estimate is based on only a short segment of the most recent data, each PSD estimate (and, therefore, the resulting whitening filter) has a high variance. Hence, to the extent that the EMG PSD is truly amplitude modulated, the whitening method of Clancy and Farry is more stable and repeatable. For example, with the method of D'Alessio et al., a different whitening filter can result from different instances of EMG data which share the same PSD.Multiple-Channel CombinationThe first reported use of multiple sites for EMG amplitude estimation appears to be thatof Hogan and Mann [1980a, 1980b]. They suggested that dispersing multiple electrodes about a single muscle would provide a broader, more complete, measure of the underlying electrophysiologic activity, since a single differential electrode obtains mostof its signal energy from a small portion of muscle adjacent to the electrode. They derived an optimal amplitude estimator assuming that separate EMG sites were spatially correlated but temporally uncorrelated. Using four electrodes, they achieved an SNR performance improvement of approximately 91% compared to the single site rectify and low-pass filter estimator of Inman et al. [1952]. The combination of multiple sites and whitening via electrode geometry yielded an SNR performance improvement of approximately 176% compared to the estimator of Inman et al. The SNR performance of their algorithm was relatively insensitive to force levels over the range of 5-25% MVC. Hogan and Mann implemented their algorithm off-line on a digital computer and on-line with analog circuitry. 
Murray and Rolph [1985] implemented this algorithm in real time on a digital microprocessor. Harba and Lynn [1981] used four electrode pairs to improve the quality of an EMG processor which tried to differentiate between four discrete contraction levels. They were able to improve the probability of correctly differentiating between contraction levels by 40-70% (compared to using one electrode). Thusneyapan and Zahalak [1989] reported a nine site EMG amplitude estimator. Clancy and Hogan [1995] combined the techniques of waveform whitening and multiple channel combination. For contractions ranging from 10-75% MVC, a four channel, temporally whitened processor improved the SNR 187% compared to the estimator of Inman et al. [1952]. Eight whitened combined channels provided an SNR improvement of 309% compared to the estimator of Inman et al. Calibration of the optimal processor was achieved with a single five second contraction trial at 50% MVC. Continuing work has consistently found multiple channel amplitude estimators to be superior to single channel amplitude estimators [Clancy, in press; Clancy and Farry, unpublished]. Finally, Clancy [1997] noted that when multiple channels of EMG are recorded, the risk of failed recording channels (e.g., shorted electrodes, pick-up of large amounts of unwanted noise,etc.) grows with the number of electrodes. Automated methods for locating and managing failed channels may need to be developed.Demodulation and RelinearizationTreating the EMG as a zero mean, amplitude modulated signal, Inman et al. [1952] suggested demodulation with a full-wave rectifier (the analog equivalent of the first-power - absolute value - demodulator). In an attempt to improve on this demodulator, Kreifeldt and Yao [1974] experimentally investigated the performance of six non-linear demodulators. A second-power demodulator was found to be best for contraction levels of 10%, 25% and 50% MVC. A fourth-power demodulator was found to be best at 5% MVC. These power law demodulators improved the SNR performance of the full wave rectifier by 5-20%, depending on the force level.Hogan and Mann [1980a,1980b] used a functional mathematical model of EMG - based on a model of EMG as a Gaussian random process - to analytically predict that a second-power demodulator would give the best maximum likelihood estimate of the EMG amplitude for constant-force, constant-posture, nonfatiguing contractions. Theoretically, the SNR with this model is: SNR = ([2/(pi-2)]·N)1/2, where N is the number of degrees of freedom in the amplitude estimate. Experimentally, they found no SNR performance difference between the RMS processor (i.e., a second-power demodulator) and a full wave rectifier (i.e., a first-power demodulator). Similarly, Clancy [1991] consistently found full wave rectification to be a small improvement (2-8%) over RMS detection. These experimental results are contrary to the predictions from Gaussian theory. Although a Gaussian density for surface EMG is frequently assumed, evidence in the literature has demonstrated mixed results. Roesler [1974] measured the positive peak amplitudes from constant-force, constant-angle, fatiguing contractions of the biceps, triceps and forearm muscles. The Gaussian density precisely described the experimental density for various contraction strengths. 
For one biceps evaluation, the probability of deviation from a Gaussian distribution was less than 10-3, using a Chi-square test.Milner-Brown and Stein [1975] reported that the distribution of the EMG signal from constant-force, constant-angle contraction of the first dorsal interosseus muscle was more sharply peaked near zero than a Gaussian distribution. Recordings at higher force levels tended to appear less peaked than those at lower force levels. Parker et al. [1977], using fine wire electrodes inserted into biceps muscles, graphically compared the EMG probability density to a Gaussian density during light and moderate contraction levels. They concluded that the EMG is reasonably modeled as a Gaussian random process. Hunter et al. [1987], using surface electrodes on the biceps muscles, graphically compared the EMG probability density to a Gaussian density. Isometric, constant-force, nonfatiguing contractions were conducted at 30% MVC. They found that the density function departed considerably from the Gaussian form, being more sharply peaked near zero. More recently, Bilodeau et al. [1997] examined constant-angle, nonfatiguing contractions of the biceps muscles. Constant-force (20%, 40%, 60%, 80% MVC) and slowly-force-varying contractions were studied. Using a Shapiro-Wilk test, they generally found that EMG signals present a non-Gaussian amplitude distribution, being。
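To make the six-stage amplitude-estimation cascade reviewed above concrete, here is a minimal single-channel sketch in Python. It is not the toolbox's MATLAB routines (e_cal, e_amp); the filter order, the 15 Hz high-pass cutoff, the 100 ms smoothing window, and the second-power detector are assumptions chosen for illustration, and the whitening and multiple-channel stages are omitted. The simulated input follows the modulated-noise idea behind e_fsim.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_amplitude(emg, fs, hp_cutoff_hz=15.0, smooth_ms=100.0):
    """Generic sketch of the six-stage EMG amplitude cascade (single channel, RMS detector).
    Stage 2 (whitening) and stage 3 (multiple-channel combination) are omitted for brevity."""
    # 1) Noise rejection: remove the mean and high-pass filter to suppress motion artifact.
    b, a = butter(4, hp_cutoff_hz / (fs / 2.0), btype="highpass")
    x = filtfilt(b, a, emg - np.mean(emg))
    # 4) Demodulation: square the signal (second-power detector).
    d = x ** 2
    # 5) Smoothing: moving-average window of smooth_ms milliseconds.
    n = max(int(fs * smooth_ms / 1000.0), 1)
    s = np.convolve(d, np.ones(n) / n, mode="same")
    # 6) Relinearization: invert the power law to return to units of EMG amplitude.
    return np.sqrt(s)

# Example on simulated EMG: white noise modulated by a slowly varying "true" amplitude.
fs = 2000.0
t = np.arange(0, 5, 1 / fs)
true_amp = 0.5 + 0.4 * np.sin(2 * np.pi * 0.5 * t)
emg = true_amp * np.random.randn(t.size)
est = emg_amplitude(emg, fs)
```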
A Tool for Test Case Generation and Evaluation

Ahmed M. Salem
Department of Computer Science, California State University, Sacramento
Sacramento, CA 95819

Abstract
Software quality has been a major concern since the inception of software. Testing has been regarded as a critical phase in the software lifecycle. To improve the software testing process and ensure high-quality software, new tools and techniques need to be developed. In this paper we propose and describe a software testing tool for generating and evaluating efficient test cases. The tool is built on the concepts of Design of Experiments (DOE) and logistic regression. The focus is to efficiently minimize the number of test cases and predict software failures.

Keywords: Software Testing Process; Logistic Regression; Design of Experiments (DOE)

1. Introduction
Software testing is the process of executing software to determine its correctness with respect to its specification. Failures are deviations from the specification and are determined by comparing observed vs. intended behavior of the system [1]. Testing is inherently an unbounded process, because it will never be known with total confidence whether the software under test contains no faults or will never fail. Thus any testing process is a form of sampling. The tester uses some knowledge of the system to select a subset of tests to execute. Too often, this selection process is ad hoc.

Testing is the primary tool for software quality assurance. It embodies not just the act of running a test, but designing tests, predicting test outcomes, establishing standards for tests, and corrective procedures for discovered errors. Developers are under great pressure to deliver more complex software on increasingly aggressive schedules and with limited resources. Testers are expected to verify the quality of such software in less time and with even fewer resources. In such an environment, dynamic testing tools are a must to improve the software testing process.

The aim of this paper is to describe a testing tool which utilizes both logistic regression modeling and Design of Experiments (DOE).

2. Background and Design Principles
It is becoming harder to improve software quality and reliability merely based on theoretical models and approaches. Efficient and intuitive testing tools are needed that consume fewer resources in terms of time and cost. Essentially, the motivation for this work is twofold. First, to improve the software testing process through dynamic test case generation (a minimum and effective set of test cases). Software testers can use the tool to define test models that have dozens of parameters, and the tool will generate an efficient set of test cases. This enables testers to define models with enough detail to accurately capture the semantics of the system under test. Second, to develop a predictive model that allows the software tester to predict the outcomes of test cases. This, in turn, enables testers to shorten testing time, forecast software release, and gain insight into the quality of the software under test.

The proposed tool consists of the following four modules:
• Input and Specification Module
• Test Generation Module (using DOE designs)
• Logistic Regression Module (using logistic regression)
• Analysis Module

2.1. Input and Specification Module
This module allows the software tester to specify the test parameters, formulate them, and pass them to the generation module. The test parameters will be coded by the values 1 and -1.
The tool, for now, is designed to work with two values for each test parameter.

2.2. Test Case Generation Module
This module consists of various DOE designs to generate efficient test cases. With these designs, the software tester can select the appropriate design based on the number of test parameters.

DOE is a general method for increasing the efficiency of an experiment by reducing the number of experimental runs required to achieve the desired results [2]. The use of techniques from experimental design to choose test cases is relatively new in software testing. In other related work, other DOE techniques have been used in software testing; the Taguchi method was used as described in Phadke [3], where the use of orthogonal arrays for planning test case generation was proposed.

Design of experiments is a very effective way to minimize the experimental runs and yet cover the input space efficiently. DOE is a practice that employs statistical tools and methods in scientific experimenting [4]. Using DOE, the designer can successfully study multiple components in a single experiment. Figure 1 shows a sample DOE input.

Figure 1 - DOE sample input

DOE designs are very powerful techniques, which are utilized to identify or screen important factors (test parameters) affecting a process. DOE is also used to develop empirical models that validate these concepts. In many cases, it is sufficient to consider the test parameters affecting the production process at two values. For example, a test parameter may be set at its maximum or minimum, or a screen field could be set as blank or not blank. The experimenter's objective is to determine whether any of these changes affect the results of a specific function or the outcome of a test case [5]. The most intuitive approach to studying those factors would be to vary the test parameters of interest in a full factorial design, that is, to try all possible combinations of settings. This would work fine, except that the number of necessary runs or test cases in the experiment (observations) increases exponentially [6]. For example, if the experiment contains only 10 parameters, we would need 2^10 = 1,024 runs (test cases) in the experiment. Because each run may be time-consuming and costly, it is often not feasible to require many different testing runs for the experiment. Under these conditions, fractional factorial designs are used to minimize the number of runs and to reduce cost. Fractional factorial designs of resolution III are used in the early stages of the experiments; the main purpose of this design is to reduce the initial set of test parameters to a small set of parameters that might be sufficient and meaningful in the process. Fractional factorial designs of resolution IV and V are used when parameter interaction is perceived to be significant and might expose more faults in the software under test. A sketch of such a design appears below.
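As a rough, generic illustration (not the paper's actual implementation), the following Python sketch constructs the textbook 2^(7-4) resolution III fractional factorial: seven two-level parameters covered in eight coded test cases, using the standard generators D = AB, E = AC, F = BC, G = ABC.

```python
from itertools import product

def fractional_factorial_2_7_4():
    """Classic 2^(7-4) resolution III design: 7 two-level factors in 8 runs.
    Base factors A, B, C take all +/-1 combinations; the remaining columns
    are generated as D = AB, E = AC, F = BC, G = ABC."""
    runs = []
    for a, b, c in product((-1, 1), repeat=3):
        runs.append((a, b, c, a * b, a * c, b * c, a * b * c))
    return runs

if __name__ == "__main__":
    for run in fractional_factorial_2_7_4():
        print(run)  # each tuple is one coded test case over the 7 parameters
```

Each row can then be mapped back from the coded -1/+1 levels to the concrete minimum/maximum (or blank/not-blank) settings of the corresponding test parameters.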
2.3. Logistic Regression Module

This module produces a model for test case prediction. It consists of two components: an estimation component for the unknown parameters and a prediction component. The estimation component estimates the unknown parameters based on the test parameters and the observed outcome (Y). The model first estimates the unknown parameters, and the prediction component then calculates, using the logistic model, the probability of each test case outcome. For the logistic regression model, Maximum Likelihood Estimation (MLE) is used to estimate the unknown parameters. The MLE seeks to maximize the log likelihood (LL), which reflects how likely it is (the odds) that the observed values of the outcome variable may be predicted from the observed values of the independent variables.

There are a variety of statistical techniques that can be used to predict a binary outcome variable (Y) from a set of independent variables. Multiple regression analysis and discriminant analysis are two related techniques that might be applicable. However, these techniques pose difficulties when the outcome variable (Y) can take only two values – an event occurring or not occurring. Ordinary regression deals with finding a function that relates a continuous outcome variable (Y) to one or more independent variables (X1, X2, ...). Simple linear regression assumes a function of the form

Y = β0 + β1·X1 + β2·X2 + β3·X3 + β4·X4 + ...

and finds the values of β0, β1, β2, β3, β4, etc. (β0 is called the intercept).

Logistic regression is a variation of ordinary regression, useful when the observed outcome is restricted to two values (usually coded as 1 and 0). It produces a formula that predicts the probability of the occurrence as a function of the independent variables. Logistic regression fits a special S-shaped curve, as shown in Figure 2, by taking the linear regression form, which can produce any Y-value between minus infinity and plus infinity, and transforming it with the function

P = e^Y / (1 + e^Y)

Figure 2 – Logistic Regression Curve

Logistic regression is a statistical technique that is widely used in medical research but rarely used in other areas such as software testing research. This paper focuses on the logistic regression model and its use in modeling the relationship between a dichotomous outcome variable (Y) and a set of independent variables.

The probability that a software failure occurs, given the test parameters X1, X2, ..., Xi, is denoted by [7]

π(X1, X2, ..., Xi) = P(Y = 1 | X1, X2, ..., Xi) = 1 / [1 + e^-(β0 + Σ(i=1..n) βi·Xi)]

where β0, β1, β2, ..., βi are estimated by the Maximum Likelihood (ML) estimation technique.
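As an informal illustration of the estimation and prediction components (a simple gradient-ascent fit of the logistic model on toy coded data; the tool's actual MLE implementation is not shown here), the model above could be fitted and applied roughly as follows:

import numpy as np

def fit_logistic(X, y, iterations=5000, learning_rate=0.1):
    """Maximum-likelihood fit of P(Y=1|x) = 1/(1+exp(-(b0 + b.x)))
    by gradient ascent on the log likelihood."""
    X = np.column_stack([np.ones(len(X)), X])   # prepend the intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iterations):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted failure probabilities
        beta += learning_rate * X.T @ (y - p) / len(y)
    return beta

def predict(beta, X):
    """Predicted probability that a test case produces a failure (Y = 1)."""
    X = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-X @ beta))

# Toy data: coded (+1/-1) DOE runs as rows, with observed outcomes (1 = failure).
X_train = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
y_train = np.array([0, 0, 1, 1])
beta = fit_logistic(X_train, y_train)
print(predict(beta, X_train).round(2))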
2.4. Analysis Module

This module is used to examine the results of the models produced by the tool and to keep a record of the number of test cases generated, the test cases that produced failures, and the test cases that did not produce failures. For these models, maximum likelihood (ML) estimation is used to estimate the unknown parameters. ML estimation is one of several alternative approaches that have been developed for estimating the parameters in a mathematical model. Because the logistic model is non-linear, ML estimation is the preferred estimation method for logistic regression.

In other models, such as multiple linear regression, the interpretation of the regression coefficients is straightforward: a coefficient gives the amount of change in the outcome variable for a one-unit change in the independent variable. To interpret the logistic coefficients, however, the logistic model can be rewritten in terms of the odds of an event occurring (a test case produced a failure or did not produce a failure). In other words, the logistic model can be represented in its logit form, which is obtained by a transformation of the model. The logit transformation, denoted logit P(X), is given as follows:

logit P(X) = ln[ P(X) / (1 - P(X)) ]

where

P(X) = 1 / [1 + e^-(β0 + Σ(i) βi·Xi)]

This transformation allows us to compute a number, logit P(X), for each test case with input variables given by X. The logistic regression model fits the log odds by a linear function of the variables [8]:

logit P(X) = β0 + Σ(i=1..n) βi·Xi

The analysis module's statistical criterion is based on the analysis of categorical data. As defined earlier, the outcome variable (Y) is binary: it can assume only two values, 1 or 0 (fail or pass). The module's evaluation criteria are predictive validity and misclassification rate. We use the criterion of predictive validity for validation and assessment, since we determine the absolute worth of a predictive model by examining its statistical significance. A model that does not meet the criterion of predictive validity should be rejected.

Predictive validity is the capability of the tool to predict the probability of a test case's outcome. The tool first checks the resulting model against the DOE-generated test cases (the training set) using the actual test case outcomes (1 or 0), and then validates the model against test cases outside the training set. A significance (p) value and a chi-square statistic are used to test model fitness.

With regard to the predictive models, two misclassification errors should be taken into consideration and examined. A Type 1 error occurs when a test case that produced a failure is classified as a test case that did not produce a failure, while a Type 2 error occurs when a test case that did not produce a failure is classified as a test case that produced a failure. Our aim is to keep both types of errors very small, especially Type 1. However, since the two types of errors are not independent, software testers should consider their different implications. As a result of a Type 1 error, a test case that actually produced a failure could get past the software tester; this would lead to the release of lower-quality software and additional testing effort when the failure eventually occurs in the field. As a result of a Type 2 error, a test case that actually passed will receive more testing effort than needed; this in turn wastes resources and may delay the release.

To validate the underlying approach and prove its effectiveness for the software testing process, it is necessary to show that such designs and models are effective in reducing test cases, covering the input domain efficiently, and correctly predicting software failures prior to testing. The tool's performance depends on how well it predicts the outcomes of test cases outside the generated training set.

2.5. Process

After modeling the software under test and selecting the test parameters, the tool follows these steps:
1. Generate a set of test cases using DOE designs (the training set).
2. Run the training set against the software under test and observe each test case outcome (Y) – 1 or 0.
3. Evaluate the model adequacy by examining the p value and the chi-square statistic.
4. If the model is adequate based on the validity criteria, use it to predict the outcomes of test cases outside the training set.
5. If the model is not adequate, add more test cases and develop a new logistic model. Restart from step 2.
6. Analyze the results.
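A minimal sketch of how the two error types might be tallied for a set of held-out test cases is shown below; the 0.5 decision threshold and all numbers are illustrative assumptions, not values prescribed by the tool:

def misclassification_report(y_observed, p_predicted, threshold=0.5):
    """Count Type 1 errors (failing test case predicted as passing) and
    Type 2 errors (passing test case predicted as failing)."""
    type1 = sum(1 for y, p in zip(y_observed, p_predicted) if y == 1 and p < threshold)
    type2 = sum(1 for y, p in zip(y_observed, p_predicted) if y == 0 and p >= threshold)
    return {"type_1": type1, "type_2": type2,
            "misclassification_rate": (type1 + type2) / len(y_observed)}

# Hypothetical held-out test cases (outside the DOE training set):
# observed outcomes and the model's predicted failure probabilities.
observed = [1, 0, 1, 0, 0]
predicted = [0.91, 0.08, 0.32, 0.14, 0.61]
print(misclassification_report(observed, predicted))
# -> {'type_1': 1, 'type_2': 1, 'misclassification_rate': 0.4}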
3. Benefits and Limitations

From our experience and based on the case studies, we conclude that this tool is particularly effective for applications where parameter interaction is of great importance. The approach is general enough to be applied and implemented with various kinds of software; database applications, network applications, telecommunication applications, and computationally intensive applications are good candidates. We found that the tool is most effective and produces good results with early releases of software. The contributions of this research are as follows:
1. Generating efficient test cases and gaining a great deal of insight into the quality of the software.
2. Predicting test case outcomes with high probability, which allows the software tester to prioritize the testing effort to achieve quality goals.
3. The prediction capability of the logistic regression provides a good reference for when to release software.
4. Ranking the test cases that produced failures and identifying the components that are more error-prone is another benefit.

More work needs to be done with parameters that take three or more values. Complete and exhaustive case studies need to be conducted to validate the effectiveness of the proposed tool. Further study and research could be conducted with mixed DOE designs.

4. Conclusion

We described a new tool for generating test cases and predicting software failures. When this tool is used with early releases of software, the predictive models proved useful in predicting software failures and in forecasting software readiness. We believe the tool's underlying approach (DOE and logistic regression) is applicable to most applications, especially if the tool is used to test early software releases. However, to increase our confidence in the results, we will conduct and examine several case studies with different types of applications.

5. References

[1] Malaiya, Y.K., Karcich, N., Li, F. and Skibbe, R., "The Relationship between Test Coverage and Reliability", Proceedings of the Fifth International Symposium on Software Reliability Engineering, Monterey, CA, 1994.
[2] Taguchi, G., Introduction to Quality Engineering: Designing Quality into Products and Processes, Asian Productivity Organization, Tokyo; American distribution by UNIPUB/Kraus International Publications, New York, 1988.
[3] Phadke, M.S., Quality Engineering Using Robust Design, Prentice Hall, Englewood Cliffs, N.J., 1989.
[4] Cochran, W.G. and Cox, G.M., Experimental Design, Wiley, New York, 1957.
[5] Kempthorne, O., The Design and Analysis of Experiments, Robert E. Krieger Publishing, New York, 1979.
[6] Dalal, S.R. and Patton, G.C., "Automatic Efficient Test Generator (AETG): A Test Generation System for Screen Testing, Protocol Verification and Feature Interaction", Bellcore, 1993.
[7] Kleinbaum, D.G. and Kupper, L.L., Applied Regression Analysis and Other Multivariable Methods, Duxbury Press, North Scituate, MA, 1978.
[8] Lindgren, B.W., Statistical Theory (3rd Ed.), Macmillan Publishing, New York, 1976.