A Framework for XML-based Integration of Data, Visualization and Analysis in a Biomedical D
Short-Answer Questions

1. What are the qualities of a good language teacher?
a. Non-intellectual qualities: psychological qualities are essential factors, including strong will-power, clear motivation, perseverance, and an out-going character.
b. Intellectual qualities: language learning ability, self-study ability, ability in the four language skills, and the application of CAI.
c. Application of CAI (computer-assisted instruction).
d. Teaching practice qualities.
e. Self-assessment qualities.

2. What is the difference between linguistic competence and communicative competence? What is communicative competence?
Communicative competence covers a variety of developments in syllabus design and in the methodology of foreign language teaching, and includes both knowledge of the language and knowledge about how to use the language appropriately in communicative situations.

3. What is the deductive method of teaching grammar? What is the inductive method of teaching grammar?
1) Deductive method: it relies on reasoning, analysing and comparison. First, the teacher writes an example on the board or draws attention to an example in the textbook. Then the teacher explains the underlying rules regarding the forms and positions of certain structural words.
2) Inductive method: the teacher provides learners with authentic language data and induces the learners to realise the grammar rules without any form of explicit explanation. It is believed that the rules will become evident if the students are given enough appropriate examples.

4. What are the principles for good lesson planning?
1) Variety: planning a number of different types of activities and, where possible, introducing students to a wide selection of materials, so that learning is always interesting and motivating and never monotonous for the students.
2) Flexibility: planning to use a number of different methods and techniques rather than being a slave to one methodology. This will make teaching and learning more effective and more efficient.
3) Linkage: the stages and the steps within each stage are planned in such a way that they are somehow linked with one another. Language learning needs recycling and reinforcement.
4) Learnability: the contents and tasks planned for the lesson should be within the learning capability of the students. Of course, things should not be too easy either. Doing things that are beyond or below the students' coping ability will diminish their motivation.

5. What is the difference between macro planning and micro planning?
Ideally, lesson planning should be done at two levels: macro planning and micro planning. The former is planning over time, for instance the planning for a month, a term, or the whole course. The latter is planning for a specific lesson, which usually lasts 40 or 50 minutes. Of course, there is no clear-cut difference between these two types of planning. Micro planning should be based on macro planning, and macro planning is apt to be modified as lessons go on.

6. What are the components of a lesson plan?
1) Teaching aims: the first thing to do in lesson planning is to decide the aims of a lesson, which include what language components to present, what communicative skills to practise, what activities to conduct, and what materials and teaching aids to be used.
2) Language contents and skills: language contents include structures (grammar), vocabulary, functions, topics and so on; language skills are the communicative skills involved in listening, speaking, reading and writing.
3) Teaching stages and procedures: teaching stages are the major steps that language teachers go through in the classroom; procedures are the detailed steps within each teaching stage. The most popular model of teaching stages is the three P's model, which includes presentation, practice and production.

7. What are the aspects of pronunciation?
Pronunciation is an umbrella term covering many aspects besides sounds and phonetic symbols, such as stress, intonation, and rhythm.

8. What are the principles for teaching listening?
1) Focus on process.
2) Combine listening with other skills.
3) Focus on the comprehension of meaning.
4) Grade the difficulty level appropriately.

9. What are the purposes of pre-listening, while-listening and post-listening activities?
1) Pre-listening: to spark interest and motivate students to attend to the spoken message; to activate or build students' prior topical and linguistic knowledge; to set purposes for listening.
2) While-listening: to foster students' comprehension of the speaker's language and ideas; to focus students' attention on such things as the speaker's organizational patterns; to encourage students' critical reactions and personal responses to the speaker's ideas and use of language.
3) Post-listening: to examine relationships between prior knowledge and experience and the new ideas and information gained from the speaker or discussion; to invite and encourage student reflection and response; to clarify and extend comprehension beyond the literal level to the interpretive and critical levels.

10. Can you name some types of speaking activities?
1) Controlled activities: mainly focus on form and accuracy.
2) Semi-controlled activities: focus more on meaning and communication.
3) Communicative activities: allow for real information exchange.

11. What is the bottom-up model of teaching reading?

12. What is the top-down model of teaching reading?

13. What are the purposes of pre-reading activities?
To interest and motivate students; to activate students' prior knowledge.

14. What is the process approach to writing?

15. What is the interrelationship between listening and speaking? What is the interrelationship between reading and writing?

16. Why should we integrate the four skills? What is skills integration?
a. Skills integration generally refers to linking two or more of the traditional four skills of language learning: reading, writing, listening, and speaking. There are many situations in which we use more than one language skill.
b. It is an integrating approach for the development of communicative skills in the classroom, in which the four skills involved in acquiring a foreign language can be taught in a coherent way and practised together.

17. What are the conditions for language learning according to Jane Willis' Framework for Task-Based Learning? Which conditions are essential and which is desirable?
a. Essential conditions: 1. exposure to a rich but comprehensible input of real spoken and written language in use; 2. use of the language to do things; 3. motivation to listen to and read the language and to speak and write it.
b. Desirable condition: instruction in language (i.e. chances to focus on form).

18. What are the means to integrate the four skills in teaching?
1) Simple integration.
2) Complex integration.

19. What are the methods of assessment?
Positive assessment; neglect assessment; teacher's assessment; continuous assessment; students' self-assessment; portfolios (personal growth records).

20. What are the criteria for assessment?
1) Criterion-referenced assessment.
2) Norm-referenced assessment.
3) Individual-referenced assessment.

21. What are the features of good textbooks?

22. What are the methods of adapting textbooks? What are the eight options in adapting textbooks?
NO Abbr aa1ABM Activity-based Management2AO Application Outsourcing3APICS American Production and Inventory4APICS Applied Manufacturing Education S5APO Advanced Planning and Optimizatio6APS Advanced Planning and Scheduling7ASP Application Service/Software Prov8ATO Assemble To Order9ATP Available To Promise10B2B Business to Business11B2C Business to Consumer12B2G Business to Government13B2R Business to Retailer14BIS Business Intelligence System15BOM Bill Of Materials16BOR Bill Of Resource17BPR Business Process Reengineering18BPM Business Process Management19BPS Business Process Standard20C/S Client/Server(C/S)\Browser/Server21CAD Computer-Aided Design22CAID Computer-Aided Industrial Design23CAM Computer-Aided Manufacturing24CAPP Computer-Aided Process Planning25CASE Computer-Aided Software Engineeri26CC Collaborative Commerce27CIMS Computer Integrated Manufacturing28CMM Capability Maturity Model29COMMS Customer Oriented Manufacturing M30CORBA Common Object Request Broker Arch31CPC Collaborative Product Commerce32CPIM Certified Production and Inventor33CPM Critical Path Method34CRM Customer Relationship Management35CRP capacity requirements planning36CTI Computer Telephony Integration37CTP Capable to Promise38DCOM Distributed Component Object Mode39DCS Distributed Control System40DMRP Distributed MRP41DRP Distribution Resource Planning42DSS Decision Support System43DTF Demand Time Fence44DTP Delivery to Promise45EAI Enterprise Application Integratio46EAM Enterprise Assets Management47ECM Enterprise Commerce Management48ECO Engineering Change Order49EDI Electronic Data Interchange50EDP Electronic Data Processing51EEA Extended Enterprise Applications 52EIP Enterprise Information Portal53EIS Executive Information System54EOI Economic Order Interval55EOQ Economic Order Quantity56EPA Enterprise Proficiency Analysis 57ERP Enterprise Resource Planning58ERM Enterprise Resource Management59ETO Engineer To Order60FAS Final Assembly Schedule61FCS Finite Capacity Scheduling62FMS Flexible Manufacturing System63FOQ Fixed Order Quantity64GL General Ledger65GUI Graphical User Interface66HRM Human Resource Management67HRP Human Resource Planning68IE Industry Engineering/Internet Exp 69ISO International Standard Organizati 70ISP Internet Service Provider71ISPE International Society for Product 72IT/GT Information/Group Technology73JIT Just In Time74KPA Key Process Areas75KPI Key Performance Indicators76LP Lean Production77MES Manufacturing Executive System78MIS Management Information System79MPS Master Production Schedule80MRP Material Requirements Planning81MRPII Manufacturing Resource Planning 82MTO Make To Order83MTS Make To Stock84OA Office Automation85OEM Original Equipment Manufacturing 86OPT Optimized Production Technology 87OPT Optimized Production Timetable88PADIS Production And Decision Informati 89PDM Product Data Management90PERT Program Evaluation Research Techn 91PLM Production Lifecycle Management 92PM Project Management93POQ Period Order Quantity94PRM Partner Relationship Management95PTF Planned Time Fence96PTX Private Trade Exchange97RCCP Rough-Cut Capacity Planning98RDBM Relational Data Base Management99RPM Rapid Prototype Manufacturing100RRP Resource Requirements Planning101SCM Supply Chain Management102SCP Supply Chain Partnership103SFA Sales Force Automation104SMED Single-Minute Exchange Of Dies105SOP Sales And Operation Planning106SQL Structure Query Language107TCO Total Cost Ownership108TEI Total Enterprise Integration109TOC Theory Of Constraints/Constraints110TPM Total Productive 
Maintenance111TQC Total Quality Control112TQM Total Quality Management113WBS Work Breakdown System114XML eXtensible Markup Language115ABC Classification(Activity Based Classification) 116ABC costing117ABC inventory control118abnormal demand119acquisition cost ,ordering cost120action message121action report flag122activity cost pool123activity-based costing(ABC)124actual capacity125adjust on hand126advanced manufacturing technology127advanced pricing128AM Agile Manufacturing129alternative routing130Anticipated Delay Report131anticipation inventory132apportionment code133assembly parts list134automated storage/retrieval syste135Automatic Rescheduling136available inventory137available material138available stock139available work140average inventory141back order142back scheduling143base currency144batch number145batch process146batch production147benchmarking148bill of labor149bill of lading150branch warehouse151bucketless system152business framework153business plan154capacity level155capacity load156capacity management157carrying cost158carrying cost rate159cellular manufacturing160change route161change structure162check point163closed loop MRP164Common Route Code(ID)165component-based development 166concurrent engineering167conference room pilot168configuration code169continuous improvement170continuous process171cost driver172cost driver rate173cost of stockout174cost roll-up175crew size176critical part177critical ratio178critical work center179CLT Cumulative Lead Time180current run hour181current run quantity182customer care183customer deliver lead time 184customer loyalty185customer order number186customer satisfaction187customer status188cycle counting189DM Data Mining190Data Warehouse191days offset192dead load193demand cycle194demand forecasting195demand management196Deming circle197demonstrated capacity198discrete manufacturing199dispatch to200DRP Distribution Requirements Plannin 201drop shipment202dunning letter203ECO workbench204employee enrolled205employee tax id206end item207engineering change mode flag208engineering change notice209equipment distribution210equipment management211exception control212excess material analysis213expedite code214external integration215fabrication order216factory order217fast path method218fill backorder219final assembly lead time220final goods221finite forward scheduling222finite loading223firm planned order224firm planned time fence225FPR Fixed Period Requirements226fixed quantity227fixed time228floor stock229flow shop230focus forecasting231forward scheduling232freeze code233freeze space234frozen order235gross requirements236hedge inventory237in process inventory238in stock239incrementing240indirect cost241indirect labor242infinite loading243input/output control244inspection ID245integrity246inter companies247interplant demands248inventory carry rate249inventory cycle time250inventory issue251inventory location type 252inventory scrap253inventory transfers254inventory turns/turnover 255invoice address256invoice amount gross257invoice schedule258issue cycle259issue order260issue parts261issue policy262item availability263item description264item number265item record266item remark267item status268job shop269job step270kit item271labor hour272late days273lead time274lead time level275lead time offset days276least slack per operation 277line item278live pilot279load leveling280load report281location code282location remarks283location status284lot for lot285lot ID286lot number287lot number traceability288lot size289lot size inventory290lot sizing291low 
level code292machine capacity293machine hours294machine loading295maintenance ,repair,and operating 296make or buy decision297management by exception298manufacturing cycle time299manufacturing lead time300manufacturing standards301master scheduler302material303material available304material cost305material issues and receipts306material management307material manager308material master,item master309material review board310measure of velocity311memory-based processing speed312minimum balance313Modern Materials Handling314month to date315move time , transit time316MSP book flag317multi-currency318multi-facility319multi-level320multi-plant management321multiple location322net change323net change MRP324net requirements325new location326new parent327new warehouse328next code329next number330No action report331non-nettable332on demand333on-hand balance334on hold335on time336open amount337open order338order activity rules339order address340order entry341order point342order point system343order policy344order promising345order remarks346ordered by347overflow location348overhead apportionment/allocation 349overhead rate,burden factor,absor 350owner's equity351parent item352part bills353part lot354part number355people involvement356performance measurement357physical inventory358picking359planned capacity360planned order361planned order receipts362planned order releases363planning horizon364point of use365Policy and procedure366price adjustments367price invoice368price level369price purchase order370priority planning371processing manufacturing372product control373product family374product mix375production activity control376production cycle377production line378production rate379production tree380PAB Projected Available Balance 381purchase order tracking382quantity allocation383quantity at location384quantity backorder385quantity completion386quantity demand387quantity gross388quantity in389quantity on hand390quantity scrapped391quantity shipped392queue time393rated capacity394receipt document395reference number396regenerated MRP397released order398reorder point399repetitive manufacturing 400replacement parts401required capacity402requisition orders403rescheduling assumption404resupply order405rework bill406roll up407rough cut resource planning 408rounding amount409run time410safety lead time411safety stock412safety time413sales order414scheduled receipts415seasonal stock416send part417service and support418service parts419set up time420ship address421ship contact422ship order423shop calendar424shop floor control425shop order , work order426shrink factor427single level where used428standard cost system429standard hours430standard product cost431standard set up hour432standard unit run hour433standard wage rate434status code435stores control436suggested work order437supply chain438synchronous manufacturing439time bucket440time fence441time zone442top management commitment443total lead time444transportation inventory445unfavorable variance, adverse446unit cost447unit of measure448value chain449value-added chain450variance in quantity451vendor scheduler,supplier schedul 452vendor scheduling453Virtual Enterprise(VE)/ Organizat 454volume variance455wait time456where-used list457work center capacity458workflow459work order460work order tracking461work scheduling462world class manufacturing excelle 463zero inventories464465Call/Contact/Work/Cost center 466Co/By-product467E-Commerce/E-Business/E-Marketing 468E-sales/E-procuement/E-partner 469independent/dependent demand470informal/formal 
system471Internet/Intranet/Extranet472middle/hard/soft/share/firm/group ware 473pegging/kitting/netting/nettable474picking/dispatch/disbursement lis475preflush/backflush/super backflus476yield/scrap/shrinkage (rate)477scrap/shrinkage factor478479costed BOM480engineering BOM481indented BOM482manufacturing BOM483modular BOM484planning BOM485single level BOM486summarized BOM487488account balance489account code490account ledger491account period492accounts payable493accounts receivable494actual cost495aging496balance due497balance in hand498balance sheet499beginning balance500cash basis501cash on bank502cash on hand503cash out to504catalog505category code506check out507collection508cost simulation509costing510current assets511current liabilities512current standard cost513detail514draft remittance515end of year516ending availables517ending balance518exchange rate519expense520financial accounting521financial entity522financial reports523financial statements524fiscal period525fiscal year526fixed assets527foreign amount528gains and loss529in balance530income statement531intangible assets532journal entry533management accounting534manual reconciliation535notes payable536notes receivable537other receivables538pay aging539pay check540pay in541pay item542pay point543pay status544payment instrument545payment reminder546payment status547payment terms548period549post550proposed cost551simulated cost552spending variance,expenditure var 553subsidiary554summary555tax code556tax rate557value added tax558559as of date , stop date560change lot date561clear date562date adjust563date available564date changed565date closed566date due567date in produced568date inventory adjust569date obsolete570date received571date released572date required573date to pull574earliest due date575effective date576engineering change effect date 577engineering stop date578expired date579from date580last shipment date581need date582new date583pay through date584receipt date585ship date586587allocation588alphanumeric589approver590assembly591backlog592billing593bill-to594bottleneck595bulk596buyer597component598customer599delivery600demand601description602discrete603ergonomics604facility605feature606forecast607freight608holidays609implement610ingredient611inquire612inventory613item614job615Kanban616level617load618locate619logistics620lot621option622outstanding623overhead624override625overtime626parent627part628phantom629plant630preference631priority632procurement633prototyping634queue635quota636receipt637regeneration638remittance639requisition640returned641roll642routing643schedule644shipment645ship-to646shortage647shrink648spread649statement650subassembly651supplier652transaction653what-if654655post-deduct inventory transaction 656pre-deduct inventory transaction 657generally accepted manufacturing658direct-deduct inventory transacti 659Pareto Principle660Drum-buffer-rope661663Open Database Connectivity664Production Planning665Work in Process666accelerated cost recovery system 667accounting information system668acceptable quality kevel669constant purchasing power account 670break-even analysis671book value672cost-benefit analysis673chief financial office674degree of financial leverage675degree of operating leverage676first-in , first-out677economic lot size678first-in ,still-here679full pegging680linear programming681management by objective682value engineering683zero based budgeting684CAQ computer aided quality assurance 685DBMS database management system686IP Internet Protocol687TCP T ransmission Control Protocol 689690API Advanced Process 
Industry691A2A Application to Application692article693article reserves694assembly order695balance-on-hand-inventory696bar code697boned warehouse698CPA Capacity Requirements Planning 699change management700chill space701combined transport702commodity inspection703competitive edge704container705container transport706CRP Continuous Replenishment Program707core competence708cross docking709CLV Customer Lifetime Value710CReM Customer Relationship Marketing 711CSS Customer Service and Support712Customer Service Representative 713customized logistics714customs declaration715cycle stock716data cleansing717Data Knowledge and Decision Suppo 718data level integration719data transformation720desktop conferencing721distribution722distribution and logistics723distribution center724distribution logistics725distribution processing726distribution requirements727DRP distribution resource planning 728door-to-door729drop and pull transport730DEM Dynamic Enterprise Module731ECR Efficient Consumer Response732e-Government Affairs733EC Electronic Commerce734Electronic Display Boards735EOS Electronic order system736ESD Electronic Software Distribution 737embedding738employee category739empowerment740engineering change effect work or 741environmental logistics742experiential marketing743export supervised warehouse744ERP Extended Resource Planning745field sales/cross sale/cross sell 746franchising747FCL Full Container Load748Global Logistics Management749goods collection750goods shed751goods shelf752goods stack753goods yard754handing/carrying755high performance organization756inland container depot757inside sales758inspection759intangible loss760internal logistics761international freight forwarding 762international logistics763invasive integration764joint distribution765just-in-time logistics766KM Knowledge Management767lead (customer) management768learning organization769LCL less than container load770load balancing771loading and unloading772logistics activity773logistics alliance774logistics center775logistics cost776logistics cost control777logistics documents778logistics enterprise779logistics information780logistics management781logistics modulus782logistics network783logistics operation784LRP Logistics Resource Planning785logistics strategy786logistics strategy management787logistics technology788MES Manufacture Execute System789mass customization790NPV Net Present Value791neutral packing792OLAP On-line Analysis Processing793OAG Open Application Group794order picking795outsourcing796package/packaging797packing of nominated brand798palletizing799PDA Personal Digital Assistant800personalization801PTF Planning time fence802POS Point Of Sells803priority queuing804PBX Private Branch Exchange805production logistics806publish/subscribe807quality of working life808Quick Response809receiving space810REPs Representatives811return logistics812ROI Return On Investment813RM Risk Management814sales package815scalability816shipping space817situational leadership818six sigma819sorting/stacking820stereoscopic warehouse821storage822stored procedure823storehouse824storing825SRM Supplier Relationship Management 826tangible loss827team building828TEM Technology-enabled Marketing829TES Technology-enabled Selling830TSR TeleSales Service Representative 831TPL Third-Part Logistics832through transport833unit loading and unloading834Value Management835value-added logistics service 836Value-chain integration837VMI Vender Managed Inventory838virtual logistics839virtual warehouse840vision841volume pricing model842warehouse843waste material 
logistics844workflow management845zero latency846ZLE Zero Latency Enterprise847ZLP Zero Latency Process848zero-inventory technologyCC S F NUM基于作业活动管理F10应用程序外包E21美国生产与库存管理协会ext L651实用制造管理系列培训教材ext C652先进计划及优化技术F14高级计划与排程技术F15应用服务/软件供应商L22定货组装L24可供销售量(可签约量)L31企业对企业(电子商务)F51企业对消费者(电子商务)F52企业对政府(电子商务)F53企业对经销商(电子商务)F54商业智能系统E47物料清单bom L471资源清单L43业务/企业流程重组E49业务/企业流程管理E49业务/企业流程标准E50客户机/服务器\浏览器/服务器abr L457计算机辅助设计L75计算机辅助工艺设计L76计算机辅助制造L77计算机辅助工艺设计L78计算机辅助软件工程L79协同商务E68计算机集成制造系统L73能力成熟度模型L55面向客户制造管理系统ext L653通用对象请求代理结构F70协同产品商务E69生产与库存管理认证资格ext F654关键线路法L92客户关系管理L102能力需求计划L60电脑电话集成(呼叫中心)L74可承诺的能力F56分布式组件对象模型F121分布式控制系统L122分布式MRP L123分销资源计划L125决策支持系统L110需求时界L115可承诺的交货时间F111企业应用集成E140企业资源管理E141企业商务管理F142工程变更订单D139电子数据交换L131电子数据处理F132扩展企业应用系统F152企业信息门户E143高层领导信息系统F150经济定货周期L129经济订货批量(经济批量法)L130企业绩效分析144企业资源计划L145企业资源管理L145专项设计,按订单设计L136最终装配计划L160有限能力计划L162柔性制造系统L171固定定货批量法L167总账cid D522图形用户界面F178人力资源管理L181人力资源计划L182工业工程/浏览器188国际标准化组织F194互联网服务提供商F195国际生产力促进会ext F655信息/成组技术abr F458准时制造/准时制生产L218关键过程域L220关键业绩指标F219精益生产L227制造执行系统L254管理信息系统L252主生产计划L259物料需求计划L268制造资源计划D256定货(订货)生产L249现货(备货)生产L250办公自动化L292原始设备制造商E311最优生产技术E300最优生产时刻表E301生产和决策管理信息系统L346产品数据管理L342计划评审技术L352产品生命周期管理E348项目管理353周期定量法L323合作伙伴关系管理F320计划时界L330自用交易网站F339粗能力计划L385关系数据库管理F372快速原形制造F367资源需求计划D380供应链管理L420供应链合作伙伴关系L421销售自动化L392快速换模法L408销售与运作规划L391结构化查询语言F417总体运营成本F428全面企业集成F429约束理论/约束管理L423全员生产力维护F431全面质量控制L432全面质量管理L433工作分解系统F448可扩展标记语言F153 ABC分类法T1作业成本法F2 ABC 库存控制D3反常需求D4定货费L5行为/活动(措施)信息D6活动报告标志D7作业成本集L8作业基准成本法/业务成本法L9实际能力D11调整现有库存量D12先进制造技术L13高级定价系统D16敏捷制造L17替代工序(工艺路线)D18拖期预报T19预期储备L20分摊码D23装配零件表D25自动仓储/检索系统C26计划自动重排T27可达到库存D28可用物料D29达到库存T30可利用工时T32平均库存D33欠交(脱期)订单L34倒排(序)计划/倒序排产?L35本位币D36批号D37批流程L38批量生产D39标杆瞄准(管理)sim F586工时清单D41提货单D42分库D44无时段系统L45业务框架D46经营规划L48能力利用水平L57能力负荷D58能力管理L59保管费L61保管费率D62单元式制造T63修改工序D64修改产品结构D65检查点sim D66闭环MRP L67通用工序标识T71组件(构件)开发技术F72并行(同步)工程L80会议室模拟L81配置代码D82进取不懈C84连续流程L85作业成本发生因素L86作业成本发生因素单位费用L87短缺损失L88成本滚动计算法L89班组规模D90急需零件D91紧迫系数L93关键工作中心L94累计提前期L95现有运转工时D96现有运转数量D97客户关怀D98客户交货提前期L99客户忠诚度F100客户订单号D101客户满意度F103客户状况D104周期盘点L105数据挖掘F106数据仓库F107偏置天数L108空负荷T109需求周期L112需求预测D113需求管理L114戴明环ext L116实际能力C117离散型生产L119调度D120分销需求计划L124直运C126催款信D127 ECO工作台D128在册员工D133员工税号D134最终产品D135工程变更方式标志D137工程变更通知D138设备分配D146设备管理D147例外控制D148呆滞物料分析D149急送代码T151外部集成F154加工订单T155工厂订单D156快速路径法D157补足欠交D158总装提前期D159成品D161有限顺排计划L163有限排负荷L164确认的计划订单L165确认计划需求时界L166定期用量法L168固定数量法D169固定时间法D170作业现场库存L172流水车间T173调焦预测T174顺排计划L175冻结码D176冷冻区D176冻结订单D177毛需求L179囤积库存L180在制品库存D183在库D184增值D185间接成本D186间接人工D187无限排负荷L189投入/产出控制L190检验标识D191完整性D192公司内部间D193厂际需求量T196库存周转率D197库存周期D197库存发放D198仓库库位类型D199库存报废量D200库存转移D201库存(资金)周转次数L202发票地址D203发票金额D204发票清单D205发放周期D206发送订单T207发放零件D208发放策略D209项目可供量D210项目说明D211项目编号D212项目记录T213项目备注D214项目状态D215加工车间L216作业步骤D217配套件项目D221人工工时D222延迟天数D223提前期L224提前期水平D225提前期偏置(补偿)天数D226最小单个工序平均时差C228单项产品T229应用模拟L230负荷量T231负荷报告T232仓位代码D233仓位备注T234仓位状况T235按需定货(因需定量法/缺补法)L236批量标识T237批量编号T238批号跟踪D239批量D240批量库存L241批量规划L242低层(位)码L243机器能力D244机时D245机器加载T246维护修理操作物料C247外购或自制决策D248例外管理法L251制造周期时间T253制造提前期D255制造标准D257主生产计划员L260物料L261物料可用量L262物料成本D263物料发放和接收D264物料管理L265物料经理L266物料主文件L267物料核定机构L269生产速率水平C270基于存储的处理速度F271最小库存余量L272现代物料搬运C273月累计D274传递时间L275 MPS登录标志T276多币制D277多场所D278多级D279多工厂管理F280多重仓位T281净改变法L282净改变式MRP 
T283净需求L284新仓位D285新组件D286新仓库D287后续编码D288后续编号D289不活动报告D290不可动用量C291急需的D293现有库存量D294挂起D295准时D296未清金额D298未结订单/开放订单L299订单活动规则D302订单地址D303订单输入T304定货点T305定货点法L306定货策略L307定货承诺T308定货备注T309定货者D310超量库位D312间接费分配L313间接费率L314所有者权益L315母件L316零件清单D317零件批次D318零件编号D319全员参治C321业绩评价L322实际库存D324领料/提货D325计划能力L326计划订单L327计划产出量L328计划投入量L329计划期/计划展望期L331使用点C332工作准则与工作规程L333价格调整D334发票价格D335物价水平D336采购订单价格D337优先计划D338流程制造D340产品控制D341产品系列D343产品搭配组合C344生产作业控制L345生产周期L347产品线D349产品率D350产品结构树T351预计可用库存(量)L354采购订单跟踪D355已分配量D356仓位数量T357欠交数量D358完成数量D359需求量D360毛需求量D361进货数量T362现有数量D363废品数量D364发货数量D365排队时间L366额定能力L368收款单据D369参考号D370重生成式MRP T371下达订单L373再订购点D374重复式生产(制造)L375替换零件D376需求能力L377请购单D378重排假设T379补库单L381返工单D382上滚D383粗资源计划D384舍入金额D386加工(运行)时间L387安全提前期L388安全库存L389保险期T390销售订单D393计划接收量(预计入库量/预期到货量)L394季节储备L395发送零件T396服务和支持D397维修件T398准备时间L399发运地址D400发运单联系人D401发货单D402工厂日历(车间日历)L403车间作业管理(控制)L404车间订单L405损耗因子(系数)D406单层物料反查表D407标准成本体系L409标准工时D410标准产品成本D411标准机器设置工时T412标准单位运转工时T413标准工资率T414状态代码D415库存控制T416建议工作单D418供应链L419同步制造/同期生产C422时段(时间段)L424时界L425时区L426领导承诺C427总提前期L430在途库存L434不利差异L435单位成本T436计量单位D437价值链L438增值链C439量差D440采购计划员/供方计划员L442采购计划法T443虚拟企业/公司L444产量差异L445等待时间L446反查用物料单L447工作中心能力L449工作流L450工作令T451工作令跟踪T452工作进度安排T453国际优秀制造业C454零库存T455456呼叫/联络/工作/成本中心abr X459联/副产品abr X460电子商务/电子商务/电子集市abr X461电子销售/电子采购/电子伙伴abr独立需求/相关需求件abr X462非/规范化管理系统abr X463互联网/企业内部网/企业外联网abr X464中间/硬/软/共享/固/群件abr X465追溯(反查)/配套出售件/净需求计算abr X466领料单(或提货单)/派工单/发料单abr X467预冲/倒冲法/完全反冲abr X468成品率/废品率/缩减率abr X469残料率(废品系数)/损耗系数abr fromchen470成本物料清单bom D472设计物料清单bom L473缩排式物料清单bom L474制造物料清单bom L475模块化物料清单bom L476计划物料清单bom L477单层物料清单bom D478汇总物料清单bom L479480账户余额cid D481账户代码cid D482分类账cid D483会计期间cid D484应付账款cid L485应收账款cid L486实际成本cid D487账龄cid D488到期余额cid D489现有余额cid D490资产负债表cid D491期初余额cid D492现金收付制cid D493银行存款cid L494现金cid L495支付给cid D496目录cid D497分类码cid D498结帐cid D499催款cid D500成本模拟cid D501成本核算cid D502流动资产cid L503流动负债cid L504现行标准成本cid C505明细cid D506汇票汇出cid D507年末cid D508期末可供量cid D509期末余额cid D510汇率cid D511费用cid D512财务会计cid L513财务实体cid L514财务报告cid D515财务报表cid D516财务期间cid D517财政年度cid D518固定资产cid L519外币金额cid D520损益cid D521平衡cid D523损益表cid D524无形资产cid L525分录cid D526管理会计cid L527手工调账cid D528应付票据cid L529应收票据cid L530其他应收款cid L531付款账龄cid D532工资支票cid D533缴款cid D534付款项目cid D535支付点cid D536支付状态cid D537付款方式cid D538催款单cid D539付款状态cid D540付款期限cid D541期间cid D542过账cid D543建议成本cid L544模拟成本cid L545开支差异cid L546明细账cid D547汇总cid D548税码cid D549税率cid D550增值税cid D551552截止日期dat D553修改批量日期dat D554结清日期dat D555调整日期dat D556有效日期dat D557修改日期dat D558结束日期dat D559截止日期dat560生产日期dat D561库存调整日期dat D562作废日期dat D563收到日期dat D564交付日期dat D565需求日期dat D566发货日期dat D567最早订单完成日期dat L568生效日期dat D569工程变更生效日期dat D570工程停止日期dat D571失效日期,报废日期dat D572起始日期dat D573最后运输日期dat T574需求日期dat D575新日期dat D576付款截止日期dat D577收到日期dat D578发运日期dat D579580已分配量sim D581字母数字sim C582批准者sim D583装配(件)sim D584未结订单/拖欠订单sim L585开单sim D587发票寄往地sim C588瓶颈资源sim L589散装sim D590采购员sim T591子件/组件sim L592客户sim D593交货sim D594需求sim D595说明sim D596离散sim D597工效学(人类工程学)sim L598设备、功能sim D599基本组件/特征件sim L600预测sim D601运费sim D602例假日sim D603实施sim D604配料、成分sim D605查询sim D606库存sim L607物料项目sim D608作业sim D609看板sim T610层次(级)sim D611负荷sim D612定位sim D613后勤保障体系;物流管理sim L614批次sim D615可选件sim L616逾期未付sim D617制造费用sim D618覆盖sim C619加班sim D620双亲(文件)sim D621零件sim D622虚拟件sim L623工厂,场所sim D624优先权sim D626优先权(级)sim D627采购sim628原形测试sim L629队列sim T630任务额,报价sim D631收款、收据sim D632全重排法sim C633汇款sim D634请购单sim L635退货sim D636滚动sim D637工艺线路sim L638计划表sim D639发运量sim D640交货地sim C641短缺sim D642损耗sim D643分摊sim D644报表sim D645子装配件sim D646供应商sim D647事务处理sim F648如果怎样-将会怎样sim C649650后减库存处理法ext 
T656前减库存处理法ext T657通用生产管理原则ext T658直接增减库存处理法ext T659帕拉图原理ext L660鼓点-缓冲-绳子ext T661开放数据库互连fromchen生产规划编制 fromchen在制品 fromchen快速成本回收制度fromchen会计信息系统 fromchen可接受质量水平 fromchen不买够买力会计fromchen保本分析fromchen帐面价值fromchen成本效益分析fromchen财务总监fromchen财务杠杆系数fromchen经济杠杆系数fromchen先进先出法 fromchen经济批量 fromchen后进先出法fromchen完全跟踪 fromchen线性规划 fromchen目标管理 fromchen价值工程 fromchen零基预算fromchen计算机辅助质量保证 fromchen数据库管理系统 fromchen网际协议 fromchen传输控制协议 fromchen高级流程工业fromAMT应用到应用(集成)fromAMT物品fromAMT物品存储fromAMT装配订单fromAMT现有库存余额fromAMT条形码fromAMT保税仓库fromAMT能力需求计划fromAMT变革管理fromAMT冷藏区fromAMT联合运输fromAMT进出口商品检验fromAMT竞争优势fromAMT集装箱fromAMT集装箱运输fromAMT连续补充系数fromAMT。
About the Tutorial

SoapUI is an open-source tool used for functional and non-functional testing, widely used in WebServices testing. This brief tutorial introduces the readers to the basic features and usage of SoapUI, and guides them on how to utilize the tool in WebService and other non-functional testing.

Audience

This tutorial has been prepared for beginners to help them understand how to use the SoapUI tool.

Prerequisites

As a reader of this tutorial, you should have a basic understanding of the client/server environment, and knowledge of SOAP, WSDL, XML, and XML namespaces.

Copyright & Disclaimer

Copyright 2018 by Tutorials Point (I) Pvt. Ltd. All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. The user of this e-book is prohibited from reusing, retaining, copying, distributing or republishing any contents or part of the contents of this e-book in any manner without the written consent of the publisher. We strive to update the contents of our website and tutorials as timely and as precisely as possible; however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt. Ltd. provides no guarantee regarding the accuracy, timeliness or completeness of our website or its contents, including this tutorial. If you discover any errors on our website or in this tutorial, ******************************************

Table of Contents

1. SOAP ─ Introduction
2. SOAP ─ Messages
3. SOAP ─ What is REST?
4. SoapUI ─ Introduction
5. SoapUI ─ Capabilities
6. SoapUI ─ NG Pro
7. SoapUI ─ Installation & Configuration
8. SoapUI ─ WSDL (WSDL Usage; Understanding WSDL; Format and Elements; WSDL ─ Port Type; Patterns of Operation)
9. SoapUI ─ Project (Create a SOAP Project; Add a WSDL; Details View)
10. SoapUI ─ TestSuite (Creation of TestSuite)
11. SoapUI ─ TestCase
12. SoapUI ─ TestStep
13. SoapUI ─ Request & Response (Request Setup; Response; HTTP Request; HTTP Response)
14. SoapUI ─ Properties (Defining Properties; Accessing Property)
15. SoapUI ─ Property Transfer (Adding Property Transfer; Transferring a Property)
16. SoapUI ─ Logs Pane (SoapUI Log; HTTP Log; Error Log; Memory Log)
17. SoapUI ─ Assertions
18. Assertion ─ Contains
19. Assertion ─ Not Contains
20. Assertion ─ XPath Match
21. Assertion ─ XQuery Match
22. Assertion ─ Script
23. SoapUI ─ Troubleshooting
24. SoapUI ─ Performance Testing (Types of Performance Testing; Key Aspects in Web Service)
25. SoapUI ─ Load Testing (Creation of Load Test; Execution of Load Test; Adding an Assertion)
26. SoapUI ─ RESTful Web Services
27. REST ─ Project Setup
28. REST ─ WADL
29. REST ─ Request
30. REST ─ Response
31. REST ─ HTTP Methods (POST; GET; PUT; PATCH; DELETE)
32. SoapUI ─ JDBC Connection
33. SoapUI ─ JDBC Property
34. SoapUI ─ JDBC Assertion

SOAP ─ Introduction

SOAP is the acronym for Simple Object Access Protocol. It is defined by the World Wide Web Consortium (W3C) at http://www.w3.org/TR/2000/NOTE-SOAP-20000508 as follows: SOAP is a lightweight protocol for the exchange of information in a decentralized, distributed environment.
It is an XML-based protocol that consists of three parts: an envelope that defines a framework for describing what is in a message and how to process it; a set of encoding rules for expressing instances of application-defined data types; and a convention for representing remote procedure calls and responses.

SOAP ─ Important Features

Following are some important features of SOAP.
∙ It is a communication protocol designed to communicate via the Internet.
∙ It can extend HTTP for XML messaging.
∙ It provides data transport for Web services.
∙ It can exchange complete documents or call a remote procedure.
∙ It can be used for broadcasting a message.
∙ It is both platform and language independent.
∙ It is the XML way of defining what information is sent and how.
∙ It enables client applications to easily connect to remote services and invoke remote methods.

Although SOAP can be used in a variety of messaging systems and can be delivered via a variety of transport protocols, the initial focus of SOAP is remote procedure calls transported via HTTP. Other frameworks such as CORBA, DCOM, and Java RMI provide similar functionality to SOAP, but SOAP messages are written entirely in XML and are therefore uniquely platform- and language-independent.

SOAP ─ Messages

A SOAP message is an ordinary XML document containing the following elements:
∙ Envelope: Defines the start and the end of the message. It is a mandatory element.
∙ Header: Contains any optional attributes of the message used in processing the message, either at an intermediary point or at the ultimate end-point. It is an optional element.
∙ Body: Contains the XML data comprising the message being sent. It is a mandatory element.
∙ Fault: An optional element that provides information about errors that occur while processing the message.

All these elements are declared in the default namespace for the SOAP envelope: http://www.w3.org/2001/12/soap-envelope

The default namespace for SOAP encoding and data types is: http://www.w3.org/2001/12/soap-encoding

Note: All these specifications are subject to change. Thus, keep updating yourself with the latest specifications available on the W3C website.

SOAP ─ Message Structure

The following block depicts the general structure of a SOAP message:
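Below is a minimal sketch of such a message, using the SOAP envelope and encoding namespaces quoted above; the m:GetQuotation element, its namespace URI (example.com) and the comments are illustrative placeholders only, not part of any real service.

```xml
<?xml version="1.0"?>
<SOAP-ENV:Envelope
   xmlns:SOAP-ENV="http://www.w3.org/2001/12/soap-envelope"
   SOAP-ENV:encodingStyle="http://www.w3.org/2001/12/soap-encoding">

   <SOAP-ENV:Header>
      <!-- optional: attributes used while processing the message -->
   </SOAP-ENV:Header>

   <SOAP-ENV:Body>
      <!-- mandatory: the XML data being sent -->
      <m:GetQuotation xmlns:m="http://example.com/quotation">
         <m:Item>Computers</m:Item>
      </m:GetQuotation>
      <!-- an optional Fault element would appear here if an error occurred -->
   </SOAP-ENV:Body>

</SOAP-ENV:Envelope>
```

The Envelope is the outermost element; the optional Header and the mandatory Body sit inside it, and a Fault, when present, is carried inside the Body.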
SOAP ─ What is REST?

REST is the acronym for Representational State Transfer. It can be defined as an architectural style for designing software. REST is not a specification or a W3C standard; hence, it is easier to work with RESTful services, and no middleware specification framework is required.

REST ─ Important Features

Following are some important features of REST.
∙ It relies on a stateless, client-server, cacheable communication protocol – in virtually all cases, HTTP is used.
∙ It is a lightweight alternative to WebServices and RPC (Remote Procedure Call) mechanisms such as SOAP-WSDL.
∙ It represents everything with a unique ID or URI.
∙ It makes use of the standard HTTP methods, such as GET, POST, PUT, DELETE.
∙ It links resources together.
∙ REST resources can have multiple representations.
∙ Any named information is considered a Resource. For example, an image, a person, or a document can each be considered a resource and represented by a unique ID or URI.
∙ The World Wide Web itself, being based on HTTP, can be viewed as a REST-based architecture.

REST services are platform and language independent. Since REST is based on HTTP standards, it can easily work in the presence of firewalls. Like WebServices, REST doesn't offer any built-in security, session management, or QoS guarantee, but these can be added by building on top of HTTP. For encryption, REST can be used on top of HTTPS.
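As a brief illustration of these ideas, the exchange below shows a resource identified by a URI being retrieved with the standard GET method and returned as an XML representation; the host example.com, the /customers path, and the customer element are hypothetical names used only to illustrate the style.

```
GET /customers/1042 HTTP/1.1
Host: example.com
Accept: application/xml

HTTP/1.1 200 OK
Content-Type: application/xml

<customer id="1042">
   <name>Jane Doe</name>
</customer>
```

The other standard methods act on resources in the same uniform way: POST creates a new resource, PUT updates or replaces one, and DELETE removes it.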
SoapUI ─ Introduction

SoapUI is a tool which can be used for both functional and non-functional testing. It is not limited to web services, though it is the de-facto tool used in web services testing.

SoapUI ─ Important Features

Following are some important features of SoapUI.
∙ It is capable of performing the role of both client and service.
∙ It enables users to create functional and non-functional tests quickly and efficiently using a single environment.
∙ It is licensed under the terms of the GNU Lesser General Public License (LGPL).
∙ It is implemented purely on the Java platform.
∙ It supports Windows, Mac, and multiple Linux distributions.
∙ It allows testers to execute automated functional, regression, compliance, and load tests on different Web APIs.
∙ It supports all the standard protocols and technologies needed to test all kinds of APIs.

SoapUI can be used for complete RESTful API and SOAP Web Service testing. It supports Functional Testing, Performance Testing, Interoperability Testing, Regression Testing, Load Testing, and much more. It is user friendly, and it is easy to convert functional tests into non-functional tests such as load and stress tests.

SoapUI ─ Capabilities

SoapUI is rich in the following five aspects:
∙ Functional Testing
∙ Security Testing
∙ Load Testing
∙ Protocols and Technologies
∙ Integration with other tools

Let's learn more about each of these capabilities.

Functional Testing
∙ SoapUI allows testers to write functional API tests in SoapUI.
∙ SoapUI supports a drag-and-drop feature that accelerates script development.
∙ SoapUI supports debugging of tests and allows testers to develop data-driven tests.
∙ SoapUI supports multiple environments, making it easy to switch among QA, Dev, and Prod environments.
∙ SoapUI allows advanced scripting (the tester can develop custom code depending on the scenario).

Security Testing
∙ SoapUI performs a complete set of vulnerability scans.
∙ SoapUI prevents SQL injection to secure the databases.
∙ SoapUI scans for stack overflows caused by huge documents.
∙ SoapUI scans for cross-site scripting, which occurs when service parameters are exposed in messages.
∙ SoapUI performs fuzzing scans and boundary scans to avoid erratic behavior of the services.

Load Testing
∙ SoapUI distributes load tests across any number of LoadUI agents.
∙ SoapUI simulates high-volume, real-world load testing with ease.
∙ SoapUI allows advanced custom reporting to capture performance parameters.
∙ SoapUI allows end-to-end system performance monitoring.

Protocols & Technologies
SoapUI supports a wide range of protocols:
∙ SOAP – Simple Object Access Protocol
∙ WSDL – Web Services Description Language
∙ REST – Representational State Transfer
∙ HTTP – Hypertext Transfer Protocol
∙ HTTPS – Hypertext Transfer Protocol Secure
∙ AMF – Action Message Format
∙ JDBC – Java Database Connectivity
∙ JMS – Java Message Service

Integration with Other Tools
∙ Apache Maven Project
∙ Hudson
∙ JUnit
∙ Apache Ant, and more

SoapUI ─ NG Pro

SoapUI is an open-source, free tool with basic testing features, while SoapUI NG Pro is a commercial tool with advanced features such as reporting, data-driven functionality and much more.

Comparison
The following table compares and contrasts the various features of SoapUI and SoapUI NG Pro.

SoapUI ─ Installation & Configuration

SoapUI is a cross-platform tool. It supports Windows, Linux, and Mac operating systems.

Prerequisites
∙ Processor: 1 GHz or higher 32-bit or 64-bit processor.
∙ RAM: 512 MB of RAM.
∙ Hard Disk Space: minimum 200 MB of hard disk space for installation.
∙ Operating System Version: Windows XP or later, Mac OS 10.4 or later.
∙ Java: Java 6 or later.

Download Process
Step 1: Go to https://www.soapui.org/ and click Download SoapUI.
Step 2: Click 'Get It' to download SoapUI Open Source. It will start downloading a 112 MB .exe file onto the system. Wait till the download process is complete.
Ministry of Education Releases the "Guidelines for Moral and Political Education in College English Courses"

The Ministry of Education has recently released the "Guidelines for Moral and Political Education in College English Courses", providing a comprehensive framework for integrating moral and political education into college English teaching. This move is part of the government's efforts to promote the cultivation of well-rounded individuals who are not only proficient in language skills but also have a strong sense of social responsibility and ethical values.

The guidelines emphasize the importance of incorporating ideological and moral education into English courses, in order to help students develop a deeper understanding of Chinese traditional culture, socialist core values, and the principles of a harmonious society. The aim is to nurture students' patriotic spirit, moral integrity, social responsibility, and critical thinking skills, while also enriching their understanding of global issues and promoting cross-cultural communication.

One of the key principles outlined in the guidelines is the integration of moral and political education with language teaching, in order to create a holistic learning experience for students. This involves incorporating relevant themes, topics, and materials into the English curriculum, such as discussions on social justice, environmental protection, gender equality, and cultural diversity. Through these discussions, students are encouraged to reflect on their own values and beliefs, and to develop a more nuanced understanding of their role in society.

The guidelines also stress the importance of fostering a positive classroom environment that encourages open dialogue, critical thinking, and mutual respect among students. Teachers are encouraged to create opportunities for students to engage in meaningful discussions, debates, and group activities that promote collaboration and teamwork. By creating a supportive and inclusive learning environment, teachers can help students develop essential skills such as empathy, tolerance, and communication, which are crucial for building a harmonious society.

In addition, the guidelines highlight the importance of incorporating real-life examples, case studies, and current events into English lessons, in order to make the content more relevant and engaging for students. By exploring real-world issues and challenges, students can develop a deeper understanding of the complexities of the world around them, and learn how to apply their language skills to address important social issues.

Overall, the "Guidelines for Moral and Political Education in College English Courses" provide a comprehensive framework for promoting the integration of moral and political education into college English teaching. By incorporating these principles into their teaching practices, educators can help students develop the knowledge, skills, and attitudes needed to become responsible citizens and global leaders in an increasingly complex and interconnected world.
[生活]计算机专业英语词汇缩写大全计算机专业英语词汇缩写大全计算机专业英语词汇缩写大全(J-Z)2010年01月06日星期三 12:47J J2EE — Java 2 Enterprise Edition J2ME — Java 2 Micro Edition J2SE — Java 2 Standard Edition JAXB — Java Architecture for XML Binding JAX-RPC — Java XML for Remote Procedure Calls JAXP — Java API for XML Processing JBOD — Just a Bunch of Disks JCE — Java Cryptography Extension JCL — Job Control Language JCP — Java Community Process JDBC — Java Database Connectivity JDK — Java Development KitJES — Job Entry SubsystemJDS — Java Desktop SystemJFC — Java Foundation Classes JFET — Junction Field-Effect Transistor JFS — IBM Journaling File System JINI — Jini Is Not InitialsJIT — Just-In-TimeJMX — Java Management Extensions JMS — Java Message Service JNDI — Java Naming and Directory Interface JNI — Java Native InterfaceJPEG — Joint Photographic Experts Group JRE — Java Runtime Environment JS — JavaScriptJSON — JavaScript Object NotationJSP — Jackson Structured Programming JSP — JavaServer PagesJTAG — Joint Test Action Group JUG — Java Users Group JVM — Java Virtual Machine jwz — Jamie ZawinskiKK&R — Kernighan and Ritchie KB — KeyboardKb — KilobitKB — KilobyteKB — Knowledge BaseKDE — K Desktop Environment kHz — KilohertzKISS — Keep It Simple, Stupid KVM — Keyboard, Video, Mouse LL10N — LocalizationL2TP — Layer 2 Tunneling Protocol LAMP — Linux Apache MySQL Perl LAMP — Linux Apache MySQL PHP LAMP — Linux Apache MySQL Python LAN —Local Area Network LBA — Logical Block Addressing LCD — Liquid Crystal Display LCOS — Liquid Crystal On Silicon LDAP — Lightweight Directory Access ProtocolLE — Logical ExtentsLED — Light-Emitting Diode LF — Line FeedLF — Low FrequencyLFS — Linux From Scratch lib — libraryLIF — Low Insertion Force LIFO — Last In First Out LILO — Linux LoaderLKML — Linux Kernel Mailing List LM — Lan ManagerLGPL — Lesser General Public License LOC — Lines of CodeLPI — Linux Professional Institute LPT — Line Print Terminal LSB — Least Significant Bit LSB — Linux Standard Base LSI — Large-Scale IntegrationLTL — Linear Temporal Logic LTR — Left-to-RightLUG — Linux User Group LUN — Logical Unit Number LV — Logical VolumeLVD — Low Voltage Differential LVM — Logical Volume Management LZW — Lempel-Ziv-Welch MMAC — Mandatory Access Control MAC — Media Access Control MAN —Metropolitan Area Network MANET — Mobile Ad-Hoc Network MAPI —Messaging Application Programming InterfaceMb — MegabitMB — MegabyteMBCS — Multi Byte Character Set MBR — Master Boot RecordMCA — Micro Channel Architecture MCSA — Microsoft Certified Systems AdministratorMCSD — Microsoft Certified Solution DeveloperMCSE — Microsoft Certified Systems Engineer MDA — Mail Delivery AgentMDA — Model-Driven Architecture MDA — Monochrome Display Adapter MDF — Main Distribution FrameMDI — Multiple Document Interface ME — [Windows] Millennium Edition MF — Medium FrequencyMFC — Microsoft Foundation Classes MFM — Modified Frequency Modulation MGCP — Media Gateway Control Protocol MHz — Megahertz MIB — Management Information Base MICR — Magnetic Ink Character Recognition MIDI — Musical Instrument Digital Interface MIMD —Multiple Instruction, Multiple Data MIMO — Multiple-Input Multiple-Output MIPS — Million Instructions Per Second MIPS — Microprocessor without Interlocked Pipeline StagesMIS — Management Information Systems MISD — Multiple Instruction, Single Data MIT — Massachusetts Institute of Technology MIME —Multipurpose Internet Mail ExtensionsMMDS — Mortality Medical Data System MMI — Man Machine Interface. 
MMIO — Memory-Mapped I/OMMORPG — Massively Multiplayer Online Role-Playing GameMMU — Memory Management Unit MMX — Multi-Media Extensions MNG —Multiple-image Network Graphics MoBo — MotherboardMOM — Message-Oriented Middleware MOO — MUD Object OrientedMOSFET — Metal-Oxide Semiconductor FET MOTD — Message Of The Day MPAA — Motion Picture Association of America MPEG — Motion Pictures Experts Group MPL — Mozilla Public License MPLS —Multiprotocol Label Switching MPU — Microprocessor Unit MS — Memory StickMS — MicrosoftMSB — Most Significant Bit MS-DOS — Microsoft DOSMT — Machine TranslationMTA — Mail Transfer AgentMTU — Maximum Transmission Unit MSA — Mail Submission Agent MSDN — Microsoft Developer Network MSI — Medium-Scale Integration MSI — Microsoft InstallerMUA — Mail User AgentMUD — Multi-User DungeonMVC — Model-View-ControllerMVP — Most Valuable Professional MVS — Multiple Virtual Storage MX — Mail exchangeMXF — Material Exchange Format NNACK — Negative ACKnowledgement NAK — Negative AcKnowledge Character NAS — Network-Attached Storage NAT — Network Address Translation NCP — NetWare Core ProtocolNCQ — Native Command Queuing NCSA — National Center for Supercomputing ApplicationsNDPS — Novell Distributed Print Services NDS — Novell Directory Services NEP — Network Equipment Provider NEXT — Near-End CrossTalk NFA — Nondeterministic Finite Automaton GNSCB — Next-Generation Secure Computing BaseNFS — Network File SystemNI — National InstrumentsNIC — Network Interface Controller NIM — No Internal Message NIO — New I/ONIST — National Institute of Standards and TechnologyNLP — Natural Language Processing NLS — Native Language Support NP — Non-Deterministic Polynomial-TimeNPL — Netscape Public License NPU — Network Processing Unit NS —NetscapeNSA — National Security Agency NSPR — Netscape Portable Runtime NMI — Non-Maskable Interrupt NNTP — Network News Transfer Protocol NOC — Network Operations Center NOP — No OPerationNOS — Network Operating System NPTL — Native POSIX Thread Library NSS — Novell Storage Service NSS — Network Security Services NSS —Name Service SwitchNT — New TechnologyNTFS — NT FilesystemNTLM — NT Lan ManagerNTP — Network Time Protocol NUMA — Non-Uniform Memory Access NURBS — Non-Uniform Rational B-Spline NVR - Network Video Recorder NVRAM — Non-Volatile Random Access Memory OOASIS — Organization for the Advancement of StructuredInformation StandardsOAT — Operational Acceptance Testing OBSAI — Open Base Station Architecture InitiativeODBC — Open Database Connectivity OEM — Original Equipment Manufacturer OES — Open Enterprise ServerOFTC — Open and Free Technology Community OLAP — Online Analytical Processing OLE — Object Linking and Embedding OLED — Organic LightEmitting Diode OLPC — One Laptop per Child OLTP — Online Transaction Processing OMG — Object Management Group OO — Object-Oriented OO — Open OfficeOOM — Out of memoryOOo — OOP — Object-Oriented Programming OPML — Outline Processor Markup Language ORB — Object Request Broker ORM — Oject-Relational Mapping OS — Open SourceOS — Operating SystemOSCON — O'Reilly Open Source Convention OSDN — Open Source Developer Network OSI — Open Source Initiative OSI — Open Systems Interconnection OSPF — Open Shortest Path First OSS — Open Sound SystemOSS — Open-Source SoftwareOSS — Operations Support System OSTG — Open Source Technology Group OUI — Organizationally Unique Identifier PP2P — Peer-To-PeerPAN — Personal Area Network PAP — Password Authentication Protocol PARC — Palo Alto Research Center PATA — Parallel ATAPC — Personal 
ComputerPCB — Printed Circuit BoardPCB — Process Control BlockPCI — Peripheral Component Interconnect PCIe — PCI ExpressPCL — Printer Command Language PCMCIA — Personal Computer Memory Card InternationalAssociationPCM — Pulse-Code ModulationPCRE — Perl Compatible Regular Expressions PD — Public Domain PDA — Personal Digital Assistant PDF — Portable Document Format PDP — Programmed Data Processor PE — Physical ExtentsPEBKAC — Problem Exists Between Keyboard And ChairPERL — Practical Extraction and Reporting LanguagePGA — Pin Grid ArrayPGO — Profile-Guided Optimization PGP — Pretty Good PrivacyPHP — PHP: Hypertext Preprocessor PIC — Peripheral Interface Controller PIC — Programmable Interrupt Controller PID — Proportional-Integral-Derivative PID — Process IDPIM — Personal Information Manager PINE — Program for Internet News & EmailPIO — Programmed Input/Output PKCS — Public Key Cryptography Standards PKI — Public Key Infrastructure PLC — Power Line Communication PLC — Programmable Logic Controller PLD — Programmable Logic Device PL/I — Programming Language One PL/M — Programming Language for MicrocomputersPL/P — Programming Language for Prime PLT — Power Line Telecoms PMM — POST Memory ManagerPNG — Portable Network Graphics PnP — Plug-and-PlayPoE — Power over EthernetPOP — Point of PresencePOP3 — Post Office Protocol v3 POSIX — Portable Operating System Interface POST — Power-On Self TestPPC — PowerPCPPI — Pixels Per InchPPP — Point-to-Point Protocol PPPoA — PPP over ATMPPPoE — PPP over EthernetPPTP — Point-to-Point Tunneling Protocol PS — PostScriptPS/2 — Personal System/2PSU — Power Supply UnitPSVI — Post-Schema-Validation Infoset PV — Physical VolumePVG — Physical Volume GroupPVR — Personal Video RecorderPXE — Preboot Execution Environment PXI — PCI eXtensions for Instrumentation QQDR — Quad Data RateQA — Quality AssuranceQFP — Quad Flat PackageQoS — Quality of ServiceQOTD — Quote of the DayQt — Quasar ToolkitQTAM — Queued Teleprocessing Access Method RRACF — Resource Access Control Facility RAD — Rapid Application Development RADIUS — Remote Authentication Dial In User Service RAID — Redundant Array of Independent Disks RAID — Redundant Array of Inexpensive Disks RAIT — Redundant Array of Inexpensive Tapes RAM —Random Access MemoryRARP — Reverse Address Resolution Protocol RAS — Remote Access ServiceRC — Region CodeRC — Release CandidateRC — Run CommandsRCS — Revision Control SystemRDBMS — Relational Database Management SystemRDF — Resource Description Framework RDM — Relational Data Model RDS — Remote Data ServicesREFAL — REcursive Functions Algorithmic LanguageREST — Representational State Transfer regex — Regular Expression regexp — Regular Expression RF — Radio FrequencyRFC — Request For CommentsRFI — Radio Frequency Interference RFID — Radio Frequency Identification RGB — Red, Green, BlueRGBA — Red, Green, Blue, Alpha RHL — Red Hat LinuxRHEL — Red Hat Enterprise Linux RIA — Rich Internet Application RIAA — Recording Industry Association of AmericaRIP — Raster Image Processor RIP — Routing Information Protocol RISC — Reduced Instruction Set Computer RLE — Run-Length Encoding RLL — Run-Length LimitedRMI — Remote Method Invocation RMS — Richard Matthew Stallman ROM — Read Only MemoryROMB — Read-Out Motherboard RPC — Remote Procedure Call RPG —Report Program Generator RPM — RPM Package ManagerRSA — Rivest Shamir Adleman RSI — Repetitive Strain Injury RSS —Rich Site Summary, RDF Site Summary, or Really SimpleSyndicationRTC — Real-Time ClockRTE — Real-Time EnterpriseRTL — 
Right-to-LeftRTOS — Real Time Operating System RTP — Real-time Transport Protocol RTS — Ready To SendRTSP — Real Time Streaming Protocol SSaaS — Software as a Service SAN — Storage Area NetworkSAR — Search And Replace[1]SATA — Serial ATASAX — Simple API for XMLSBOD — Spinning Beachball of Death SBP-2 — Serial Bus Protocol 2 sbin — superuser binarySBU — Standard Build UnitSCADA — Supervisory Control And Data AcquisitionSCID — Source Code in Database SCM — Software Configuration Management SCM — Source Code Management SCP — Secure Copy SCPI — Standard Commands for Programmable Instrumentation SCSI — Small Computer System Interface SCTP — Stream Control Transmission Protocol SD — Secure DigitalSDDL — Security Descriptor Definition LanguageSDI — Single Document InterfaceSDIO — Secure Digital Input OutputSDK — Software Development KitSDL — Simple DirectMedia LayerSDN — Service Delivery NetworkSDP — Session Description ProtocolSDR — Software-Defined RadioSDRAM — Synchronous Dynamic Random Access MemorySDSL — Symmetric DSLSE — Single EndedSEAL — Semantics-directed Environment Adaptation Language SEI — Software Engineering InstituteSEO — Search Engine OptimizationSFTP — Secure FTPSFTP — Simple File Transfer ProtocolSFTP — SSH File Transfer ProtocolSGI — Silicon Graphics, IncorporatedSGML — Standard Generalized Markup LanguageSHA — Secure Hash AlgorithmSHDSL — Single-pair High-speed Digital Subscriber LineSIGCAT — Special Interest Group on CD-ROM Applications andTechnologySIGGRAPH — Special Interest Group on GraphicsSIMD — Single Instruction, Multiple DataSIMM — Single Inline Memory ModuleSIP — Session Initiation ProtocolSIP — Supplementary Ideographic PlaneSISD — Single Instruction, Single Data SLED — SUSE LinuxEnterprise Desktop SLES — SUSE Linux Enterprise Server SLI — Scalable Link Interface SLIP — Serial Line Internet Protocol SLM — Service Level Management SLOC — Source Lines of Code SPMD — Single Program, Multiple Data SMA — SubMiniature version A SMB — Server Message Block SMBIOS — System Management BIOS SMIL — Synchronized Multimedia Integration LanguageS/MIME — Secure/Multipurpose Internet Mail ExtensionsSMP — Supplementary Multilingual Plane SMP — Symmetric Multi-Processing SMS — Short Message Service SMS — System Management Server SMT — Simultaneous Multithreading SMTP — Simple Mail Transfer Protocol SNA — Systems Network Architecture SNMP — Simple Network Management Protocol SOA — Service-Oriented Architecture SOE — Standard Operating Environment SOAP — Simple Object Access Protocol SoC — System-on-a-ChipSO-DIMM — Small Outline DIMM SOHO — Small Office/Home OfficeSOI — Silicon On InsulatorSP — Service PackSPA — Single Page Application SPF — Sender Policy Framework SPI —Serial Peripheral Interface SPI — Stateful Packet Inspection SPARC —Scalable Processor Architecture SQL — Structured Query Language SRAM —Static Random Access Memory SSD — Software Specification Document SSD - Solid-State DriveSSE — Streaming SIMD Extensions SSH — Secure ShellSSI — Server Side Includes SSI — Single-System Image SSI — Small-Scale Integration SSID — Service Set Identifier SSL — Secure Socket Layer SSP — Supplementary Special-purpose Plane SSSE — Supplementary Streaming SIMD Extensionssu — superuserSUS — Single UNIX Specification SUSE — Software und System-Entwicklung SVC — Scalable Video Coding SVG — Scalable Vector Graphics SVGA — Super Video Graphics Array SVD — Structured VLSI Design SWF —Shock Wave FlashSWT — Standard Widget Toolkit Sysop — System operatorTTAO — Track-At-OnceTB — TerabyteTcl — Tool 
Command Language TCP — Transmission Control Protocol TCP/IP — Transmission Control Protocol/Internet ProtocolTCU — Telecommunication Control Unit TDMA — Time Division Multiple Access TFT — Thin Film Transistor TI — Texas Instruments TLA — Three-Letter Acronym TLD — Top-Level DomainTLS — Thread-Local Storage TLS — Transport Layer Security tmp —temporaryTNC — Terminal Node Controller TNC — Threaded Neill-Concelman connector TSO — Time Sharing OptionTSP — Traveling Salesman Problem TSR — Terminate and Stay Resident TTA — True Tap AudioTTF — TrueType FontTTL — Transistor-Transistor Logic TTL — Time To LiveTTS — Text-to-SpeechTTY — TeletypeTUCOWS — The Ultimate Collection of Winsock SoftwareTUG — TeX Users GroupTWAIN - Technology Without An Interesting NameUUAAG — User Agent Accessibility Guidelines UAC — User Account Control UART — Universal Asynchronous Receiver/Transmitter UAT — User Acceptance Testing UCS — Universal Character SetUDDI — Universal Description, Discovery, and Integration UDMA — Ultra DMAUDP — User Datagram Protocol UE — User ExperienceUEFI — Unified Extensible Firmware Interface UHF — Ultra High Frequency UI — User InterfaceUL — UploadULA — Uncommitted Logic Array UMA — Upper Memory AreaUMB — Upper Memory BlockUML — Unified Modeling Language UML — User-Mode LinuxUMPC — Ultra-Mobile Personal Computer UNC — Universal Naming Convention UPS — Uninterruptible Power Supply URI — Uniform Resource Identifier URL — Uniform Resource Locator URN — Uniform Resource Name USB — Universal Serial Bus usr — userUSR — U.S. RoboticsUTC — Coordinated Universal Time UTF — Unicode Transformation FormatUTP — Unshielded Twisted Pair UUCP — Unix to Unix CopyUUID — Universally Unique Identifier UVC — Universal Virtual Computer Vvar — variableVAX — Virtual Address eXtension VCPI — Virtual Control Program Interface VR — Virtual RealityVRML — Virtual Reality Modeling Language VB — Visual BasicVBA — Visual Basic for Applications VBS — Visual Basic Script VDSL — Very High Bitrate Digital Subscriber LineVESA — Video Electronics Standards AssociationVFAT — Virtual FATVFS — Virtual File SystemVG — Volume GroupVGA — Video Graphics ArrayVHF — Very High FrequencyVLAN — Virtual Local Area Network VLSM — Variable Length Subnet Mask VLB — Vesa Local BusVLF — Very Low FrequencyVLIW - Very Long Instruction Word— uinvac VLSI — Very-Large-Scale Integration VM — Virtual MachineVM — Virtual MemoryVOD — Video On DemandVoIP — Voice over Internet Protocol VPN — Virtual Private Network VPU — Visual Processing Unit VSAM — Virtual Storage Access Method VSAT — Very Small Aperture Terminal VT — Video Terminal?VTAM — Virtual Telecommunications Access MethodWW3C — World Wide Web Consortium WAFS — Wide Area File ServicesWAI — Web Accessibility Initiative WAIS — Wide Area Information Server WAN — Wide Area NetworkWAP — Wireless Access Point WAP — Wireless Application Protocol WAV — WAVEform audio format WBEM — Web-Based Enterprise Management WCAG — Web Content Accessibility Guidelines WCF — Windows Communication Foundation WDM — Wavelength-Division Multiplexing WebDAV — WWW Distributed Authoring and VersioningWEP — Wired Equivalent Privacy Wi-Fi — Wireless FidelityWiMAX — Worldwide Interoperability for Microwave AccessWinFS — Windows Future Storage WINS- Windows Internet Name Service WLAN — Wireless Local Area Network WMA — Windows Media Audio WMV — Windows Media VideoWOL — Wake-on-LANWOM — Wake-on-ModemWOR — Wake-on-RingWPA — Wi-Fi Protected Access WPAN — Wireless Personal Area Network WPF — Windows Presentation Foundation WSDL — 
Web Services Description Language WSFL — Web Services Flow Language WUSB — Wireless Universal Serial Bus WWAN — Wireless Wide Area Network WWID — World Wide Identifier WWN — World Wide NameWWW — World Wide WebWYSIWYG — What You See Is What You Get WZC — Wireless Zero Configuration WFI — Wait For InterruptXXAG — XML Accessibility Guidelines XAML — eXtensible Application Markup LanguageXDM — X Window Display Manager XDMCP — X Display Manager Control Protocol XCBL — XML Common Business Library XHTML — eXtensible Hypertext Markup Language XILP — X Interactive ListProc XML —eXtensible Markup Language XMMS — X Multimedia SystemXMPP — eXtensible Messaging and Presence ProtocolXMS — Extended Memory SpecificationXNS — Xerox Network Systems XP — Cross-PlatformXP — Extreme ProgrammingXPCOM — Cross Platform Component Object ModelXPI — XPInstallXPIDL — Cross-Platform IDLXSD — XML Schema Definition XSL — eXtensible Stylesheet Language XSL-FO — eXtensible Stylesheet Language Formatting Objects XSLT — eXtensible Stylesheet Language TransformationsXSS — Cross-Site ScriptingXTF — eXtensible Tag Framework XTF — eXtended Triton Format XUL —XML User Interface Language YY2K — Year Two ThousandYACC — Yet Another Compiler Compiler YAML — YAML Ain't Markup Language YAST — Yet Another Setup Tool ZZCAV — Zone Constant Angular Velocity ZCS — Zero Code Suppression ZIF — Zero Insertion ForceZIFS — Zero Insertion Force Socket ZISC — Zero Instruction Set Computer ZOPE — Z Object Publishing Environment ZMA — Zone Multicast Address。
Original English text:
The Java programming language and platform have emerged as major technologies for performing e-business functions. Java programming standards have enabled portability of applications and the reuse of application components across computing platforms. Sun Microsystems' Java Community Process continues to be a strong base for the growth of the Java infrastructure and language standards. This growth of open standards creates new opportunities for designers and developers of applications and services.
Applications of Java
Java uses many familiar programming concepts and constructs and allows portability by providing a common interface through an external Java Virtual Machine (JVM). A virtual machine is a self-contained operating environment, created by a software layer that behaves as if it were a separate computer. Benefits of creating virtual machines include better exploitation of powerful computing resources and isolation of applications to prevent cross-corruption and improve security.
The JVM allows computing devices with limited processors or memory to handle more advanced applications by calling up software instructions inside the JVM to perform most of the work. This also reduces the size and complexity of Java applications because many of the core functions and processing instructions are built into the JVM. As a result, software developers no longer need to re-create the same application for every operating system. Java also provides security by instructing the application to interact with the virtual machine, which serves as a barrier between applications and the core system, effectively protecting systems from malicious code.
Among other things, Java is tailor-made for the growing Internet because it makes it easy to develop new, dynamic applications that can make the most of the Internet's power and capabilities. Java is now an open standard, meaning that no single entity controls its development and the tools for writing programs in the language are available to everyone. The power of open standards like Java is the ability to break down barriers and speed up progress.
Today, you can find Java technology in networks and devices that range from the Internet and scientific supercomputers to laptops and cell phones, from Wall Street market simulators to home game players and credit cards. There are over 3 million Java developers and now there are several versions of the code. Most large corporations have in-house Java developers. In addition, the majority of key software vendors use Java in their commercial applications (Lazaridis, 2003).
Applications
Java on the World Wide Web
Java has found a place on some of the most popular websites in the world, and the uses of Java continue to grow. Java applications not only provide unique user interfaces, they also help to power the backend of websites. Everybody is probably familiar with eBay and Amazon, which have been Java pioneers on the World Wide Web.
eBay
Founded in 1995, eBay enables e-commerce on a local, national and international basis with an array of Web sites. You can find it on eBay, even if you didn't know it existed. On a typical day, more than 100 million items are listed in tens of thousands of categories on eBay, the world's largest online marketplace. eBay uses Java almost everywhere. To address some security issues, eBay chose Sun Microsystems' Java System Identity Manager as the platform for revamping its identity management system.
The task at hand was to provide identity management for more than 12,000 eBay employees and contractors.Now more than a thousand eBay software developers work daily with Java applications. Java's inherent portability allows eBay to move to new hardware to take advantage of new technology, packaging, or pricing, without having to rewrite Java code.Amazon has created a Web Service application that enables users to browse their product catalog and place orders. uses a Java application that searches the Amazon catalog for books whose subject matches a user-selected topic. The application displays ten books that match the chosen topic, and shows the author name, book title, list price, Amazon discount price, and the cover icon. The user may optionally view one review per displayed title and make a buying decision.Java in Data Warehousing & MiningAlthough many companies currently benefit from data warehousing to support corporatedecision making, new business intelligence approaches continue to emerge that can be powered by Java technology. Applications such as data warehousing, data mining, Enterprise Information Portals and Knowledge Management Systems are able to provide insight into customer retention, purchasing patterns, and even future buying behavior.These applications can not only tell what has happened but why and what may happen given certain business conditions; As a result of this information growth, people at all levels inside the enterprise, as well as suppliers, customers, and others in the value chain, are clamoring for subsets of the vast stores of information to help them make business decisions. While collecting and storing vast amounts of data is one thing, utilizing and deploying that data throughout the organization is another.The technical challenges inherent in integrating disparate data formats, platforms, and applications are significant. However, emerging standards such as the Application Programming Interfaces that comprise the Java platform, as well as Extendable Markup Language technologies can facilitate the interchange of data and the development of next generation data warehousing and business intelligence applications. While Java technology has been used extensively for client side access and to presentation layer challenges, it is rapidly emerging as a significant tool for developing scaleable server side programs. The Java2 Platform, Enterprise Edition (J2EE) provides the object, transaction, and security support for building such systems.Metadata IssuesOne of the key issues that business intelligence developers must solve is that of incompatible metadata formats. Metadata can be defined as information about data or simply "data about data." In practice, metadata is what most tools, databases, applications, and other information processes use to define, relate, and manipulate data objects within their own environments. It defines the structure and meaning of data objects managed by an application so that the application knows how to process requests or jobs involving those data objects. Developers can use this schema to create views for users. Also, users can browse the schema to better understand the structure and function of the database tables before launching a query.To address the metadata issue, a group of companies have joined to develop the Java Metadata Interface (JMI) API. The JMI API permits the access and manipulation of metadata in Java with standard metadata services. 
JMI is based on the Meta Object Facility (MOF) specification from the Object Management Group (OMG). The MOF provides a model and a set of interfaces for the creation, storage, access, and interchange of metadata and metamodels. Metamodel and metadata interchange is done via XML and uses the XML Metadata Interchange (XMI) specification, also from the OMG. JMI leverages Java technology to create an end-to-end data warehousing and business intelligence solutions framework.
Enterprise JavaBeans
A key tool provided by J2EE is Enterprise JavaBeans (EJB), an architecture for the development of component-based distributed business applications. Applications written using the EJB architecture are scalable, transactional, secure, and multi-user aware. These applications may be written once and then deployed on any server platform that supports J2EE. The EJB architecture makes it easy for developers to write components, since they do not need to understand or deal with complex, system-level details such as thread management, resource pooling, and transaction and security management. This allows for role-based development where component assemblers, platform providers and application assemblers can focus on their area of responsibility, further simplifying application development.
Data Storage & Access
Data stored in existing applications can be accessed with specialized connectors. Integration and interoperability of these data sources is further enabled by the metadata repository that contains metamodels of the data contained in the sources, which then can be accessed and interchanged uniformly via the JMI API. These metamodels capture the essential structure and semantics of business components, allowing them to be accessed and queried via the JMI API or to be interchanged via XML. Through all of these processes, the J2EE infrastructure ensures the security and integrity of the data through transaction management and propagation and the underlying security architecture.
To consolidate historical information for analysis of sales and marketing trends, a data warehouse is often the best solution. In this example, data can be extracted from the operational systems with a variety of Extract, Transform and Load tools (ETL). The metamodels allow EJBs designed for filtering, transformation, and consolidation of data to operate uniformly on data from diverse data sources, as the bean is able to query the metamodel to identify and extract the pertinent fields. Queries and reports can be run against the data warehouse that contains information from numerous sources in a consistent, enterprise-wide fashion through the use of the JMI API.
Java in Industrial Settings
Many people know Java only as a tool on the World Wide Web that enables sites to perform some of their fancier functions such as interactivity and animation. However, the actual uses for Java are much more widespread. Since Java is an object-oriented language, the time needed for application development is minimal. In addition, Java's automatic memory management and lack of pointers remove some leading causes of programming errors. Most importantly, application developers do not need to create different versions of the software for different platforms. The advantages available through Java have even found their way into hardware.
The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.
Benefits of Java
The benefits of Java translate across many industries, and some are specific to the control and automation environment. Java's ability to run on any platform enables the organization to make use of the existing equipment while enhancing the application.
Integration
With few exceptions, applications running on the factory floor were never intended to exchange information with systems in the executive office, but managers have recently discovered the need for that type of information. Before Java, that often meant bringing together data from systems written on different platforms in different languages at different times. Integration was usually done on a piecemeal basis; once it worked, it was unique to the two applications it was tying together. Additional integration required developing a brand new system from scratch, raising the cost of integration.
Scalability
Another benefit of Java in the industrial environment is its scalability. Even when internal compatibility is not an issue, companies often face difficulties when suppliers with whom they share information have incompatible systems. This becomes more of a problem as supply-chain management takes on a more critical role, which requires manufacturers to interact more with offshore suppliers and clients. The greatest efficiency comes when all systems can communicate with each other and share information seamlessly. Since Java is so ubiquitous, it often solves these problems.
Dynamic Web Page Development
Java has been used by both large and small organizations for a wide variety of applications beyond consumer-oriented websites. Sandia, a multiprogram laboratory of the U.S. Department of Energy's National Nuclear Security Administration, has developed a unique Java application. The lab was tasked with developing an enterprise-wide inventory tracking and equipment maintenance system that provides dynamic Web pages.
Conclusion
Open standards have driven the e-business revolution. As e-business continues to develop, various computing technologies help to drive its evolution. The Java programming language and platform have emerged as major technologies for performing e-business functions. Since Java is an object-oriented language, the time needed for application development is minimal. Java also encourages good software engineering practices with clear separation of interfaces and implementations as well as easy exception handling. Java's automatic memory management and lack of pointers remove some leading causes of programming errors. The advantages available through Java have also found their way into hardware. The emerging new Java devices are streamlined systems that exploit network servers for much of their processing power, storage, content, and administration.
第1篇It is with great pleasure and a sense of accomplishment that I stand before you today to share my reflections and insights from the National Professional Development Program for English Teachers (NPDEP). This program has been an extraordinary journey, filled with learning, growth, and transformation. As we come to the end of this enriching experience, I feel compelled to summarize my key takeaways and express my gratitude to all those who have contributed to this remarkable event.First and foremost, I would like to extend my heartfelt thanks to the organizers of the NPDEP for creating such a valuable opportunity for English teachers across the nation. The meticulous planning, diverse range of workshops, and expert facilitators have all played a crucial role in enhancing our professional skills and knowledge.Reflections on the Program1. Pedagogical Innovations: The program has exposed me to a variety of pedagogical approaches and techniques that are not only effective but also engaging for students. From project-based learning to flipped classrooms, I have learned that there is no one-size-fits-all method when it comes to teaching English. The key is to adapt and integrate different strategies based on the needs and learning styles of our students.2. Technology Integration: In today's digital age, technology has become an indispensable tool in the classroom. The NPDEP has provided me with the skills and confidence to effectively integrate technology into my teaching practice. From using educational apps and online resources to creating interactive presentations, I am now better equipped to engage students and enhance their learning experiences.3. Cultural Competence: As English teachers, it is our responsibility to foster cultural competence in our students. The program has emphasized the importance of understanding and appreciating diverse cultures. Through various workshops and discussions, I have gained a deeperinsight into the cultural contexts of our students, which will undoubtedly enrich my teaching and make it more inclusive.4. Professional Development: The NPDEP has not only focused on enhancing our teaching skills but also on our personal and professional growth. The emphasis on self-reflection, goal setting, and continuous improvement has been invaluable. I have learned the importance of seeking feedback, embracing challenges, and staying committed tolifelong learning.5. Collaboration and Networking: One of the most significant aspects of the NPDEP has been the opportunity to collaborate and network withfellow English teachers from different regions and backgrounds. This exchange of ideas and experiences has been incredibly enriching and has broadened my perspective on teaching and learning.Personal GrowthDuring the program, I have encountered numerous moments of personal growth. I have learned to be more patient and understanding, to embrace diversity, and to appreciate the unique strengths and challenges of each student. I have also become more reflective, constantly questioning my teaching practices and seeking ways to improve.Gratitude to Participants and FacilitatorsI would like to express my sincere gratitude to all the participants who have made this program a success. Your dedication, enthusiasm, and willingness to learn have created a supportive and collaborative environment. I would also like to extend my appreciation to the facilitators and experts who have shared their knowledge and expertise. 
Your passion for teaching and commitment to professional development have inspired us all.Looking AheadAs we move forward, I am confident that the skills, knowledge, and insights gained from the NPDEP will have a lasting impact on my teaching career. I am committed to implementing the new strategies and approachesI have learned, and to continue seeking opportunities for growth and improvement.In conclusion, the National Professional Development Program for English Teachers has been a transformative experience. It has equipped me with the tools and confidence to be a more effective and engaging teacher. I am grateful for the opportunity to have participated in this program and for the support and guidance provided by all involved. As we embark on our journey back to our classrooms, let us carry with us the spirit of collaboration, innovation, and continuous improvement.Thank you for your attention, and may we all continue to make a positive difference in the lives of our students.Thank you.第2篇It is with great pleasure and a sense of accomplishment that I stand before you today to deliver my summary speech on the National Training Program for English Teachers (National English Teacher Training Program). This program has been an incredible journey of learning, growth, and reflection, and I am honored to share my experiences and insights with you all.First and foremost, I would like to extend my heartfelt gratitude to the organizers of this program for providing us with such a valuable opportunity. The National English Teacher Training Program has been a remarkable platform for English teachers across the country to come together, share ideas, and learn from each other's expertise. It has been a testament to the commitment of our government to enhance the quality of English education in our nation.During the past few weeks, we have been immersed in a rich tapestry of workshops, seminars, and interactive sessions. Each day has been filled with new knowledge, innovative teaching methods, and practicalstrategies that we can implement in our classrooms. I believe that this program has not only enriched our teaching skills but also broadened our perspectives on the English language and its role in our society.One of the highlights of this training program has been the emphasis on the importance of language proficiency. As English teachers, it is our responsibility to ensure that our students are not only proficient in the language but also confident communicators. The program has equipped us with various techniques to enhance our own language skills, which in turn, enables us to better cater to the needs of our students.One such technique that I found particularly insightful was the use of authentic materials in our teaching. Authentic materials, such as newspapers, magazines, and online resources, provide real-life contexts that help students connect with the language and understand its cultural nuances. By incorporating these materials into our lessons, we can create a more engaging and relevant learning experience for our students.Another crucial aspect of the training program was the focus on classroom management and student engagement. Effective classroom management is essential for creating a conducive learning environment where students feel safe, motivated, and empowered to participate actively. 
The program has provided us with practical tools andstrategies to manage our classrooms efficiently, including techniquesfor dealing with disruptive behavior, fostering collaboration, and promoting critical thinking.Furthermore, the program has emphasized the importance of technology integration in English language teaching. In today's digital age, it is imperative for us to harness the power of technology to enhance our teaching methods and make learning more interactive and accessible. We have been introduced to a variety of educational tools and platformsthat can help us create engaging lessons, track student progress, and provide personalized feedback.One of the most impactful sessions of the program was the workshop on assessment and evaluation. Assessing student learning is a critical component of teaching, and the program has provided us with a comprehensive framework for designing effective assessments. We have learned about various assessment methods, such as formative and summative assessments, and how to use them to inform our teaching practices and make informed decisions about our students' progress.Moreover, the program has emphasized the importance of professional development and continuous learning. As educators, we must remain committed to our own growth and stay updated with the latest trends and research in the field of English language teaching. The program has encouraged us to engage in reflective practice, where we can critically analyze our teaching methods and seek opportunities for improvement.In conclusion, the National English Teacher Training Program has been an invaluable experience that has transformed the way I approach my teaching profession. The knowledge, skills, and strategies that I have acquired during this program will undoubtedly enhance the quality of education I provide to my students.As we come to the end of this training, I would like to share a few thoughts with my fellow colleagues:1. Embrace the spirit of continuous learning and professional development. The field of English language teaching is ever-evolving, and we must stay committed to our own growth.2. Collaborate and share ideas with your peers. The power of collective wisdom cannot be underestimated. By working together, we can create a stronger and more effective English teaching community.3. Be patient and supportive of your students. Learning a new languageis a challenging process, and it is our role to provide a supportive environment where students can thrive.4. Be innovative and creative in your teaching methods. Engage your students with a variety of activities and resources that cater to their diverse learning styles.5. Finally, never lose sight of the passion that brought you into this profession. Teaching is a noble calling, and it is our responsibility to make a positive impact on the lives of our students.In closing, I would like to express my deepest gratitude to the organizers of this program, my fellow participants, and the entire English teaching community for this enriching experience. Let us carryforward the knowledge and skills we have gained here, and together, let us strive to create a brighter future for English education in our nation.Thank you.第3篇Good morning! It is with great pleasure and a sense of accomplishmentthat I stand before you today to share my reflections and insights from the National Training Program for English Teachers, commonly known as "Guo Pei Training." 
This program has been an invaluable experience that has not only enriched my professional knowledge but also transformed my teaching approach. I am honored to have the opportunity to summarize my journey and share some key takeaways with you all.Firstly, let me express my sincere gratitude to the organizers of this program for creating such a comprehensive and well-structured curriculum. The Guo Pei Training has been a blend of theoretical knowledge,practical skills, and interactive sessions that have catered to the diverse needs of English teachers across the nation. It has been an incredible platform for us to learn from each other, share our experiences, and grow together.Theoretical FoundationsThe program began with a strong emphasis on the theoretical foundations of English language teaching. We were introduced to various educational theories, methodologies, and pedagogical approaches that have shaped the field of English language education. This included insights from experts like Bloom's Taxonomy, Vygotsky's Zone of Proximal Development, and Krashen's Input Hypothesis. These theories have provided me with a deeper understanding of how language acquisition occurs and how todesign effective lesson plans that cater to the needs of my students.One particularly impactful session was the discussion on the importance of student-centered learning. The emphasis on creating a supportive and engaging classroom environment has inspired me to rethink my teaching strategies. I have learned to focus more on encouraging studentparticipation, fostering critical thinking, and promoting independencein learning. This shift in perspective has already shown positiveresults in my classroom, as students are now more motivated and actively involved in the learning process.Practical SkillsThe Guo Pei Training has also been instrumental in enhancing mypractical skills as an English teacher. We were fortunate to have sessions on classroom management, assessment techniques, and the use of technology in teaching. These workshops have equipped me with the tools and techniques necessary to manage a diverse classroom, assess student progress effectively, and integrate technology seamlessly into my lessons.One of the highlights was the hands-on session on using English language corpora and dictionaries. I have always been passionate about vocabulary development, and this session provided me with practical strategies to help my students expand their lexical repertoire. I have started incorporating corpus-based vocabulary exercises into my lessons, and the results have been impressive. Students are now more confident in using new words in context, and their overall language proficiency has improved.Collaboration and NetworkingAnother significant aspect of the Guo Pei Training was the opportunity to collaborate and network with fellow English teachers from different regions and backgrounds. These interactions have been incredibly enriching, as we shared our challenges, successes, and innovative teaching practices. The sense of community that has developed among us has been a source of immense support and inspiration.During our group activities, we were tasked with designing a unit plan based on a real-world scenario. This collaborative exercise not only tested our individual skills but also highlighted the importance of teamwork in the teaching profession. 
The feedback and suggestions frommy colleagues have been invaluable, and I am confident that I can apply these learnings to my future teaching endeavors.Personal GrowthLastly, the Guo Pei Training has had a profound impact on my personal growth as an educator. It has instilled in me a renewed sense of commitment to my profession and a passion for continuous learning. The program has not only equipped me with new knowledge and skills but has also challenged me to reflect on my own teaching practices and strivefor excellence.In conclusion, the National Training Program for English Teachers has been a transformative experience that has significantly enhanced my professional development. The knowledge, skills, and experiences gained from this program will undoubtedly shape my teaching practice in the years to come.As I bid farewell to this esteemed group of educators, I would like to leave you with a few words of advice:1. Embrace continuous learning: The field of English language teaching is ever-evolving, and it is crucial to stay updated with the latest trends and research.2. Collaborate and share: Never underestimate the power of collaboration and networking. Reach out to your colleagues and share your experiences and insights.3. Reflect and adapt: Regularly reflect on your teaching practices and be willing to adapt and evolve as an educator.Thank you once again to the organizers of the Guo Pei Training for this incredible opportunity. I am grateful for the knowledge, skills, and friendships I have gained during this program. Let us continue to strive for excellence in our teaching endeavors and make a positive impact on the lives of our students.Thank you.。
Chapter 1  Introduction
1.1 XML and Schemas
1.1.1 Introduction to XML
1. XML declarations
2. Elements
3. Attributes
4. Processing instructions
5. Comments
6. Namespaces
Figure 1-1  An example XML document
1.1.2 Introduction to DTDs
Figure 1-2  An example XML DTD
1.1.3 Introduction to XML Schema
Figure 1-3  An example XML Schema
1. Elements and attributes
2. Data types
3. Anonymous vs. named
4. Global vs. local
5. Instances vs. schemas
1.2 The XPath Query Language
1.2.1 Introduction to XPath
1.2.2 The data model
1. Document nodes
2. Element nodes
3. Attribute nodes
4. Namespace nodes
5. Processing-instruction nodes
6. Comment nodes
7. Text nodes
Figure 1-4  An example query data model
1.2.3 Location paths and location steps
1. XPath axes
2. Node tests
3. Predicates
4. Abbreviated forms of location path expressions
1.2.4 Basic expressions
1.2.5 Function calls
1. Node-set functions
2. String functions
3. Boolean functions
4. Number functions
1.3 The XQuery Query Language
1.3.1 Introduction to XQuery
1.3.2 The XQuery query processing model
Figure 1-5  The XQuery query processing model
1. Generation of the data model
2. Serialization of the data model
3. Schema import
4. Static analysis
5. Dynamic evaluation phase
1.3.3 XQuery syntax and query examples
1. FLWOR expressions
2. Conditional expressions
3. Sequence expressions
4. Comparison expressions
5. Constructors
6. Quantified expressions
1.4 XML query algebra
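As a small illustration of the XPath material listed under Section 1.2, the following Java sketch evaluates a location path with a predicate using the standard javax.xml.xpath API. It is not part of the original chapter; the XML content and the query are invented for illustration only.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XPathDemo {
    public static void main(String[] args) throws Exception {
        // A tiny XML document, invented for this example.
        String xml = "<books><book year=\"2004\"><title>XML Basics</title></book>"
                   + "<book year=\"1999\"><title>XPath in Practice</title></book></books>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Location path with a predicate: titles of books published after 2000.
        NodeList titles = (NodeList) xpath.evaluate(
                "/books/book[@year > 2000]/title/text()", doc, XPathConstants.NODESET);
        for (int i = 0; i < titles.getLength(); i++) {
            System.out.println(titles.item(i).getNodeValue());
        }
    }
}

Running it prints the title of the book whose year attribute is greater than 2000, which corresponds to the axes, node tests and predicates named under 1.2.3.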
Automation systems and integration — Digital twin framework for manufacturing — Part 4: Information exchange

1 Scope
This document specifies technical requirements for information exchange between entities in the reference architecture.
Requirements for information exchange within the following networks fall within the scope of this document:
— the user network, which connects user entities and the digital twin entity;
— the service network, which connects sub-entities within the digital twin entity;
— the access network, which connects the device communication entity to the digital twin entity and user entities;
— the proximity network, which connects the device communication entity with the observable manufacturing elements.

2 Normative references
The content of the following documents, through normative references in this text, constitutes indispensable provisions of this document. For dated references, only the edition corresponding to that date applies to this document; for undated references, the latest edition of the referenced document (including any amendments) applies.
ISO 23247-1, Automation systems and integration — Digital twin framework for manufacturing — Part 1: Overview and general principles
ISO 23247-2, Automation systems and integration — Digital twin framework for manufacturing — Part 2: Reference architecture

3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO 23247-1 and the following apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online Browsing Platform
— IEC Electropedia

device communication entity
system(s) or device(s) that provide(s) device communication
EXAMPLE: A cell controller sends instructions to the equipment in a manufacturing cell and collects results from sensors on the equipment.
[SOURCE: ISO 23247-2:2021, 3.4]

digital twin entity
system(s) that provide(s) functions such as implementation, management, synchronization and simulation for digital twin models
EXAMPLE: A system that provides simulation, synchronization and data analysis for a manufacturing cell.
SINUMERIK 840D sl
2016-12-07
Upgrade instructions
SINUMERIK Operate 4.7 SP3 for PCU 50.5 / IPC
_________________________________________________________________________________________
SINUMERIK Operate 04.07.04.00 (internal version 04.07.04.00.016) for PCU 50.5 / IPC
_________________________________________________________________________________________
To install this software,
- a PCU 50.5 with previously installed PCU Base Software Win7 V10.0 (or later)
- a PCU 50.5 with previously installed PCU Base Software WinXP V1.4 HF9 (or later)
is required.

Important note for operation on PCU 50.5 Windows 7:
Before switching off the power supply of the PCU 50.5 / IPC, the system should be shut down using the Operate softkey EXIT or using the Windows shutdown function, in the same way as with a PC with Windows 7. A UPS should be used to provide protection against unexpected power failures.

The commissioning instructions for SINUMERIK Operate 4.7 SP2 and the PCU base software Win7 / WinXP, the Equipment Manual Operator Components and the SINUMERIK operating instructions Operate 4.7 SP2 apply accordingly. In addition, the general conditions published for the CNC software 4.7 SP4 must be observed.

SINUMERIK Operate 4.7 SP4 is only released for operating SINUMERIK 840D sl NCU7x0.3 with CNC software 4.7 SP4, or with HMI Pro sl V04.05.03.04 or higher for operating a SIMATIC CPU 317 / 319.

SINUMERIK Operate for PCU 50 does not include any technology cycles or measurement and ISO cycles. These are available on the CF card of the NCU, and are automatically installed when the NCU powers up - also see the notes provided in the upgrade instructions for the CNC software 4.7 SP4.

Installation
Important: A previously installed version V4.7 SP4 or higher of the SINUMERIK Operate for PCU 50 must first be uninstalled under Control Panel / Programs and Features.
To install SINUMERIK Operate for PCU 50, the user must call the Setup_HMIsl_......exe file from the Windows desktop of the PCU 50. To do this, either create a network connection to a PC using the DVD supplied or copy the software from the DVD supplied to a USB flash drive and then connect it directly to the PCU 50.
The installation directory is permanently defined as C:\Program Files (x86) for PCU 50 Windows 7 or F:\hmisl for PCU 50 WinXP.
During the installation, you will be asked about the installation type:
- Operation on a SINUMERIK NCU (default setting)
- Operation on a SIMATIC CPU 317 / 319
- Integration of .NET-Framework for OEM applications

Licensing
For operation on an NCU, SINUMERIK Operate on PCU 50 requires licensing via the SINUMERIK option 6FC5800-0AP88-0YB0.

====================================================================
Compatibility
====================================================================
SINUMERIK Operate 4.7 SP4 can be combined with 840D sl NCU 7x0.3 (B) with CNC-SW 4.7 SP4.
The help system is based on the documentation for CNC SW 4.7 SP2.
NC alarm texts are based on CNC SW 4.7 SP2.
The additional languages V4.5.2, V4.7.2.1 and V4.7.3 can be used.
New texts in SW 4.7 SP4 are displayed in English if necessary.
All drive texts (alarm and parameter texts as well as the corresponding online helps) are also displayed in English when using the additional languages V4.5.2 and V4.7.2.1.
STEP7 for SINUMERIK PCU 50 from V5.5 SP3 and higher can be linked into SINUMERIK Operate 4.7 SP4 and can then be run.

====================================================================
Notes regarding SINUMERIK Operate on PCU 50:
- The "Local drive" in SINUMERIK Operate for PCU 50 Win7 is stored on the hard disk under C:\Program Files(x86)\Siemens\MotionControl\user\sinumerik\data and is always provided without option.
- When using SIMATIC ITC panels, no touch softkey is available for the help function.
- The mold and die view cannot be used with POLY and G91 blocks.
- The mold and die view cannot be used with BSPLINE blocks.
- Only the elements 0 to max. 65534 can be displayed or changed in GUD arrays, even if the array is greater in the NC.
- File/folder names that contain a minus character are rejected both in the NC memory and on the local drive.

Restrictions with EES:
- When using a USB stick on the TCU in combination with a PCU, you currently cannot edit any files on the USB stick with active EES mode. You can only execute these files.
- If a USB stick on a TCU is accessed by several HMIs / NCUs in parallel, the components are not coordinated. This entails the danger that a program being executed is modified / destroyed by another component.
- For the front USB of the TCU in combination with

- If a specific user should be automatically logged on during installation of Operate on PCU 50 Win7, the user must have a Windows password. Otherwise the installation is aborted and error 1722 is output.
- When installing Operate on PCU 50, the entered password is not checked when autologon is set for a user.
- When executing part programs from network drives or editing files on network drives, the user must provide a stable, interference-free network connection to the network drives.
- When executing from a USB stick, it may occur that no program blocks are displayed if the USB stick was previously inserted into another USB interface.
- When terminating Operate with the Exit softkey, an internal crash may occur which is not visible on the operator interface. However, a crash log and a dump file are created in each case.
- For touch panels without a keyboard, the alarm screens of the Diagnostics area now include a vertical softkey "Cancel Alarm".
- Rsoftkey is displayed, the machine data MD 11280 $MN_WPD_INI_MODE must have the value 1.

OPC-UA:
There are two possibilities to set up a connection: a connection without security, or a connection with security mode "SignAndEncrypt". Siemens recommends that you always establish a connection with security, as only in this way can the confidentiality of the transferred data be ensured.
When restarting Operate, it may occur that the reconnection with the OPC server fails. In this case, you must restart the NCU or the PCU 50 / IPC, respectively.

Settings:
To operate a PCU 50 with an external monitor, the screen size of Operate can be set in the file C:\Program Files(x86)\Siemens\MotionControl\user\sinumerik\hmi\cfg\slrs.ini with the entry
[Global]
Resolution =        ; permissible values: 640x480, 800x600, 1024x768, 1280x1024

Subsequently enter Autostart of Operate for a (different) user on PCU 50.5-Win7:
On the desktop, start menu "All Programs", select Startup. Right-click and choose "Open". This opens an Explorer.
In the right-hand Explorer window, right-click and create "New" > "Shortcut". Then with "Browse ...", enter the file "C:\Program Files(x86)\Siemens\MotionControl\siemens\sinumerik\hmi\autostart\slstartup.exe".

Operate includes the "AppSight-Blackbox" (a tool for analyzing the causes of application crashes), which is active by default. It can be deactivated by making an appropriate entry in the file run_hmi.ini:
[BlackBox]
; enabled can be set to true or false. Default: true
enabled=true

The log file created by the AppSight Blackbox on error is called:
It is written when Operate is exited. Processes that are still running may need to be explicitly terminated. This log file must be forwarded to the SINUMERIK Hotline together with the HMI crash log.

With Operate on PCU 50-WinXP the setting of the date / time only affects the clock of the NC/PLC, not the clock of the PCU 50.

New design
The new design affects the softkeys incl. the icons on the softkeys, the appearance of the window title bars, various colors (window background colors) and the appearance / behavior of the header: the displays for the operating area and operating mode can be found on the right side of the header line and, with no alarm pending, the header only shows the Siemens logo.
The new skin can be activated with the display MD 9112 HMI_SKIN = 1. After a restart, Operate uses the new skin. For the new design, we recommend that you use a color depth of 32 bit.
With multi-touch operation (e.g. OP015Black) the functions of the user interface have also been expanded. There are six function keys above the vertical softkeys which are always visible, for the functions Undo, Redo, Open/close online help, Open/close virtual keyboard, Open/close calculator, and Create screenshot.
An Advanced Framework for Simulation, Integration and Modeling Supporting the Development of Combat Simulation

Abstract: This paper analyzes and compares the current state of development of combat simulation technology at home and abroad, and introduces the Advanced Framework for Simulation, Integration and Modeling (AFSIM), which supports the development of combat simulations. AFSIM is a software tool for simulating and analyzing the operational environment and supports evaluating the effectiveness of military strategy and tactical decisions. The software also provides a complete set of simulation environment models (including combat platform models, weapon system models, airborne sensor system models, communication system models, environmental effect models, and others), giving it the ability to set up a combat simulation environment quickly and conveniently. AFSIM can provide a new design approach and method for building high-performance combat simulation systems.

Keywords: combat simulation; Advanced Framework for Simulation, Integration and Modeling; integrated development environment; visualization tools

An advanced framework for simulation, integration and modeling that supports the development of combat simulation
Dongting Jiang, Xiaofeng Yan, Ning Li
Naval Armament Department, Chengdu, Sichuan 610000
Abstract: This paper analyzes and compares the development status of combat simulation technology at home and abroad, and proposes an Advanced Simulation, Integration and Modeling Framework (AFSIM) to support the development of operational simulation, which is a software tool for simulating and analyzing the operational environment and supporting the evaluation of the effectiveness of military strategy and tactical decision-making. At the same time, the software provides a complete simulation environment model (including combat platform model, weapon system model, airborne sensor system model, communication system model and environmental effect model, etc.), with the ability to quickly and conveniently establish a combat simulation environment. AFSIM can provide a new design idea and method for building high-performance combat simulation systems.
Keywords: Combat simulation; Advanced framework for simulation, integration and modeling; Integrated development environment; Visualization tool

1 Introduction
With the evolution of modern warfare toward greater informatization and intelligence, traditional combat simulation, which models and analyzes only a single service arm or a single platform, can simulate only individual engagements within one arm or the behavior of a single platform; it cannot simulate the cooperative, multi-platform operations of different service arms together with advanced weapons, fighter aircraft, ships and other assets in a multi-dimensional battlefield environment.
工作流参考模型英文A Reference Model for Workflow in the WorkplaceIntroductionIn today's fast-paced business environment, organizations must constantly find ways to optimize their processes and improve productivity. One way to achieve this is by implementing a robust workflow management system. A workflow is a series of activities that are performed to achieve a specific goal within an organization. It involves the movement of information, documents, or tasks from one participant to another in a predefined sequence. To help organizations understand and design effective workflow systems, this paper presents a reference model for workflow in the workplace.Definition and Components of a WorkflowA workflow consists of three main components: processes, tasks, and participants. A process is a set of activities that are organizedin a logical sequence to achieve a specific outcome. Tasks are the individual steps or activities within a process. Participants are the individuals or groups who are responsible for executing the tasks and ensuring the flow of work.The Reference ModelThe reference model for workflow in the workplace consists of four layers: the business layer, the process layer, the data layer, and the technology layer. Each layer has its own set of components and functions.1. The Business LayerThe business layer represents the overall goals and objectives of the organization. It includes components such as the organizational structure, business rules, and performance metrics. The business layer defines the context in which the workflow operates and provides a framework for aligning the workflow with the strategic goals of the organization.2. The Process LayerThe process layer defines the set of activities that need to be performed to accomplish a specific task or goal. It includes components such as process models, process definitions, and process analysis tools. The process layer provides a structured framework for organizing and managing the activities within a workflow.3. The Data LayerThe data layer represents the information that is generated, processed, and stored during the workflow. It includes components such as data models, data integration tools, and data validation mechanisms. The data layer ensures that the right information is available at the right time to support decision making and enable efficient workflow execution.4. The Technology LayerThe technology layer includes the tools and systems that are used to implement and execute the workflow. It includes components such as workflow management systems, document management systems, and collaboration tools. The technology layer provides the necessary infrastructure for automating and optimizing the workflow processes.Benefits of the Reference ModelBy using the reference model for workflow in the workplace, organizations can achieve several benefits:1. Improved Efficiency: The reference model helps streamline processes, eliminate redundancies, and minimize errors, resulting in improved efficiency and productivity.2. Enhanced Collaboration: The model facilitates effective communication and collaboration among participants, leading to better coordination and teamwork.3. Better Decision Making: The model provides real-time access to accurate and up-to-date information, enabling participants to make informed decisions quickly.4. Increased Transparency: The model promotes transparency by providing visibility into the workflow processes and progress, making it easier to identify bottlenecks and areas for improvement.5. 
Continuous Improvement: The model encourages organizations to regularly analyze and optimize their workflows, leading to a continuous improvement cycle that drives innovation and competitiveness.ConclusionA well-designed and efficient workflow is essential for organizations to stay competitive in today's dynamic business landscape. The reference model presented in this paper provides acomprehensive framework for understanding and implementing workflow in the workplace. By utilizing this model, organizations can optimize their processes, improve productivity, and achieve their strategic goals.Sure! Here are some additional points to further expand on the topic:6. Scalability: The reference model allows organizations to scale their workflows as their business grows. By designing processes that can accommodate higher volumes of tasks and participants, organizations can maintain efficiency and productivity even as they expand.7. Flexibility: The model provides flexibility in designing workflows to fit the specific needs of different departments or teams within an organization. By customizing processes and tasks based on the requirements of specific roles or projects, organizations can maximize effectiveness and adaptability.8. Compliance and Auditability: The reference model helps organizations ensure compliance with industry regulations and internal policies. By incorporating security measures, access controls, and audit trails, organizations can track and document workflow activities, demonstrating transparency and accountability.9. Integration with External Systems: The technology layer of the reference model enables seamless integration with external systems, such as customer relationship management (CRM) or enterprise resource planning (ERP) systems. Integrating workflows with these systems ensures efficient data exchange and eliminates manual data entry, reducing errors and improving overallproductivity.10. Mobile Accessibility: With the proliferation of mobile devices, it is crucial for workflows to be accessible anytime and anywhere. The reference model supports mobile access, allowing participants to view tasks, provide approvals, and receive notifications on their smartphones or tablets. This enhances productivity by enabling participants to stay connected and make decisions on the go.11. Analytics and Reporting: The data layer of the reference model enables organizations to collect and analyze workflow data, providing valuable insights for process improvement. By generating reports and visualizations, organizations can identify bottlenecks, monitor performance metrics, and make data-driven decisions to optimize workflows.12. Change Management: Implementing a new workflow system requires change management efforts to ensure a smooth transition. The reference model allows organizations to plan and execute change management strategies, involving stakeholders and providing training to ensure successful adoption and utilization of the workflow system.13. Continuous Training and Support: The technology layer of the reference model requires ongoing training and support to ensure participants understand and effectively use the workflow tools and systems. Organizations should provide comprehensive training programs, user manuals, and continuous support to help participants navigate the workflow processes and overcome any challenges.14. 
14. Integration of Artificial Intelligence and Automation: As technology advances, organizations can leverage artificial intelligence (AI) and automation to further optimize workflows. By incorporating AI algorithms and automation tools, organizations can automate repetitive tasks, make intelligent predictions, and enhance decision making within workflows.
15. Security and Data Privacy: Organizations must prioritize security and data privacy when designing and implementing workflow systems. The reference model emphasizes the need for robust security measures, including data encryption and access controls, to protect sensitive information and ensure compliance with data protection regulations.

Conclusion
A well-designed and efficient workflow is essential for organizations to stay competitive in today's dynamic business landscape. The reference model for workflow in the workplace provides a comprehensive framework for understanding and implementing workflow, helping organizations optimize their processes, improve productivity, and achieve their strategic goals. By implementing efficient workflows that align with organizational objectives, organizations can enhance collaboration, make better decisions, and continuously improve the way they work. With the right technology and support, organizations can use this reference model to drive innovation, competitiveness, and success in today's rapidly evolving business landscape.
MyBatis update and delete return values

MyBatis is a popular Java-based persistence framework that provides easy integration with databases. It simplifies interaction with the database by letting you express SQL operations through simple XML- or annotation-based configuration.

This article focuses on the return values of the update and delete operations in MyBatis. These two operations are essential in database management because they allow us to modify and remove data. Let's look at their return values and why they matter.

Update return values

When we perform an update operation with MyBatis, it returns an integer value. This integer is the number of rows affected by the update statement. For example, if the update changes five rows in the database, the return value is 5.

The update return value is crucial for determining whether the update succeeded. If the return value is zero, the update did not affect any rows, which is useful when we want to verify that the operation actually changed something.

To see this in practice, suppose we have a table called "users" with columns "id", "name", and "age", and we want to update the name and age of a specific user:

public int updateUser(User user) {
    try (SqlSession session = sqlSessionFactory.openSession()) {
        int rowsAffected = session.update("updateUser", user);
        session.commit();
        return rowsAffected;
    }
}

In the snippet above, we open a new session with openSession(), perform the update with the update() method, and then commit the changes. The value returned by update() is returned as the result of the operation.

Delete return values

Like update, the delete operation in MyBatis returns an integer indicating the number of rows affected by the delete statement. For example, if we delete three rows from the database, the return value is 3.

The delete return value tells us whether the deletion did anything. If it is zero, no rows were affected by the delete statement, which usually means the targeted row did not exist.

Suppose we have a table called "products" with columns "id", "name", and "price", and we want to delete a specific product:

public int deleteProduct(int productId) {
    try (SqlSession session = sqlSessionFactory.openSession()) {
        int rowsAffected = session.delete("deleteProduct", productId);
        session.commit();
        return rowsAffected;
    }
}

Here we open a new session, perform the delete with the delete() method, and commit the changes. The return value of the delete operation is returned as the result.

In conclusion, the return values of the update and delete operations in MyBatis play a significant role in determining the success or failure of these operations. By checking the return value, we can verify whether the desired changes were made to the database, handle error scenarios, and protect the integrity of our data. Understanding these return values is essential when working with MyBatis for efficient database management.
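As a complement to the XML-mapped examples above, here is a minimal sketch of the same idea using MyBatis annotation-based mappers, with the returned row count used to detect that nothing was changed. The UserMapper interface, the "users" table layout, and the surrounding service method are assumptions made purely for illustration; only the @Update/@Delete annotations, the int row-count return type, and the SqlSession calls are standard MyBatis behaviour.

import org.apache.ibatis.annotations.Delete;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

// Hypothetical annotation-based mapper; the SQL and table layout are assumed.
public interface UserMapper {

    @Update("UPDATE users SET name = #{name}, age = #{age} WHERE id = #{id}")
    int updateUser(@Param("id") long id, @Param("name") String name, @Param("age") int age);

    @Delete("DELETE FROM users WHERE id = #{id}")
    int deleteUser(@Param("id") long id);
}

// Example service method showing how the returned row count can be checked.
class UserService {
    private final SqlSessionFactory sqlSessionFactory;

    UserService(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    void renameUser(long id, String newName, int newAge) {
        try (SqlSession session = sqlSessionFactory.openSession()) {
            UserMapper mapper = session.getMapper(UserMapper.class);
            int rowsAffected = mapper.updateUser(id, newName, newAge);
            if (rowsAffected == 0) {
                // Zero means no row matched the WHERE clause: nothing was updated.
                throw new IllegalStateException("No user with id " + id + " was found");
            }
            session.commit();
        }
    }
}

Checking for a zero row count in this way is a common pattern for surfacing "record not found" or stale-data situations instead of silently doing nothing.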
Crowdsourcing in Software Engineering

Summary
Crowdsourcing is an emerging model of distributed problem solving, and it is widely studied and practiced as a way to support software engineering. This paper reviews the application of crowdsourcing in software engineering. It first reviews the definition of crowdsourcing and derives a definition and classification of crowdsourced software engineering. It then analyzes the software engineering domains, tasks, and applications of crowdsourcing, and summarizes industrial crowdsourcing practices and corresponding case studies in software engineering. Finally, it identifies the trends, open issues, and challenges for future research in crowdsourced software engineering.

Software crowdsourcing: concept and operation process
The term "crowdsourcing" was proposed by Howe in 2006. Crowdsourcing is a distributed problem-solving model that leverages the intelligence of the crowd, and crowdsourced software engineering (CSE) is derived from it.

Software crowdsourcing
Crowdsourced software engineering refers to a model in which an undefined, potentially large online workforce openly undertakes externalized software engineering tasks (see the figure "Domain division of crowdsourced software engineering"). Crowdsourcing works by recruiting a global online workforce for various types of software engineering tasks, such as requirements extraction, design, coding, and testing. This model reduces time-to-market by increasing parallelism, and it reduces cost and defect rates through flexible development capabilities.

(Figure: Domain division of crowdsourced software engineering)

Software crowdsourcing operation process
Crowdsourced software engineering has many potential advantages over traditional software development methods. Crowdsourcing can help software development organizations integrate flexible external human resources to reduce internal employment costs, and it speeds up the development process through a distributed production model (see the figure "The main execution process of software crowdsourcing").

(Figure: The main execution process of software crowdsourcing)

The software crowdsourcing operation process follows Simon's problem-solving model, which consists of two phases: a decision phase and an implementation phase. The decision phase includes three typical stages: definition, design, and selection. The implementation phase consists of an implementation stage followed by a review stage.

Definition: The problem is defined, and the requester should establish the motivation for adopting CSE, analyzing potential issues such as cost, efficiency and scalability, intellectual property, and the quality of crowd work.

Design: The design stage is concerned with developing alternative solutions. This stage may require research into potential options, for example the granularity of tasks, crowd incentives, and the scale of the effort.

Selection: During the selection stage, the alternatives are evaluated. The output of this stage is an actionable decision.

Implementation: The implementation stage is where the decision is finally executed. Both CSE researchers and practitioners need to tackle a range of problems to run a crowdsourcing campaign, for example which intermediary platform should be used to get the job done, and how tasks and workers should be managed.

Review: The final review stage evaluates the results of the implementation.

The above process model can provide guidance for CSE researchers and practitioners.
Several questions behind each stage remain open, pointing to important research directions.

Crowdsourcing Applications in Software Engineering
Crowdsourced applications in software engineering are presented below in subsections organized by software development life cycle activity: software requirements, software design, software coding, software testing and verification, and software maintenance.

Crowdsourcing of Software Requirements Analysis
Requirements analysis is a critical step that affects the whole software project. Traditional stakeholder analysis tools require experts to extract stakeholder information manually. To reduce the cost of relying on experts to engage with stakeholders, Lim et al. developed a large-scale requirements acquisition method, based on social network analysis and collaborative filtering techniques, that integrates support for identifying stakeholders and prioritizing their needs.

Hosseini et al. focus on using crowdsourcing to elicit requirements. By reviewing the existing literature, they summarize the main features of the crowd and of crowdsourcing in crowdsourced requirements engineering, and they reveal the relationship between these features and the quality of the elicited requirements. Wang et al. also use crowdsourcing to elicit requirements and propose a framework for participant recruitment based on spatiotemporal availability; theoretical analysis and simulation experiments demonstrate the feasibility of the framework.

Lay people have also been used to process requirements documents. Extracting requirements from large natural-language text sources is a difficult task when performed manually, yet such manually extracted data are often used as ground truth for evaluation, which limits how far assessments of automatic requirements extraction methods generalize. Breaux and Schaub conducted three experiments in which untrained crowd workers were hired to manually extract requirements from privacy policy documents. The results show that, with the help of a task decomposition workflow, the coverage of manual requirements extraction increased by 16% while the cost was reduced by 60%.

Crowd stakeholders are not only a source of requirements; they can also help with requirements prioritization and release planning. To support crowdsourced requirements engineering activities, Adepetu et al. propose a conceptual crowdsourcing platform called CrowdRequire. The platform employs a competitive model in which groups of people compete to submit requirements specification solutions for customer-defined tasks; the authors also discuss the platform's business model, market strategy, and potential challenges such as quality assurance and intellectual property.

Crowdsourcing of Software Design
In the existing commercial crowdsourcing market there are many platforms that support software interface design, such as DesignCrowd and crowdspring. However, few studies report on the performance of crowdsourced software design.

Huang et al. use crowds to draw mobile application wireframes and design examples on the Internet. Other scholars have proposed a crowdsourcing system called "Phantom" to help designers prototype interactive systems in real time based on sketches and functional descriptions.
Experimental results show that Phantom can achieve more than 90% accuracy on user intent while needing only a few seconds of response time.

Currently, few crowdsourcing platforms support software architecture design; among them, Topcoder is one of the most widely used. However, industry crowdsourcing platforms such as Topcoder are limited in their ability to evolve designs from the solutions of multiple designers. LaToza et al. let designers make preliminary designs and then refine them based on solutions from others. Their research demonstrates the usefulness of recombination in crowdsourced software design, and based on their findings they also highlight recommendations for improving software design competitions.

Nebeling et al. likewise propose improving software designs based on data and functionality contributed by the crowd, although these designs are specific website components within the domain of web engineering. Two preliminary experiments were conducted to demonstrate the performance of the method. Crowd motivation, quality assurance, safety, and intellectual property issues were also widely discussed.

Crowdsourcing of Software Coding
Crowdsourced software coding focuses on three sub-areas: crowd programming environments, program optimization, and integrated development environment (IDE) enhancements.

1) Crowd programming environments: Crowdsourcing intermediaries play a key role in managing and coordinating crowd workers to complete the requesters' tasks, and much research has focused on providing systems that support crowd-based coding tasks. Goldman proposes role-specific interfaces for coordinating collaborative crowd coding efforts; by building Collabode, a real-time web-based IDE, the authors aim to enable emerging, highly collaborative programming models such as crowd programming.

2) Program optimization: Crowdsourcing can also be used for compilation optimization and project integration. One line of work provides a crowdsourced adaptive compiler for JavaScript code optimization: based on application performance data collected from web clients, a compiler flag recommendation system is built in the cloud and used to instruct the compiler to optimize for a particular platform. Three optimized implementations were evaluated on JavaScript code releases for eight platforms, and the best optimization showed an average five-fold increase in execution speed.

3) IDE enhancements: The use of community knowledge to support coding activities in integrated development environments has been studied extensively, and several tools and methods have been proposed to help developers with coding and debugging.

HelpMeOut is a social recommender system that aids debugging with crowdsourced suggestions. The system maintains a database of bug fixes built by crowdsourced developers; to collect fixes, it automatically tracks changes to the code over time and records the actions that turned buggy code into working code. (A minimal illustrative sketch of this kind of crowd-sourced fix lookup appears after this essay.)

BlueFix is an online tool that helps beginners interpret and understand compiler error messages. The tool helps students fix compile-time errors faster, with BlueFix's suggestions reported to be 19.52% more accurate than HelpMeOut's.

Example Overflow is a code search system that leverages community knowledge on Q&A sites to suggest high-quality embeddable code. Seahawk is an Eclipse plugin with goals similar to Example Overflow.
It attempts to leverage crowd knowledge from Q&A sites such as Stack Overflow to provide documentation and programming support.

WordMatch and SnipMatch are two search tools that help developers integrate crowdsourced code snippets. WordMatch provides an end-user programming environment that allows users with no programming experience to generate direct answers to search queries.

Bruch proposed the concept of IDE 2.0 in 2012 (by analogy with Web 2.0), showing how crowd knowledge can help improve features such as API documentation, code completion, error detection, and code search.

Using crowd knowledge to find common examples from the web has similarities to work on automatically fetching actual test cases from web-based systems. Together with the possibility of combining genetic improvement and social recommendation, this similarity points to a possible hybrid approach that draws information from a combination of crowds and networks for testing purposes.

Crowdsourcing of Software Testing
Crowdsourcing of software testing is often referred to simply as "crowdsourced testing" or "crowd testing". Compared with traditional software testing, crowdsourced software testing has the advantage not only of recruiting professional testers but also of enlisting end users in the testing task.

1) Usability testing: Traditional usability testing is labor-intensive, costly, and time-consuming. Recruiting a temporary online workforce is one way to ameliorate these issues, by leveraging a large potential user base and engaging end users at lower labor rates. In 2013, Nebeling et al. proposed a toolkit framework for crowdsourced website usability testing. To identify outliers in crowdsourced usability testing results, Gomide et al. in 2014 proposed a method for automatic hesitation detection using deterministic automata: the user's biofeedback is captured through mouse movements and skin sensors to reveal hesitant behavior, which is highly useful for filtering out unreliable usability test results.

2) Performance testing: Because of differences in user behavior and execution environments, software performance in real-world environments is difficult to test. Masson et al. proposed a method that uses the crowd to measure the actual performance of software products. Research has shown that the method is useful for identifying performance issues and assisting development teams in decision making.

3) GUI testing: Automated GUI test case generation is very difficult, and manual GUI testing is too slow for many applications. Vliegendhart et al. first proposed crowdsourced graphical user interface testing for multimedia applications. Crowd testers recruited from Amazon Mechanical Turk were asked to A/B test the user interface via a remote virtual machine. Experimental results show that it took less than three days and $50 to complete two functional GUI testing tasks (with 100 assignments for each task).

4) Test case generation: Chen et al. propose a puzzle-based automated testing (PAT) environment that decomposes object mutation and constraint solving problems into human-solvable games. Experimental results on two open-source projects show coverage improvements of 7.0% and 5.8% over two state-of-the-art test case generation methods.

5) The oracle problem: Pastore et al.
investigate crowdsourcing as a way to alleviate the oracle problem: they crowdsource automatically generated test assertions both to qualified workers (with programming skills) and to unqualified workers on Amazon Mechanical Turk. Workers were asked to judge the correctness of the assertions and to correct erroneous ones. The experimental results suggest that crowdsourcing is a feasible way to alleviate the oracle problem, although the approach requires skilled workers and well-designed, well-documented tasks. In addition, crowdsourcing has also been applied to general software evaluation and, more specifically, to quality-of-experience evaluation.

Crowdsourcing of Software Maintenance
Software development and maintenance was one of the first areas to benefit from crowdsourcing. To improve scalability, Bacon et al. propose a market-based software evolution mechanism. The goal of this mechanism is not to guarantee absolute "correctness" of the software, but to economically fix the bugs that users care about most. The proposed mechanism allows users to bid on bug fixes (or new features) and rewards the reporters, testers, and developers who respond to bugs.

Software documentation plays a vital role in program understanding. A crowdsourced documentation reuse method based on object inheritance has been proposed and its feasibility confirmed, improving documentation quality and coverage [6]. Documentation crowdsourcing, localization crowdsourcing, and similar practices also greatly improve the maintainability of software.

Most existing crowdsourced software engineering methods employ a single-batch model, applying crowdsourcing to solve a single well-defined task. This resembles the waterfall model used by existing platforms such as Topcoder, which enables the rapid development of practical methodologies for large-scale crowdsourced development work. This may be temporary, however: as the field matures, crowdsourced software engineering may become adaptive and iterative so as to better model the underlying software engineering processes it supports. A recent study shows that iterative recombination can help improve crowdsourced software design. In fact, crowdsourced software engineering involving multiple crowds is naturally an iterative process in which each crowd responds to and influences the outcomes of tasks performed by the other crowds.

In this review, crowdsourced software development models, the major commercial crowdsourcing platforms for software engineering and their corresponding case studies, and the research applications of crowdsourcing in software engineering have been summarized, providing a fine-grained view of crowdsourced software engineering across the software development life cycle. The review also summarizes the challenges, development trends, open issues, and future research directions of crowdsourced software engineering. A comprehensive review of crowdsourced software engineering activities can offer broad creative and design value, and it can guide the realization and improvement of crowdsourced software engineering frameworks by reusing existing crowdsourcing knowledge.
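As promised in the IDE-enhancement subsection above, the following is a deliberately tiny, hypothetical Java sketch of the crowd-knowledge idea behind tools like HelpMeOut: crowd-contributed fixes are stored against the error messages they resolved, and later developers who hit the same error are shown the most frequently confirmed fix. The class and method names, the normalization heuristic, and the in-memory storage are all assumptions for illustration; this is not the implementation of any of the tools discussed above.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a crowd-sourced fix recommender keyed by compiler error messages.
public class CrowdFixRecommender {

    // One crowd-contributed fix with a count of how often it worked for others.
    static final class Fix {
        final String suggestion;
        int confirmations;

        Fix(String suggestion) {
            this.suggestion = suggestion;
            this.confirmations = 1;   // counts the initial report
        }
    }

    private final Map<String, List<Fix>> fixesByError = new HashMap<>();

    // Called when a crowd worker (or an instrumented IDE) reports that applying
    // `suggestion` made the code that produced `errorMessage` compile again.
    public void recordFix(String errorMessage, String suggestion) {
        List<Fix> fixes = fixesByError.computeIfAbsent(normalize(errorMessage), k -> new ArrayList<>());
        for (Fix fix : fixes) {
            if (fix.suggestion.equals(suggestion)) {
                fix.confirmations++;
                return;
            }
        }
        fixes.add(new Fix(suggestion));
    }

    // Returns the most often confirmed fix for this error, or null if none is known.
    public String suggestFix(String errorMessage) {
        List<Fix> fixes = fixesByError.get(normalize(errorMessage));
        if (fixes == null || fixes.isEmpty()) {
            return null;
        }
        Fix best = fixes.get(0);
        for (Fix fix : fixes) {
            if (fix.confirmations > best.confirmations) {
                best = fix;
            }
        }
        return best.suggestion;
    }

    // Strip identifiers and numbers so similar errors map to the same key (assumed heuristic).
    private static String normalize(String errorMessage) {
        return errorMessage.toLowerCase().replaceAll("'[^']*'", "'_'").replaceAll("\\d+", "N");
    }
}

Real systems of this kind also record code context and track which suggested fixes users actually accept, but the core idea is the same: aggregate the crowd's past repairs and rank them by how often they worked.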
The Strategic Pyramid of Contract Management

Effective contract management is a critical component of any successful business operation. It involves a comprehensive approach that encompasses the entire lifecycle of a contract, from its inception to its completion. The strategic pyramid of contract management provides a framework for organizations to optimize their contract management processes and achieve their desired outcomes.

At the foundation of the strategic pyramid lie the fundamental principles of contract management. These principles serve as the bedrock upon which all other aspects of the process are built. They include a thorough understanding of contract law, the ability to effectively negotiate and draft contracts, and the implementation of robust record-keeping and documentation practices.

Building upon this foundation, the next level of the pyramid focuses on the operational aspects of contract management. This involves the development and implementation of standardized processes and procedures for contract creation, review, approval, and execution. It also encompasses the establishment of clear roles and responsibilities within the organization, ensuring that all stakeholders understand their respective duties and accountabilities.

The third level of the strategic pyramid is the integration of technology into the contract management process. Rapid advancements in technology have revolutionized the way organizations manage their contracts. Contract management software, digital signatures, and cloud-based storage solutions have all contributed to the increased efficiency and transparency of contract management. By leveraging these technological tools, organizations can streamline their processes, reduce the risk of errors, and improve overall contract visibility.

At the apex of the strategic pyramid lies the strategic dimension of contract management. This level focuses on the alignment of contract management with the organization's overall business objectives and strategic goals. It involves the development of a comprehensive contract management strategy that considers the organization's long-term growth, risk management, and competitive positioning.

To implement this strategic pyramid effectively, organizations must adopt a holistic and collaborative approach. This requires the involvement and commitment of cross-functional teams, including legal, procurement, finance, and operations, to ensure that all aspects of contract management are addressed.

Moreover, successful implementation of the pyramid requires a continuous process of monitoring, evaluation, and adaptation. As the business environment and regulatory landscape evolve, organizations must be prepared to adapt their contract management practices to remain competitive and compliant.

In conclusion, the strategic pyramid of contract management provides a comprehensive framework for organizations to optimize their contract management processes and achieve their desired outcomes. By embracing its principles, operational practices, technological integration, and strategic alignment, organizations can unlock the full potential of their contract management efforts and drive sustainable growth and success.
Amin Ahmad amin.ahmad@ Certifications and Training∙IBM Certified Solution Developer – XML and Related Technologies.∙Sun Certified Web Component Developer for Java 2 Platform, Enterprise Edition (310-080). ∙Sun Certified Programmer for the Java 2 Platform (310-025).∙IBM MQSeries training.EducationBowling Green State UniversityBachelor of Science, majoring in Computer Science and in Pure Mathematics∙Graduated summa cum laude (cumulative grade point average 3.95/4.00)Experience Summary∙Eight years of cumulative industry experience including seven years of experience as a Java EE developer and one year as an SAP ABAP technical developer. Extensive experience in the financial sector, with Fortune 500 firms, and with large state governments.∙Served as software architect for large and medium-sized systems, as a technical, and as a framework/component designer.∙Excellent experience with important programming methodologies including: model driven architectures, object-oriented, component/container-oriented, and aspect-oriented design.Strong knowledge of object-oriented patterns including most patterns referenced by the Group of Four. Good knowledge of J2EE design patterns, including most design patterns covered in Sun’s Java EE Design Patterns guide.∙Extensive Java EE design and implementation experience using a full spectrum of technolo-gies include Servlets, Java Server Pages, Enterprise Java Beans, and JMS. Experienced in designing for very large systems handling millions of transactions per day.∙Extensive experience designing and implementing Eclipse RCP applications (utilizing OSGi), as well as writing plug-ins for the Eclipse platform. Extensive experience with the Eclipse Modeling Framework (EMF) and the Graphical Editor Framework (GEF).∙Strong background in rich client development using Swing and SWT/JFace toolkits.∙Excellent industry experience developing efficient, portable, web-based user interfaces using HTML, DHTML, JavaScript, CSS 2, and the Google Web Toolkit. Some SVG experience.∙Strong experience utilizing various XML server and client-side technologies in J2EE solu-tions, including DTD, W3C Schema, DOM 2, DOM 3, SAX, XPath, XSL-FOP, XSL-T, JAXB, Castor XML, and, to a lesser extent, JiBX and JDom.∙Extensive systems integration experience using IBM MQSeries, and including both the IBM MQSeries API for Java and the JMS API. Familiarity with installing and administering a va-riety of JMS providers, including IBM MQSeries 5.20, OpenJMS 0.7.4, JORAM, SunOne Message Queue, and SwiftMQ. Strong JMS knowledge, including hands-on experience with implementation of server-session pooling.∙Good experience with SAP R/3 3.1H in the Plant Management, Sales and Distribution, and Finance modules. Areas of experience include basic reporting, interactive reporting, BDC ses-sions, dialog programming, program optimization (especially OpenSQL query optimization), layout sets (debugging), OLE, and general debugging tasks.Resume of Amin Ahmad 1 of 17∙Good experience administering BEA WebLogic 8 and 9, including configuring clustering.Also experienced in deploying J2EE applications to IBM WebSphere (2.03, 3.5, 4.0, 5.0), Jakarta Tomcat (3, 4, 5, and 6 series).∙Experience with JDBC 2, SQL, and a number of RDBMS systems including DB2, UDB, Oracle, Firebird (Interbase branch), MySQL, and Access. 
Have defined schemas in the fifth normal form (5NF) through DDL following SQL 92 standards, and of implementing indices, synthetic keys, and de-normalization as appropriate to improve performance.∙Experience with UML design, especially using sequence and class diagrams. Experience with Rational Rose and Omondo UML.∙Excellent experience with several industry-standard Java integrated development environ-ments, including Visual Age 3.5, Eclipse 1.0, 2.1.x, 3.x, and, to a lesser extent, WebSphere Studio Application Developer 4.0 and 5.0, and Forte for Java, Community Edition.∙Experienced in designing and implementing modular build systems with inbuilt quality con-trol mechanisms using Ant 1.5 and Ant 1.6.∙Good experience with many quality control and testing tools, including JUnit, JProbe, hprof, OptimizeIt, Pasta Package Analyzer, CodePro Code Audit, Clover Code Coverage, and Sun DocCheck Doclet.∙Excellent experience with SVN and CVS version control systems, including Eclipse integra-tion. Some experience with CMVC, including authoring a CMVC plug-in to provide Eclipse1.0 and WSAD integration.∙Good experience with issue management systems including Bugzilla and Mantis. Experience installing and administering Mantis.∙Some Python 2.4.1 scripting experience.∙Enjoy typesetting using the Miktex distribution of TeX and LaTeX. Experienced in typeset-ting Arabic-script languages using ArabTex.∙ A background in helping others improve their technical skills: Mentored junior and mid-level programmers; published a variety of articles including for IBM DeveloperWorks; served as a technical interviewer for American Management Systems and as an instructor on Java funda-mentals for American Express; participated as a presenter in several J2EE Architecture fo-rums at American Express.∙Demonstrated desire to contribute to the open source software community. I maintain five GPL and LGPL software projects.∙Excellent communication skills and a strong desire to develop high-quality software. Employment HistoryCGI, Inc. October 2006 – August 2007Sr. Java EE consultant for Florida Safe Families Network (FSFN)—a large welfare automation project servicing approximately 70,000 transactions per hour. Responsible for system integration, security integration, team mentoring, and infrastructure development.∙Responsible for integrating FSFN authentication with IBM Tivoli Directory Server. Designed and implemented a JNDI-based solution using LDAP custom controls to parse extended serv-er response codes. The solution also implemented a custom pooling solution that was stress tested to over 5,000 authentications per minute.Resume of Amin Ahmad 2 of 17∙Designed and implemented a single sign on mechanism for FSFN and the Business Objects-based reporting subsystem. Designed and supervised implementation of data synchronization logic between the two systems.∙Involved in design and implementation of auditing infrastructure, whereby the details of any transaction in the system can be recalled for later analysis.∙Extensive infrastructure development, including:1.Configuration of WebLogic 9 server environments, include clustered environments. Ex-tensive work with node manager and Weblogic plugin for Apache. Implemented automat-ic pool monitor/re-starter tool using JMX.2.Developed and implemented improved standards for build versioning. Automated a va-riety of build, deployment, and code delivery tasks, including JSP compilation usingJSPC for WebLogic. 
This involved extensive use of Ant 1.7 and bash scripting.3.Extensive configuration of Apache, including configuring proxies, reverse proxies, instal-ling PHP, and the Apache Tomcat Connector.4.Created and administered MoinMoin 1.5.7 wiki to track environment information and toserve as a general developer wiki.5.Designed and implemented backup jobs and automated system checks using cron.∙Collaborated on design and implementation of search modules, including person search.1.Double metaphone (SOUNDEX), nickname, wildcard, and exact searches based on first,middle and last names. Over twenty available search criteria including a variety of demo-graphics such as age and address.2.Round trip time of under 10 seconds for complex searches returning 100,000+ results,and near-instantaneous results for most common searches. Achieving this level of per-formance required extensive manipulation of clustering and non-clustering indexes on the DB2 8.1 mainframe system, as well as work with stored procedures, JDBC directional re-sult sets, and absolute cursor positioning.∙Designed and implemented various data migration tools in Java.∙Tasked with ensuring that all developers were able to work productively on their topics. This mainly involved helping developers troubleshoot Struts, JSP, and JavaScript issues, and im-proving their knowledge of these languages and frameworks.Freescale Semiconductor August 2006 – September 2006 Collaborated on design and development of Eclipse 3.2.x-based port of Metrowerks CodeWar-rior’s command window functionality.∙Implemented a wide variety of shell features including tab-based auto-completion, auto-completion of file system paths, scrollable command history, user-configurable font face and color, and custom key bindings. This required, among other things:1.Implementation of custom SWT layouts.2.Extensive use of JFlex for lexical analysis of output data streams for auto-coloration. Resume of Amin Ahmad 3 of 173.Changes to standard image conversion process to preserve alpha values when movingbetween Sun J2SE and SWT image formats.Webalo January 2006 – August 2006 Sr. Software EngineerResp onsible for research and development of Webalo’s suite of Java-based, web-service based mobile middleware.∙Implemented a Java MIDP 2.0, CLDC 1.1-based ―Webalo User Agent‖. The Webalo User Agent resides on a user’s mobile device and allows interaction with enterprise data exposed through web services.1.Conducted research of the U.S. mobile phone market to (a) determine compatibility ofthe MIDP Webalo User Agent with phones currently in use, and to (b) determine if sim-ple changes to the technology stack could improve its compatibility.2.Implemented a custom data-grid component, optimized for low-resolution devices, withadvanced features such as automatic layout, tooltip support, column selection, and drill down support. Conducted extensive testing against a Samsung A880 device.3.Implemented a high performance, memory efficient XML parser suitable for use in anembedded environment, using JFlex 1.4.1. The parser has a smaller codebase, smaller memory footprint, faster parsing times, and fewer bugs than the older, hand-written re-cursive-decent parser.4.Focused on optimizing distribution footprint of the Webalo User Agent without sacrific-ing compatibility. 
Ultimately reduced size from 1.2MB to 270KB using an extensive tool chain including Pngcrush, ImageMagick 6.2.9, ProGuard 3.0, and 7-zip.5.Extensive use of Sun’s Wireless Java Toolkit 2.3 and 2.5, EclipseME, and AntennaHouse J2ME extensions for Ant. To a lesser extent, performed testing of the Webalo User Agent using MIDP toolkits from Samsung, Motorola, Sprint, and Palm (Treo 650 and 700 series).∙Designed and implemented JMX-based system monitoring for the Webalo Mobile Dashboard.Also, implemented a web-based front-end for accessing system monitoring information.∙Worked on the design and implementation of a Flash-based, mobile mashup service designer.1.Extended Webalo’s existing, proprietary tool for converting Java code to ActionScript byadding support for converting inner classes, anonymous inner classes, overloaded me-thods, and shadowed fields. Use was made of JavaCC and The Java Language Specifica-tion, Second Edition.2.Designed and implemented a framework for remote service invocation from the Flashclient to the Java EE-based server. Significant features of the framework include an XML-based serialization format and a remote proxy generator written in Java.3.Wrote Flash-based UI Framework on which the various mashup-related wizards thatcomprise the product are implemented.4.Implemented a library of XML functions with an API that is compatible in both Java andFlash environments.eCredit August 2005 – August 2006 Responsible for the end-to-end design and implementation of a visual authoring environment for commercial credit and collections processes.Resume of Amin Ahmad 4 of 17∙The authoring environment is implemented atop the Eclipse Rich Client Platform (Eclipse RCP), version 3.1, running Java 5.0. Design and development are bridged through the use of a model driven architecture, using EMF, version 2.2. The visual editor component of the system is implemented using the Graphical Editor Framework (GEF).∙Business processes are serialized into XML format for easy interchange with other systems. ∙Implemented a Windows-based installer using the Nullsoft Scriptable Install System (NSIS). Ohio Department of Taxation June 2005 – August 2005 Independent ConsultantConsulted on the design and implementation of a J2EE-based taxation system for handling taxpayer registration, returns, and workflow requirements. The system has the following characte-ristics:∙DB2 v8.0 was used as a data store and triggers were utilized extensively to implement data auditing requirements.∙Business logic was implemented within WebSphere 5.0 using EJB session and entity beans. ∙ A standard Model 2 architecture was implemented utilizing Jakarta Struts and the Tiles Document Assembly Framework.Expertise was also provided in the following areas:∙Implementation of a data access service. Wherever possible, the layered service implements component design principles such as inversion-of-control. Key features include:1.Automatic data pagination, which is a key to maintaining service level requirements.2. A scheme to decouple the work of query authors from the user interface designers.3.All aspects of a data access instance (query) are located in a cohesive, class-based unit. ∙Reviewed logical schema model. 
Provided feedback for indices, normalization opportunities, as well as general enhancements in the light of business requirements.∙Design of database population and deletion scripts.∙Definition of a strategy for automated quality assurance using Load Runner.∙Extensions to Struts custom tags to improve productivity of the user interface team.∙Training and mentoring developers.NYSE, New York Stock Exchange February 2005 – May 2005 Independent ConsultantServed as a senior consultant for the design and implementation of a fraud tracking system. Business logic was implemented within WebSphere 5.0, while the front-end rich client was implemented atop the Eclipse platform. Expertise was provided in the following areas:∙Formulated a JUnit-based unit testing strategy for server side business components, and provided an initial proof-of-concept.∙Proposed procedures for basic incident tracking, as well as procedures for utilizing JUnit for regression testing of incidents. Provided a proof-of-concept system using Mantis 1.0.0 run-ning on Apache HTTP Server and MySQL.∙Audited logging procedures within server-side business logic and made several proposals, including the use of Log4J nested diagnostic contexts to improve the ability to correlate log-ging statements.∙Audited server-side security model and issued several recommendations.∙Implemented a rigorous type system to increase front-end developer productivity and provided a detailed, three-stage roadmap for its evolution.Resume of Amin Ahmad 5 of 17∙Performed a comprehensive audit of the rich-client tier of the application and issued a findings report. Also performed a small-scale feasibility analysis on the use of the Eclipse Modeling Framework to improve productivity.∙Designed and implemented a data grid component to standardize display and manipulation of tabular data within the system. Key features include:1. A user interface modeled after Microsoft Excel and Microsoft Access. Features of theuser interface include column reordering, column show/hide, record-set navigator, selec-tion indicators, and row numbering.2.Support for multi-column sorting, multi-column filter specification using Boolean sheetsand supporting a variety of match operators including regular expressions.3.Support for persistent sort and filter profiles, column ordering, and column widths usingthe java.util.prefs API.4.Microsoft Excel data export.5. A developer API that greatly simplifies loading data stored in Java Beans.6.Data formatting is tightly integrated with the application’s type system.eCredit September 2004 – January 2005 Served as a senior resource in the design and implementation of the Mobius Process Development Environment (PDE). The PDE, which serves as a business rules and process flow authoring sys-tem for the Mobius business process engine, is provided as a feature for Eclipse 3.1 and Java 5.0. ∙Designed and implemented a shared workspace using Jakarta Slide 2.1 WebDAV repository.The client view of the repository is represented using an EMF 2.1 native ECore model that is loaded and persisted during plug-in activation and deactivation, and provides support for workspace synchronization, asynchronous deep and shallow refresh, locking and unlocking of resources, check-in and check-out capabilities, and a drag and drop operation for resources. In addition, a WebDAV-compliant recycle bin was implemented. Etag-based content caching was implemented to improve workspace performance. 
Responsiveness was increased through heavy use is made of the Eclipse 3.0 Job API.∙Designed and implemented a secure licensing model for the PDE. Licenses, which contain roles and their expiration dates for a specific principal, are obtained from a licensing server after providing a valid license key. Licenses are signed by the licensing server to prevent modification and encrypted us ing the PDE client’s public key to prevent unauthorized access.Implementation entailed the use of strong (2048 bit) RSA-based X.509 certificates for license signing and symmetric key encryption, as well as the AES symmetric cipher (Rijndael va-riant) for bulk data encryption. Heavy use is made of the Sun JCE implementation and the Bouncy Castle 1.25 JCE implementation.∙Designed and implemented a runtime view of business process servers. Wizards were included for adding new servers, and, existing servers can be visually explored and operated on. For example, servers can be started and stopped, new business processes can be deployed on particular servers, and runtime logs for particular processes can be opened. The client view of the runtime environment is represented as an EMF 2.1 annotated Java ECore model. The view provides Job-based asynchronous refresh for enhanced responsiveness.∙Implemented web services-based integration with business process servers. This involved the use of SOAP, SAAJ, and JAX-RPC, including the use of wscompile for client-side stub gen-eration.Resume of Amin Ahmad 6 of 17∙Implemented editors and viewers for several types of XML documents in the Mobius system.Models, edit frameworks, and basic editors were automatically generated from W3C schemas using EMF 2.1. Editors were then heavily customized to includea.Master-detail block support.b.Writeable and read-only modes that are visually indicated through a customization tothe toolbar. Read-only mode is automatically enabled whenever a file is opened butnot locked by the current user,c.Automatic upload to WebDAV repository as part of the save sequence.d.Enforcement of semantic constraints using problem markers for edited files.∙Designed and implemented five wizards using the JFace Wizard framework.∙Provided Eclipse mentoring for eCredit employees assigned to the project.Islamic Society of Greater Columbus February 2004 – June 2004 Provided pro bono consulting expertise towards the development of a web-enabled membership and donation database. This system served to streamline the workflow of the organization and represented an upgrade from the old fat-client system written in Microsoft Access, which suffered for many problems including data synchronization and an inefficient user interface. In addition, a variety of reporting and mass mailing features were added.∙The data tier was implemented using the Firebird 1.5 RDBMS. The schema was normalized to the fifth normal form (5NF), and synthetic keys and indices were added to optimize per-formance. Name searching was optimized through indexed soundex columns in the individual entity information table.∙ A data converter and cleanser was implemented to move data from the old data tier (Microsoft Access) to the new one (Firebird).∙The application tier utilized the model-view-controller architecture pattern and was imple-mented using Java Servlets and Java Server Pages. 
Data access utilized the data access object pattern.∙The presentation tier made heavy use of CSS2 to minimize data transfer requirements while maximizing cross browser portability of the application.∙The application was deployed as a WAR file to Apache Tomcat 5.0, running on a Windows XP system and was tested using both Internet Explorer 6 and Mozilla Firebird 0.8 browsers. CGI-AMS August 2003 – June 2004 Independent ConsultantConsulted as an architect for the Office Field Audit Support Tool (OFAST) project, a large project for the Ohio Department Taxation employing approximately twenty-five full-time staff, involving the design, development, testing, and deployment of a Swing-based auditing tool used by tax auditors across the state. Tax rules were coded in a custom language optimized for rules processing.∙Responsible for designing and implementing the architecture of the OFAST user interface.The architecture has the following key components:1.Screen definitions are stored in XML files and are dynamically built at runtime fromthose definitions. A W3C XML Schema, veri fied using Sun’s MSV 1.5 and IBM Web-Sphere Studio 5.0 schema validation facilities, was authored for the screen definition lan-guage.Resume of Amin Ahmad 7 of 172. A data grid type whose values can be bound dynamically to an entity array. The data gridalso supports an Excel 2000 look and feel, dynamic column sorting with visual indicators for ascending, descending, and default sort orders, line numbers, multiple layers of trans-parency, cell editors and renderers that vary from row to row in the same column, and da-ta cells whose values can be dynamically bound to reference data groups stored in the database (and are hence selectable through a combo box).3.An extensible strong-typing system for managing user input. A type contains operationsfor data validation, data parsing, and data formatting, as well as operations for returning a default renderer and editor. A type’s operations are strictly defined mathematically ther e-by enhancing flexibility and maintainability.4. A form header component that allows forms to be associated with a suggestive image, atitle, and instructions. Instructions can be dynamically modified at runtime with warning and error messages.5. A wizard framework to allow complex, sequential operations that comprise many logicalpages to be developed rapidly. Built in type validations.∙Architected and implemented the user interface and business logic tiers for the Corporate Franchise Tax audit type. Additionally, participated in functional requirements analysis as well as functional design activities.Tax forms, which constitute the basis for an audit, vary from year to year and are dynamically generated from database metadata information. In addition, corporate tax forms are inter-linked, receiving values from one another during the course of computations. 
The user inter-face makes heavy use of Swing, especially tables, and provides a rich set of features to sup-plement standard data entry operations, including, but not limited to: visual indicators for overridden, sourced, and calculated fields; line and form-level notes, cell-specific tooltips to indicate sourcing relationships; dynamic, multi-layered line shading to indicate selected lines and lines with discrepancies; custom navigation features including navigate-to-source func-tionality.∙Mentored developers and introduced more rigorous quality control procedures into the application development lifecycle, including: implementation of an Ant 1.5 and 1.6-based modular build system, W3C Schemas for all XML document types, and a number of standa-lone quality-control programs written using Jakarta CLI.∙Designed a visual editor for creating and editing user interface definition files using the Eclipse Modeling Framework 2.0 and W3C XML Schema 1.1.∙Designed and implemented an integrated development environment for the custom rules proc-essing language in Java Swing. The development environment provides the following features to developers:1.File and file set loading for entity definition files, trace files, user interface definitionfiles, and decision table files as well as the ability to export and import file sets.2.File viewers for each of the file types, including syntax highlighting (using the JEdit Syn-tax Package) and tree views for XML files. The trace file view includes integrated step-into support to restore the state of the rules engine to a given point in the program execu-tion.3.An entity view to support viewing individual entities and their metadata.4. A three-page wizard to guide users through the process of creating new decision tables.This wizard includes sophisticated regular expression checks (using Jakarta ORO and Perl 5 regular expressions) on the new decision table, as well as a number of other integ-rity checks.Resume of Amin Ahmad 8 of 175. A table editor component consisting of functions for compiling, deleting, saving, andvalidating decision tables, as well as four sub-screens for providing access to table data.The component also automatically logs and tracks modification histories for every table, in addition to performing transparent validation of table structure. The first screen pro-vides access to table metadata as well as read-only access to the audit log for the table.The second screen supports table modifications and includes an editor that supports auto format plus diagnostic compile features. The third screen supports a call-to view, show-ing which tables are called from the current decision table. Finally, the fourth screen pro-vides read-only access to the raw XML source for the table. Manipulation of the decision table document was accomplished using DOM 3 features in Xerces.6. A feature to export documentation for all decision tables, including comments and formaldeclarations, into hyperlinked HTML files for easy viewing in a browser.7.An expression evaluator for dynamically executing actions and conditions within the cur-rent context and displaying the results.8.Stack explorers that allow the entities on the rules e ngine’s runtime stacks to be recu r-sively explored, starting from either the data or entity stacks. Every entity’s attributes, i n-cluding array, primitive, and entity-type attributes, can be inspected.9. A navigator view that allows convenient access to all resources loaded into the IDE. 
Thenavigator view dynamically updates as resources are added or removed, and provides support for decorators to further indicate the state of the loaded resources. For example, active resources use a bold font, unsaved resources include a floppy disk decorator and an asterisk after the name, and invalid decision tables have a yellow warning sign decorator.er experience optimized for JDK 1.4, but supports graceful degradation under JDK 1.3using reflection.11.Support for internationalization through the use of resource bundles.∙Designed and supervised implementation of POI-based Microsoft Excel integration to replace existing Jawin COM-based integration. Resulting implementation was an order of magnitude faster than COM-based implementation and used considerably less memory as well.∙Provided two four-hour training seminars as well as a number of shorter training lectures for state employees. The purpose of the lectures was to provide junior-level programmers with the necessary information to begin programming in the OFAST environment.American Express May 2002 – August 2003 Worked as a senior developer and junior architect within the Architecture Team of American Ex-press’s Interactive Technologies Division.Standards Governance∙Involved in authoring several internal, strategic, position papers regarding J2EE strategy within American Express. For example, I was involved in defining a strategy for enterprise services.∙Designed and taught key parts of an internal Java training class for American Express Employees. Developed a curriculum consisting of lectures, programming assignments, and group exams to prepare students for the Java Programmer Exam.∙Lectured on Java, XML manipulation technologies, and data binding at several Architecture Forums and Java Forums.ConsultingProvided consulting to application teams. Generally consulting requests fell into one of three cate-gories. Overall, dozens of consulting requests were addressed.Resume of Amin Ahmad 9 of 17。
Paper presented at the Information Systems Conference ofNew Zealand, Palmerston North, New Zealand.Kovalerchuk, B., Vityaev, E., Ruiz, J.F. (2000). Consistent knowledge discovery in medical diagnosis. IEEE Engineering in Medicine and Biology Magazine,19(4), 26-37.Lee, S., Abbott, & P. (2003). Bayesian networks for knowledge discovery in large datasets: basics for nurse researchers. Biomed Inform, 36, 389-399.Little, J.D. (1970). Models and Managers: The Concept of a Decision Calculus.Management Science, 16(8), 466-485.Lord,S., Genski, V., & Keech, C. (2004). Multiple analyses in clinical trials: sound science or data dredging? Medical Journal of Australia, 181(8), 452-454. Lyman, J., Boyd, J., & Dalton, J. (2003). Applying the HL7 reference information model to a clinical data warehouse. Paper presented at the IEEE International Conferemce on Systems, Man and Cybernetics, 2003., Washington DC, USA. Mackinnon, J., & Glick, N. (1999). Data Mining and Knowledge Discovery in Databases - An Overview. Australian and New Zealand Journal of Statistics,41(3), 255-275.Mallach, E. (2000). Decision Support and Data Warehouse Systems, New York: Irwin McGraw-Hill.Marakas, G.M. (2002a). Decision Support Systems in the 21st Century. Upper Saddle River, New Jersey: Prentice Hall.Marakas, G.M. (2002b). Modern Data Warehousing, Mining and Visualisation.Upper Saddle River, New Jersey: Prentice Hall.Masuda, G., Sakamoto, N., & Yamamoto, R. (2002). A Framework for Dynamic Evidence Based Medicine using Data Mining. Paper presented at the 15thIEEE Symposium on Computer-Based Medical Systems, Maribor, Slovenia. Masuda, G., & Sakamoto, N. (2002). A framework for dynamic evidence based medicine using data mining. Paper presented at the 15th IEEE Symposium onComputer-Based Medical Systems, 2002. (CBMS 2002), Maribor, Slovenia. Mathews, J.R. (1995). Quantification and the Quest for Medical Certainty.New Jersey: Princeton University Press.Matsumoto, T., Ueda, Y., & Kawaji, S. (2002). A software system for giving clues of medical diagnosis to clinician. Paper presented at the 15th IEEE Computer-Based Medical Systems, 2002. (CBMS 2002), Maribor, Slovenia. McCarthy, J. (2000). Phenomenal Data Mining: From Data to Phenomena. ACM SIGKDD Explorations Newsletter, 1(2), 24-29.McGregor, C., Bryan, G., Curry, J., Tracey, M. (2002). The e-Baby Data Warehouse:A Case Study. Paper presented at the 35th Hawaii International Conference onSystem Sciences, Hawaii, USA.Miquel, M., & Tchounikine, A. (2002). Software components integration in medical data warehouses: a proposal. Paper presented at the 15th IEEE Symposium on Computer-Based Medical Systems, 2002. (CBMS 2002), Maribor, Slovenia. Ohsaki, M., Sato, Y., Kitaguchi, S., Yokoi, H., & Yamaguchi, T. (2004). Comparison between objective interestingness measures and real human interest inmedical data mining. Paper presented at the 17th International Conference onInnovations in Applied Artificial Intelligence, Ottawa, Canada.Ohsaki, M., Kitaguchi, S., Yokoi, H., & Yamaguchi, T. (2005). Investigation of Rule Interestingness in Medical Data Mining. Active Mining, Springer(3430), 174-189.Pedersen, T., & Jensen,C. (1998). Research Issues in Clinical Data Warehousing.Paper presented at the 10th International Conference on Scientific andStatistical Database Management, Capri, Italy.Piantadosi, P. (1997). Clinical Trials- A Methodologic Perspective (1st ed.). New York: John Wiley & Sons.Podgorelec, V., Kokol, P., & Stiglic, M. (2002). Searching for new patterns in cardiovascular data. 
Paper presented at the 15th IEEE Symposium onComputer-Based Medical Systems, 2002. (CBMS 2002), Maribor, Slovenia. Popp, R., Armour, T., Senator, T., & Numryk, K. (2004). Countering Terrorism Through Information Technology. Communications of the ACM, 47(3), 36-43. Povalej, P., Lenic, M., Zorman, M., Kokol, P., Peterson, M., & Lane, J. (2003).Intelligent data analysis of human bone density. Paper presented at the 16thIEEE Computer-Based Medical Systems, 2003, New York, New York. Qiao, L., Agrawal, D., & Abbadi,A. (2003). Supporting Sliding Windows Queries for Continuous Data Streams. Paper presented at the 15th InternationalConference on Scientific and Statistical Database Management, Cambridge,MA, USA.Raghupathi, Winiwarter, Werner, & Tan,J. (2002). Strategic IT Applications in Health Care. Communications of the ACM, 45(12), 56-61.Rao, R., Niculescu, R., Germond, C., Rao, H. (2003). Clinical and Financial Outcomes Analysis with Existing Hospital Patient Records. Paper presented at the SIGKDD, Washington DC.Rindfleisch, T. (1997). Privacy, Information Technology and Health Care.Communications of the ACM, 40(8), 93-100.Robinson, J.B. (2005). Understanding and Applying decision support systems in Australian farming systems research. University of Western Sydney, Sydney. Roddick, J., Fule, P., & Graco,W. (2003). Exploratory Medical Knowledge Discovery: Experiences and Issues. SIGKDD Explorations Newsletter, 5(1),94-99.Roiger, R., & Geatz, M. (2003). Data Mining, England: Addison Wesley.Sabou, M., Wroe, C., Goble, C., & Mishne, G. (2005). Learning Domain Ontologies for Web Service Descriptions: an experiment in Bioinformatics. Paperpresented at the IW3C2, Chiba, Japan.Sackett, D., Rosenberg, W., Muir Gray, J., Haynes, B., & Scott-Richardson, W.(1996). Evidence based medicine: what it is and what it isn't. British MedicalJournal, 312, 71-71.Schubart, J., & Einbinder, J. (2000). Evaluation of a data warehouse in an academic health sciences center. International Journal of Medical Informatics, 60(3),319-333.Simon, H.A. (1960). The New Science of Management Decision. New York: Harper and Collins.Summons, P., Giles, W., & Gibbon,G. (1999). Decision Support for Fetal Gestation Age Estimation. Paper presented at the 10th Australiasian Conference onInformation Systems, Wellington, New Zealand.Susman, G.I, & Evered, R.D. (1978). An Assessment of the Scientific Merits of Action Research. Administrative Science Quarterly, 23, 582-603.Sydney South West Area Health Service. (2006). Retrieved 27th December 2005, 2005, from .au/Service_Facility.aspx Tsymbal, A., Cunningham, P., Pechenizkiy, M., & Puuronen, S. (2003). Search strategies for ensemble feature selection in medical diagnostics. Paperpresented at the 16th IEEE Symposium on Computer-Based Medical Systems, 2003, New York, New York.Turban, E., & Aronson J. (2001). Decision Support Systems and Intelligent Systems.Upper Saddle River, NJ: Prentice Hall.Upadhyaya, S., & Kumar, P. (2005). ERONTO: A Tool for Extracting Ontologies from Extended E/R Diagrams. Paper presented at the SAC'05, Santa Fe, NewMexico, USA.Vassiliadis, P., Simitsis, A., & Skiadopoulos, S. (2002). Conceptual Modeling for ETL Processes. Proceedings of 5th ACM International Workshop on datawarehousing and OLAP, McLean, VA, USA, 14-21.Wang, H., Fan, W., Yu, S., & Han, J. (2003). Mining concept-drifting data streams using ensemble classifiers. Paper presented at the 9th ACM SIGKDD,Washington DC, USA.Warren, J., & Stanek, J. (2005). Decision Support Systems. 
In Conrick & M (Eds.), Health Informatics Transforming Healthcare with Technology (pp. 252-265).Melbourne: Thomson.Webb, G., Han, J., & Fayyad, U. (2004). Panel Discussion. Paper presented at the 8th Pacific Asia Knowledge Discovery in Data, Sydney, Australia.Webb, G.I. (2001, August 2001). Discovering associations with numeric variables.Paper presented at the 7th ACM SIGKDD international conference onknowledge discovery and data mining, Boston, MA, USA.Webb, G.I, Butler, S., & Newlands, D. (2003). On detecting differences between groups. Paper presented at the 9th ACM SIGKDD international conference on knowledge discovery and data mining, Washington DC, USA.Webb, G.I. (2000). Efficient search for association rules. Paper presented at the 6th ACM SIGKDD international conference on knowledge discovery and datamining, Boston, MA, USA.Wong, M.L., Lam, W., Leung, K. S., Ngan, P. S., & Cheng, J.C.Y. (2000).Discovering knowledge from medical databases using evolutionoryalgorithms. IEEE Engineering in Medicine and Biology Magazine, 19(4), 45-55.Xintao, W., & Daniel, B. (2002). Learning missing values from summary constraints.ACM SIGDDD Explorations Newsletter, 4(1).Xu, Z., Cao, X., Dong., Y., & Wenping, S. (2004). Formal Approach and Automated Tool for Translating ER Schemata into OWL Ontologies, 8th Pacific AsiaKnowledge Discovery in Data Conference, 2004, Sydney, Australia.Yu, C. (2004). A web-based consumer-oriented intelligent decision support system for personalized e-service. Paper presented at the 6th International conference onelectronic commerce ICEC '04, Delft, The Netherlands.Yu, & P. (2004). Keynote Address. Paper presented at the 8th Pacific Asia Knowledge Discovery in Data, Sydney, Australia.Zaidi, S., Abidi, S., & Manickam, S. (2002). Distributed data mining from heterogeneous healthcare data repositories: towards an intelligent agent-based framework. Paper presented at the 15th IEEE Symposium on Computer-Based Medical Systems, 2002. (CBMS 2002)Maribor, Slovenia. Zdanowicz, J. (2004). Detecting Money Laundering and Terrorist Financing with Data Mining. Communications of the ACM, 47(5), 53-55.Zeleznikow, J., & Nolan, J. (2001). Using Soft Computing to build real world intelligent decision support systems in uncertain domains. Decision SupportSystems, 31, 263-285.Zorman, M., Kokol, P., Lenic, M., Povalej, P., Stiglic, B., & Flisar, D. (2003).Intelligent platform for automatic medical knowledge acquisition: detectionand understanding of neural dysfunctions. Paper presented at the 16th IEEEsymposium on Computer-Based Medical Systems, 2003, New York, NewYork.。
A Framework for XML-based Integration of Data, Visualization and Analysis in a Biomedical Domain

N. Bales, J. Brinkley, E. S. Lee, S. Mathur, C. Re, and D. Suciu
University of Washington

Abstract. Biomedical data are becoming increasingly complex and heterogeneous in nature. The data are stored in distributed information systems, using a variety of data models, and are processed by increasingly more complex tools that analyze and visualize them. We present in this paper our framework for integrating biomedical research data and tools into a unique Web front end. Our framework is applied to the University of Washington's Human Brain Project. Specifically, we present solutions to four integration tasks: definition of complex mappings from relational sources to XML, distributed XQuery processing, generation of heterogeneous output formats, and the integration of heterogeneous data visualization and analysis tools.

1 Introduction

Modern biomedical data have an increasingly complex and heterogeneous nature, and are generated by collaborative yet distributed environments. For both technical and sociological reasons these complex data will often be stored, not in centralized repositories, but in distributed information systems implemented under a variety of data models. Similarly, as the data become more complex, the tools to analyze and visualize them also become more complex, making it difficult for individual users to install and maintain them.

The problem we are addressing is how to build a uniform Web interface that (a) gives users integrated access to distributed data sources, (b) allows users to formulate complex queries over the data without necessarily being competent in a query language, (c) allows access to existing visualization tools which do not need to be installed on the local workstation, and (d) allows control of existing data analysis tools, both for data generation and for processing of query results.

Our specific application is the integration of data sources containing multi-modality and heterogeneous data describing language organization in the brain, known as the University of Washington's Human Brain Project [7]. The Web front end is targeted towards sophisticated and demanding users (neuroscience researchers). We examine in this paper the components that are needed to perform such a data integration task, and give a critical assessment of the available XML tools for doing that.

We have identified a few data management problems that need to be addressed in order to achieve integration: complex mappings from relational sources to XML, distributed XQuery processing, graphical XQuery interfaces, generation of heterogeneous output formats, and integration of data visualization and analysis tools. The main contribution in this paper is to describe our framework for achieving the integration of data, queries, visualization, and analysis tools. Specifically, we make the following contributions:

Complex Relational-to-XML Mapping  In our experience, writing complex mappings from relational data to XML data was one of the most labor intensive tasks. We propose a simple extension to XQuery that greatly simplifies the task of writing complex mappings.

Distributed XQuery  We identify several weaknesses of the mediator model for data integration, and propose an alternative, based on a distributed query language. It consists of a simple extension to XQuery, called XQueryD [28], which allows users to distribute computations across sites.
Heterogeneous Output Formats  Users want to map the data into a variety of formats, either for direct visualization, or in order to upload to other data processing tools (spreadsheets, statistical analysis tools, etc.). We describe a simple interface to achieve that.

Heterogeneous Visualization Tools  We propose an approach for integrating multiple data visualization tools, allowing their outputs to be incorporated into query answers.

2 Application Description

The driving application for this work is the University of Washington Integrated Brain Project, the goal of which is to develop methods for managing, sharing, integrating and visualizing complex, heterogeneous and multi-modality data about the human brain, in the hope of gaining a greater understanding of brain function than could be achieved with a single modality alone [7]. This project is part of the national Human Brain Project (HBP) [21], whose long-term goal is to develop interlinked information systems to manage the exploding amount of data that is being accumulated in neuroscience research.

Within the UW HBP the primary data are acquired in order to understand language organization in the brain. Because each type of data is complex, we have developed and are continuously developing independent tools for managing each type, with the belief that each such tool will be useful to other researchers with similar types of data, and with the aim of integrating the separate tools, along with external tools, in a web-based data integration system that relies on XML as the medium of data exchange.

The web-based integration system we are developing is called XBrain [35, 36]. The components that this system seeks to integrate include data sources, visualization tools and analysis tools.

2.1 Data Sources

We describe here three of the data sources in XBrain, which illustrate data stored in three different data models: relational, ontology, and XML.

A relational database: CSM (Cortical Stimulation Mapping)  This is a patient-oriented relational database stored in MySQL, which records data obtained at the time of neurosurgery for epilepsy. The data primarily represent the cortical locations of language processing in the brain, detected by noting errors made by the patient during electrical stimulation of those areas. The database also contains the file locations of image volumes, 3-D brain models, and other data needed in order to reconstruct a model of the brain from MRI images, and to use that model as a basis for calculating the locations of the language sites.
Data are entered by means of a web-based application [20], but only minimal browse-like queries were supported by the legacy application. The database has 36 tables containing 103 patients, and is 8 MB in size.

An ontology: FMA (Foundational Model of Anatomy)  This ontology is the product of a separate, major research project conducted over more than ten years at the University of Washington [30]. The FMA is a large semantic network containing over 70,000 concepts representing most of the structures in the body, and 1.2 million relationships, such as part-of, is-a, etc. The FMA relates to the CSM database through the names of anatomical brain regions where the stimulation sites are located: e.g. FMA could be used to find neighboring, contained, or containing regions of specific stimulation sites in CSM. The FMA ontology is stored and managed in Protege, which is a general purpose ontology managing system, and does not support a query language. A separate project [27] built a query interface to FMA, called OQAFMA, which supports queries written in StruQL [15], a query language specifically designed for graphs.

An XML file: IM (Image Manager)  As part of an anatomy teaching project we have developed a tool for organizing teaching images [5]. Each image has associated with it one or more annotation sets, consisting of one or more annotations. An annotation consists of a closed polygon specified by a sequence of image coordinates on the image and an anatomical name describing that region. As in the CSM database, the names are taken from the FMA. In the original project the data are stored in a relational database. For the purpose of integrating it in XBrain we converted it into a single XML document, both because it is infrequently updated and because it has a natural recursive structure. To query it, we use the Galax [13] XQuery interpreter.
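To make the description concrete, the fragment below sketches what a single image entry in the IM document might look like. The element names image, oid, annotation_set, image_annotation and name are taken from the queries over this document shown in Section 4; the polygon representation, the point attributes, and the identifier value are assumptions made for illustration only, not the actual IM schema.

  <image>
    <oid>img-00042</oid>                    <!-- hypothetical identifier -->
    <annotation_set>
      <image_annotation>
        <!-- anatomical name taken from the FMA -->
        <name>middle part of the superior temporal gyrus</name>
        <!-- assumed encoding of the closed polygon drawn on the image -->
        <polygon>
          <point x="112" y="87"/>
          <point x="131" y="95"/>
          <point x="118" y="120"/>
        </polygon>
      </image_annotation>
    </annotation_set>
  </image>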
2.2 Visualization Tools

Many tools for Web-based visualization and interaction with both 2-D and 3-D images have been developed, in our lab and elsewhere. For example, we have developed interactive tools for 2-D images [6], and tools for 3-D visualization of CSM and other functional language data mapped onto a 3-D model of a patient or population brain [26]. These tools are being integrated as part of the XBrain project. While each tool is designed to display a single image at a time, in XBrain we allow users to integrate images generated by several visualization tools with the data returned by queries.

2.3 Analysis Tools

Finally, users are sophisticated, and they generally develop or use various analysis tools to generate the data that are entered into the various data sources, or to further process the results of a query. An example tool for the UW HBP is the Visualization Brain Mapper (VBM) [19], which accepts a specification file generated from the CSM database, then creates a mapping of stimulation sites onto a generated 3-D model. A second example is our X-Batch program [18], which provides a plugin to a popular functional image analysis program while transparently writing to a backend database. These tools are being integrated into XBrain, by having queries generate appropriate data formats for them.

2.4 The Problem

The UW HBP data sources and tools illustrate the increasingly complex and heterogeneous nature of modern biomedical data, as well as the increasingly collaborative yet distributed environment in which they are generated. These complex data are stored, not in a centralized repository, but in distributed information systems implemented under a variety of data models. The tools to analyze and visualize them are also quite complex, making it difficult for individual users to install and maintain them.

Thus, the problem we are addressing is how to build a uniform Web interface that (a) gives users integrated access to distributed data sources, (b) allows users to formulate complex queries over the data without necessarily being competent in a query language, (c) allows access to existing visualization tools which do not necessarily need to be installed on the local workstation, and (d) allows control of existing data analysis tools, both for data generation and for processing of query results. XBrain is our proposed framework for addressing these problems.

3 The XBrain Integration Architecture

Our architecture is shown in Fig. 1. All sources store data in their native format and have to map the data to XML when exported to the query processor. Most mapped sources accept XQuery over their data, with one exception: OQAFMA accepts StruQL queries, because the rich structure of the ontology describing all of human anatomy requires a richer language than XQuery for recursive path traversals in the ontology graph. The data from all sources are integrated by a module supporting a distributed extension of the XQuery language (XQueryD). This module sends queries to the local sources and integrates the resulting XML data fragments. The resulting XML query answer can be presented to the user in one of multiple formats: as a plain XML file, as a CSV (Comma Separated Values) file, or as a nested HTML file. In the latter case, the image anchors embedded in the XML file are interpreted by calling the appropriate image generation Web services, and the resulting HTML pages together with complex images are presented to the user. The user inputs queries expressed in XQueryD through a JSP page. We describe the specific data management tasks next.

Fig. 1. The XBrain Integration Architecture.

4 Specific Data Management Tasks

4.1 Mappings to XML

We mapped the relational CSM database to XML using SilkRoute [11, 35]. To define the map, one needs to write an XQuery program that maps the entire relational database to a virtual XML document, called the public view. Users query this view using XQuery, which SilkRoute translates to SQL, then converts the answers back to XML. For example, the user query below finds the names of all structures, over all patients, in which a CSM error of type 2 (semantic paraphasia) occurred at least once in one patient:

  <results>
    { for $trial in PublicView("Scrubbed.pv")/patient/surgery/csmstudy/trial
      where $trial/trialcode/term/abbrev/text() = "2"
      return $trial/stimsite/name() }
  </results>

Here PublicView indicates the file containing the public view definition (a large XQuery). SilkRoute translates the query automatically into a complex SQL statement:

  SELECT ... FROM trial, csm, ..., stimsite
  WHERE term.type = 'CSM error code' AND abbrev = '2' AND ...
Writing the public view was a major task. For XBrain, it had 583 lines of XQuery code, which was repetitive, boring to write, and error prone. We needed a more efficient tool to write such mappings, in order to easily extend XBrain to other sources. For that, we developed a simple extension to XQuery that allows the easy specification of complex mappings from relational data to XML.

RXQuery: A Language for Mapping Relations to XML  Our new language allows users to concisely specify complex mappings from relational databases to XML such that (1) default mappings are done automatically, by using the relational database schema, and (2) the user can override the defaults and has the full power of XQuery. The grammar is shown in Fig. 2. It extends the XQuery syntax with seven new productions, by adding "table expressions", TblExpr, to the types of expressions in the language. The RXQuery preprocessor takes as input a relational database schema and an RXQuery expression and generates an XQuery expression that represents a public view.

  ExprSingle   ::= TblExpr | ... (* all expressions in XQuery remain here *)
  TblExpr      ::= NameClause WhereClause? OmitClause? RenameClause? ReturnClause?
  NameClause   ::= "table" TblName ("as" <NCName>)?
  OmitClause   ::= "omit" ColName ("," ColName)*
  RenameClause ::= "rename" ColName as <NCName> ("," ColName as <NCName>)*
  ReturnClause ::= "return" EnclosedExpr
  TblName      ::= <NCName>
  ColName      ::= <NCName>
  FunctionCall ::= <QName "("> (ExprSingle ("," ExprSingle)*)? ")" ("limit" IntegerLiteral)?

  Fig. 2. The Grammar for RXQuery

Example 1. We illustrate RXQuery with three examples, shown in Fig. 3. In all of them we use a simple relational database schema consisting of the two relations below, which are a tiny, highly simplified fragment of CSM:

  patient(pid, name, dob, address)
  surgery(sid, pid, date, surgeon)

Consider Q1 in Fig. 3 and its translation to XQuery. By default, every column in the relational schema is mapped into an XML element with the same name. Here CanonicalView() is a SilkRoute function that represents the canonical view of the relational database.

Query Q2 illustrates the omit and the rename keywords that omit and/or rename some of these attributes, and the where and the return clauses that allow the user to restrict which rows in the table are to be exported in XML and to add more subelements to each row. In Q2, the name column is omitted, the dob column is exported as the @date-of-birth attribute, rather than the default dob element, and an additional element age is computed for each row.

RXQuery is especially powerful when specifying complex, nested public views, which is the typical case in practice. Q3 is a very simple illustration of this power.

  Q1 (RXQuery):
    table patient

  Q1 (translation to XQuery):
    for $Patient in CanonicalView()/patient
    return
      <patient>
        <pid>{$Patient/pid/text()}</pid>
        <name>{$Patient/name/text()}</name>
        <dob>{$Patient/dob/text()}</dob>
        <address>{$Patient/address/text()}</address>
      </patient>

  Q2 (RXQuery):
    table patient
    omit name
    rename dob as @date-of-birth
    where $Patient/dob/text() < 1950
    return <age>{2005 - $Patient/dob/text()}</age>

  Q2 (translation to XQuery):
    for $Patient in CanonicalView()/patient
    where $Patient/dob/text() < 1950
    return
      <patient date-of-birth="{$Patient/dob/text()}">
        <pid>{$Patient/pid/text()}</pid>
        <address>{$Patient/address/text()}</address>
        <age>{2005 - $Patient/dob/text()}</age>
      </patient>

  Q3 (RXQuery):
    table patient
    return
      table surgery omit pid
      where $Patient/pid/text() = $Surgery/pid/text()

  Q3 (translation to XQuery):
    for $Patient in CanonicalView()/patient
    return
      <patient>
        <pid>{$Patient/pid/text()}</pid>
        <name>{$Patient/name/text()}</name>
        <dob>{$Patient/dob/text()}</dob>
        <address>{$Patient/address/text()}</address>
        { for $Surgery in CanonicalView()/surgery
          where $Patient/pid/text() = $Surgery/pid/text()
          return
            <surgery>
              <sid>{$Surgery/sid/text()}</sid>
              <date>{$Surgery/date/text()}</date>
              <surgeon>{$Surgery/surgeon/text()}</surgeon>
            </surgery> }
      </patient>

  Fig. 3. Examples of RXQuery and their translations to XQuery.

Here, the nested subquery is a simple table expression, which is expanded automatically by the preprocessor into a complex subquery.

In addition to the features illustrated in the example, RXQuery includes functions, which we have found to be important in specifying complex mappings, since parts of the relational database need to be included several times in the XML document. The limit n clause (see Fig. 2) represents a limit on the recursion depth, when the function is recursive: this allows us some limited form of recursive XML views over relational data (SilkRoute does not support recursive XML structures).
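Fig. 2 only specifies the call side of functions (FunctionCall, with its optional limit clause); how a function is declared is not shown. Purely to illustrate the intent, the sketch below assumes an XQuery-style declare function syntax and a hypothetical self-referencing table brainregion(rid, name, parent_rid); the declaration syntax, the table, and the exact placement of the limit clause are our assumptions, not part of RXQuery as published.

  declare function RegionTree($parent)
  {
    (: map every brainregion row whose parent_rid matches $parent :)
    table brainregion
    where $Brainregion/parent_rid/text() = $parent
    return {
      (: recursive call; "limit 3" bounds the unfolding depth :)
      RegionTree($Brainregion/rid/text()) limit 3
    }
  };

Under this reading, the preprocessor would unfold the recursive call three levels deep and emit an ordinary, non-recursive XQuery public view, which is consistent with the remark that SilkRoute does not support recursive XML structures.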
One measure of effectiveness of RXQuery is its conciseness, since this is correlated to the readability and maintainability of the public views. Fig. 4 reports the number of lines for two public views: for CSM and for the original version of IM (which is in a relational database). The CSM public view became about 5 times smaller, shrinking from 583 lines in XQuery to 125 lines in RXQuery. The IM public view shrank from an original XQuery with 1383 lines of code to an RXQuery expression with only 151 lines. In both examples, the XQuery public view generated automatically by the RXQuery preprocessor was only slightly larger than the original manual public view.

  PV for CSM:
  Public View              Lines   Words   Chars
  XQuery (manual)            583    1605   28582
  RXQuery (w/o functions)    141     352    4753
  RXQuery (w/ functions)     125     303    4159
  XQuery (generated)         634    1633   34979

  PV for IM:
  Public View              Lines   Words   Chars
  XQuery (manual)           1383    3419   64393
  RXQuery (w/o functions)    381    1178   14987
  RXQuery (w/ functions)     151     427    5603
  XQuery (generated)        1427    3575   66105

  Fig. 4. Two examples of large public views defined in RXQuery: on the CSM database and on the original relational version of the IM (Image Manager) database. The tables show the original, manual definition of the public view in XQuery, the definition in RXQuery without functions, the same with functions, and the resulting, automatically generated XQuery.

Mapping Other Data Sources to XML  While most data sources can be mapped to XML in a meaningful way, sometimes this is not possible. In such cases we decided to keep the original data model, rather than massaging it into an artificial XML structure. The Foundational Model of Anatomy (FMA) is a rich ontology, which is best represented as a graph, not a tree. We kept its query interface, OQAFMA, which uses StruQL as a query language and allows users to express complex recursive navigation over the ontology graph. For example, the StruQL query below returns all anatomical parts that contain the middle part of the superior temporal gyrus:

  WHERE Y -> ":NAME" -> "Middle part of superior temporal gyrus",
        X -> "part"* -> Y,
        X -> ":NAME" -> Parent
  CREATE Concept(Parent);

The query computes a transitive closure of the part relationship. While the query data model is best kept as a graph, the query answers can easily be mapped back into XML. In our example, the answer returned by OQAFMA is:

  <results>
    <Concept><Ancestor>Neocortex</Ancestor></Concept>
    <Concept><Ancestor>Telencephalon</Ancestor></Concept>
    ...
  </results>

Finally, native XML data are queried directly using XQuery. In our case, the Image Manager data (IM) is stored in XML and queried using Galax [13]. The following example finds all images annotated by the middle part of the superior temporal gyrus:

  for $image in
      document("image_db.xml")//image
  where $image/annotation_set/image_annotation/name/text() =
        "middle part of the superior temporal gyrus"
  return <image>{$image/oid}</image>

4.2 Distributed XQuery Processing

The standard approach to data integration is based on a mediator, an architecture proposed by Gio Wiederhold [38]. With this approach, a single mediator schema is first described over all sources, and all local sources are mapped into the mediated schema.

  ExprSingle ::= "execute at" <URL> ["xquery" {ExprSingle} | "foreign" {String}]
                 ("handle" <VAR>:<NAMESPACE> <EXPR>)*

  Fig. 5. Grammar for XQueryD

We found this approach too heavy duty for our purpose, for three reasons. First, mediators are best suited in cases when the same concept appears in several sources, and the mediated concept is the set union of the instances of that concept at the sources. For example, BioMediator, a mediator-based data integration project for genetic data [32], integrates several sources that have many overlapping concepts: e.g. most sources have a gene class, and the mediator defines a global gene class which is the logical union of those at the local sources; similarly, most sources have a protein concept, which the mediator also unions. By contrast, in XBrain the concepts at the sources are largely disjoint, and the mediated schema would trivially consist of all local schemas taken together, making the mediator almost superfluous.

The second reason is that mediator-based systems require a unique data model for all sources, in order to be able to perform fully automatic query translation. They also hide the schema details at the sources from the user, allowing inexperienced users to access large numbers of data sources. None of these applies to XBrain: some sources (like FMA) are best kept in their native data model, which is not XML, and our sophisticated users are quite comfortable with the details of the source schemas.

Finally, despite fifteen years of research, there are currently no widely available, robust tools for building mediator systems.

Our approach in XBrain is different, and is based on a distributed evaluation of XQuery. All local sources are fully exposed to the users, who formulate XQuery expressions over them.

XQueryD: A Distributed XQuery Language  The goal is to allow users to query multiple sources in one query. While this can already be done in XQuery, it supports only the data shipping model (through the document() function): it fetches all data sources to a single server, then runs the query there. This is a major limitation for many applications, especially when some data sources are very large, or when a data source is only a virtual XML view over some other logical data model. For example, our CSM data source is not a real XML document, but a virtual view over a relational database. If we materialized it, the 8 MB relational database becomes a 30 MB XML document; clearly, it is very inefficient to fetch the entire data with document(). We propose a simple extension to XQuery that allows query shipping to be expressed in the language, in addition to data shipping. The language consists of a single new construct added to ExprSingle, and is shown in Fig. 5.

Example 2. We illustrate XQueryD with one single example. The query in Fig. 6 integrates three sources: the Image database (XML), the CSM database (relational), and the OQAFMA database (ontology). The query starts at the image database, and iterates over all images. The execute at command instructs the query processor to send the subsequent query to a remote site for execution: there are two remote queries in this example. The query returns all images that are annotated by an anatomical name that is part of the anatomical region surrounding any language site with error type 2.
  for $image in document("image_db.xml")//image
  let $region_name :=
        execute at "/axis/csm.jws" xquery {
          for $trial in PublicView("Scrubbed.pv")/patient/surgery/csmstudy/trial
          where $trial/trialcode/term/abbrev/text() = "2"
          return $trial/stimsite/name()
        },
      $surrounding_regions :=
        for $term in $region_name
        return <term>{
          (execute at "/oqafma" foreign {
             WHERE Y -> ":NAME" -> "$term",
                   X -> ("part")* -> Y,
                   X -> ":NAME" -> Ancestor
             CREATE Concept(Ancestor);
          })/results/Concept/text()
        }</term>
  where $image/annotation_set/image_annotation/name/text() = $surrounding_regions/text()
  return $image/oid/text()

  Fig. 6. Example of a query in XQueryD

We initially implemented XQueryD by modifying Galax to accept the new constructs, which was a significant development effort. Once new versions of Galax were released, we found it difficult to keep up with Galax's code evolution. We are currently considering implementing a translator from XQueryD to XQuery with Web service calls that implement the execute statements. For the future, we argue for the need of a standard language extension of XQuery to support distributed query processing in query shipping mode.

Discussion  There is a tradeoff between the mediator-based approach and the distributed query approach. In XQueryD users need to know the sources' schemas, but can formulate arbitrarily complex queries as long as these are supported by the local source. Adding a new source has almost no cost. A mediator-based system presents the user with a logically coherent mediated schema, sparing him the specific details at each source; it can potentially scale to a large number of sources. On the other hand, users can only ask the limited forms of queries supported by the mediator, typically conjunctive queries (i.e. without aggregates or subqueries), and the cost of adding a new source is high.

4.3 Graphical Query Interface

In order to allow easy access to the integrated sources and to all data processing tools, XBrain needs to allow users to formulate complex queries over the data without necessarily being competent in a query language. Our current approach is to provide the user with (a) a free form for typing XQueryD expressions, (b) a number of predefined XQueryD expressions, which can be modified by the users in free form, and (c) a simple interface that allows users to save query expressions and later retrieve and modify them. This part of the system will be extended in the future with elements of graphical query languages; there is a rich literature on graphical query interfaces, e.g. QBE [40] and XQBE [4].

4.4 Heterogeneous Output Formats

The output of XQueryD is a single XML document describing the results of integrating data from the different data sources. While such a document may be useful for analysis programs or XML-savvy users, it is not the most intuitive. Thus, we allow users to choose alternative output formats, including both common formats such as HTML or CSV (comma separated values, for input to Excel), and formats that are specific to an application. In our approach, the system generates automatically an XSLT program for each XQueryD query and for each desired output format. The XSLT program is simply run on the query's answer and generates the desired output format. We currently support CSV, HTML, and some proprietary formats for image generation tools.
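The generated XSLT programs themselves are not shown here. As a rough illustration of the kind of flattening such a program performs, the XQuery below rewrites a hypothetical two-level answer (a results element containing patient elements with pid and name children) into CSV rows; the answer shape, element names, and file name are assumptions made for this example only.

  string-join(
    ( "pid,name",                                      (: header row :)
      for $p in doc("answer.xml")/results/patient
      return string-join(($p/pid/text(), $p/name/text()), ",") ),
    "&#10;"                                            (: newline between rows :)
  )

An equivalent XSLT stylesheet produces the same rows; the point is only that the transformation is mechanical once the element hierarchy and the occurrence counts of its children are known.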
To generate the XSLT program, the system needs to know the structure of the XML output, i.e. the element hierarchy and the number of occurrences of each child, which can be *, 1, or ? (0 or 1). In an early version we computed this structure by static analysis on the query (type inference), but we found that code brittle and hard to maintain, and are currently extracting the structure from the XML output: the tiny performance penalty is worth the added robustness. Figure 7(a) shows four possible output formats for the same output data: in XML format, in CSV format, in HTML format, and as an image.

4.5 Integration of Heterogeneous Visualization Tools

Common output transformations, such as HTML or CSV as noted in the previous section, can be part of a generic integrated application that could be applied to many different problems. However, each biomedical application will have its own special output requirements that may best be addressed by independent visualization tools. Our approach to this problem is to create independent tools that can run as web services callable by the XBrain application. We have experimented with such services for a 2-D image visualization tool that accepts XML output from the CSM database and generates an image showing the locations on a 2-D sketch of the brain where specific types of language processing occur.

Such an approach may also be useful for 3-D visualization of query results, using a server-based version of our BrainJ3D visualization tool [26]. Interactive visualization and analysis of the results, which might include new query formation, will require alternative approaches, such as a Web Start application that can create an XQuery for the integrated query system.
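As a concrete illustration of how such a web service can be folded into query output, the XQuery fragment below wraps the CSM query from Section 4.1 in an HTML page whose img elements point at a hypothetical 2-D visualization service; the service URL and its site parameter are invented for the example and are not the actual XBrain endpoints.

  <html><body>{
    (: pair each stimulation-site name with an image anchor that a
       2-D visualization web service can resolve into a rendered brain sketch :)
    for $trial in PublicView("Scrubbed.pv")/patient/surgery/csmstudy/trial
    where $trial/trialcode/term/abbrev/text() = "2"
    return
      <div>
        <p>{ $trial/stimsite/name() }</p>
        <img src="http://example.org/vis2d?site={ $trial/stimsite/name() }"/>
      </div>
  }</body></html>

In the actual system the image anchors embedded in the XML answer are interpreted by calling the appropriate image generation web services, as described in Section 3; the fragment above merely mimics that pattern.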