Atlas DAQ Implementation of the Information Service
Quality management systems – Guidelines for performance improvements

1 Scope
This International Standard provides guidelines beyond the requirements given in ISO 9001 in order to consider both the effectiveness and efficiency of a quality management system, and consequently the potential for improvement of the performance of an organization. When compared to ISO 9001, the objectives of customer satisfaction and product quality are extended to include the satisfaction of interested parties and the performance of the organization.
This International Standard is applicable to the processes of the organization, and consequently the quality management principles on which it is based can be deployed throughout the organization. The focus of this International Standard is the achievement of ongoing improvement, measured through the satisfaction of customers and other interested parties.
This International Standard consists of guidance and recommendations and is not intended for certification, regulatory or contractual use, nor as a guide to the implementation of ISO 9001.

2 Normative reference
The following normative document contains provisions which, through reference in this text, constitute provisions of this International Standard. For dated references, subsequent amendments to, or revisions of, any of these publications do not apply. However, parties to agreements based on this International Standard are encouraged to investigate the possibility of applying the most recent edition of the normative document indicated below. For undated references, the latest edition of the normative document referred to applies. Members of ISO and IEC maintain registers of currently valid International Standards.
ISO 9000:2000, Quality management systems – Fundamentals and vocabulary.

3 Terms and definitions
For the purposes of this International Standard, the terms and definitions given in ISO 9000 apply. The following terms, used in this edition of ISO 9004 to describe the supply chain, have been changed to reflect the vocabulary currently used:
supplier – organization – customer (interested parties)
Throughout the text of this International Standard, wherever the term "product" occurs, it can also mean "service".

4 Quality management system
4.1 Managing systems and processes
Leading and operating an organization successfully requires managing it in a systematic and visible manner. Success should result from implementing and maintaining a management system that is designed to continually improve the effectiveness and efficiency of the organization's performance by considering the needs of interested parties.
Managing an organization includes quality management, among other management disciplines. Top management should establish a customer-oriented organization
a) by defining systems and processes that can be clearly understood, managed and improved in effectiveness as well as efficiency, and
b) by ensuring effective and efficient operation and control of processes and the measures and data used to determine satisfactory performance of the organization.
Examples of activities to establish a customer-oriented organization include
- defining and promoting processes that lead to improved organizational performance,
- acquiring and using process data and information on a continuing basis,
- directing progress towards continual improvement, and
- using suitable methods to evaluate process improvement, such as self-assessments and management review.
Examples of self-assessment and continual improvement processes are given in annexes A and B.

4.2 Documentation
Management should define the documentation, including the relevant records, needed to establish, implement and maintain the quality management system and to support an effective and efficient operation of the organization's processes.
The nature and extent of the documentation should satisfy the contractual, statutory and regulatory requirements, and the needs and expectations of customers and other interested parties, and should be appropriate to the organization. Documentation may be in any form or medium suitable for the needs of the organization.
In order to provide documentation to satisfy the needs and expectations of interested parties, management should consider
- contractual requirements from the customer and other interested parties,
- acceptance of international, national, regional and industry sector standards,
- relevant statutory and regulatory requirements,
- decisions by the organization,
- sources of external information relevant for the development of the organization's competencies, and
- information about the needs and expectations of interested parties.
The generation, use and control of documentation should be evaluated with respect to the effectiveness and efficiency of the organization against criteria such as
- functionality (such as speed of processing),
- user friendliness,
- resources needed,
- policies and objectives,
- current and future requirements related to managing knowledge,
- benchmarking of documentation systems, and
- interfaces used by the organization's customers, suppliers and other interested parties.
Access to documentation should be ensured for people in the organization and to other interested parties, based on the organization's communication policy.

4.3 Use of quality management principles
To lead and operate an organization successfully, it is necessary to manage it in a systematic and visible manner. The guidance to management offered in this International Standard is based on eight quality management principles. These principles have been developed for use by top management in order to lead the organization toward improved performance. These quality management principles are integrated in the contents of this International Standard and are listed below.
a) Customer focus
Organizations depend on their customers and therefore should understand current and future customer needs, should meet customer requirements and strive to exceed customer expectations.
b) Leadership
Leaders establish unity of purpose and direction of the organization.
They should create and maintain the internal environment in which people can become fully involved in achieving the organization's objectives.
c) Involvement of people
People at all levels are the essence of an organization and their full involvement enables their abilities to be used for the organization's benefit.
d) Process approach
A desired result is achieved more efficiently when activities and related resources are managed as a process.
e) System approach to management
Identifying, understanding and managing interrelated processes as a system contributes to the organization's effectiveness and efficiency in achieving its objectives.
f) Continual improvement
Continual improvement of the organization's overall performance should be a permanent objective of the organization.
g) Factual approach to decision making
Effective decisions are based on the analysis of data and information.
h) Mutually beneficial supplier relationships
An organization and its suppliers are interdependent and a mutually beneficial relationship enhances the ability of both to create value.
Successful use of the eight management principles by an organization will result in benefits to interested parties, such as improved monetary returns, the creation of value and increased stability.

5 Management responsibility
5.1 General guidance
5.1.1 Introduction
Leadership, commitment and the active involvement of top management are essential for developing and maintaining an effective and efficient quality management system to achieve benefits for interested parties. To achieve these benefits, it is necessary to establish, sustain and increase customer satisfaction. Top management should consider actions such as
- establishing a vision, policies and strategic objectives consistent with the purpose of the organization,
- leading the organization by example, in order to develop trust within its people,
- communicating organizational direction and values regarding quality and the quality management system,
- participating in improvement projects, searching for new methods, solutions and products,
- obtaining feedback directly on the effectiveness and efficiency of the quality management system,
- identifying the product realization processes that provide added value to the organization,
- creating an environment that encourages the involvement and development of people, and
- provision of the structure and resources that are necessary to support the organization's strategic plans.
Top management should also define methods for measurement of the organization's performance in order to determine whether planned objectives have been achieved. Methods include
- financial measurement,
- measurement of process performance throughout the organization,
- external measurement, such as benchmarking and third-party evaluation,
- assessment of the satisfaction of customers, people in the organization and other interested parties,
- assessment of the perceptions of customers and other interested parties of the performance of products provided, and
- measurement of other success factors identified by management.
Information derived from such measurements and assessments should also be considered as input to management review in order to ensure that continual improvement of the quality management system is the driver for performance improvement of the organization.

5.1.2 Issues to be considered
When developing, implementing and managing the organization's quality management system, management should consider the quality management principles outlined in 4.3. On the basis of these
principles, top management should demonstrate leadership in, and commitment to, the following activities:
- understanding current and future customer needs and expectations, in addition to requirements;
- promoting policies and objectives to increase awareness, motivation and involvement of people in the organization;
- establishing continual improvement as an objective for processes of the organization;
- planning for the future of the organization and managing change;
- setting and communicating a framework for achieving the satisfaction of interested parties.
In addition to small-step or ongoing continual improvement, top management should also consider breakthrough changes to processes as a way to improve the organization's performance. During such changes, management should take steps to ensure that the resources and communication needed to maintain the functions of the quality management system are provided.
Top management should identify the organization's product realization processes, as these are directly related to the success of the organization. Top management should also identify those support processes that affect either the effectiveness and efficiency of the realization processes or the needs and expectations of interested parties.
Management should ensure that processes operate as an effective and efficient network. Management should analyse and optimize the interaction of processes, including both realization processes and support processes. Consideration should be given to
- ensuring that the sequence and interaction of processes are designed to achieve the desired results effectively and efficiently,
- ensuring process inputs, activities and outputs are clearly defined and controlled,
- monitoring inputs and outputs to verify that individual processes are linked and operate effectively and efficiently,
- identifying and managing risks, and exploiting performance improvement opportunities,
- conducting data analysis to facilitate continual improvement of processes,
- identifying process owners and giving them full responsibility and authority,
- managing each process to achieve the process objectives, and
- the needs and expectations of interested parties.

5.2 Needs and expectations of interested parties
5.2.1 General
Every organization has interested parties, each party having needs and expectations.
Interested parties of organizations include
- customers and end-users,
- people in the organization,
- owners/investors (such as shareholders, individuals or groups, including the public sector, that have a specific interest in the organization),
- suppliers and partners, and
- society in terms of the community and the public affected by the organization or its products.

5.2.2 Needs and expectations
The success of the organization depends on understanding and satisfying the current and future needs and expectations of present and potential customers and end-users, as well as understanding and considering those of other interested parties. In order to understand and meet the needs and expectations of interested parties, an organization should
- identify its interested parties and maintain a balanced response to their needs and expectations,
- translate identified needs and expectations into requirements,
- communicate the requirements throughout the organization, and
- focus on process improvement to ensure value for the identified interested parties.
To satisfy customer and end-user needs and expectations, the management of an organization should
- understand the needs and expectations of its customers, including those of potential customers,
- determine key product characteristics for its customers and end-users,
- identify and assess competition in its market, and
- identify market opportunities, weaknesses and future competitive advantage.
Examples of customer and end-user needs and expectations, as related to the organization's products, include
- conformity,
- dependability,
- availability,
- delivery,
- post-realization activities,
- price and life-cycle costs,
- product safety,
- product liability, and
- environmental impact.
The organization should identify its people's needs and expectations for recognition, work satisfaction, and personal development. Such attention helps to ensure that the involvement and motivation of people are as strong as possible.
The organization should define financial and other results that satisfy the identified needs and expectations of owners and investors.
Management should consider the potential benefits of establishing partnerships with suppliers to the organization, in order to create value for both parties. A partnership should be based on a joint strategy, sharing knowledge as well as gains and losses. When establishing partnerships, an organization should
- identify key suppliers, and other organizations, as potential partners,
- jointly establish a clear understanding of customers' needs and expectations,
- jointly establish a clear understanding of the partners' needs and expectations, and
- set goals to secure opportunities for continuing partnerships.
In considering its relationships with society, the organization should
- demonstrate responsibility for health and safety,
- consider environmental impact, including conservation of energy and natural resources,
- identify applicable statutory and regulatory requirements, and
- identify the current and potential impacts on society in general, and the local community in particular, of its products, processes and activities.

5.2.3 Statutory and regulatory requirements
Management should ensure that the organization has knowledge of the statutory and regulatory requirements that apply to its products, processes and activities, and should include such requirements as part of the quality management system.
Consideration should also be given to
- the promotion of ethical, effective and efficient compliance with current and prospective requirements,
- the benefits to interested parties from exceeding compliance, and
- the role of the organization in the protection of community interests.

5.3 Quality policy
Top management should use the quality policy as a means of leading the organization toward improvement of its performance. An organization's quality policy should be an equal and consistent part of the organization's overall policies and strategy. In establishing the quality policy, top management should consider
- the level and type of future improvement needed for the organization to be successful,
- the expected or desired degree of customer satisfaction,
- the development of people in the organization,
- the needs and expectations of other interested parties,
- the resources needed to go beyond ISO 9001 requirements, and
- the potential contributions of suppliers and partners.
The quality policy can be used for improvement provided that
- it is consistent with top management's vision and strategy for the organization's future,
- it permits quality objectives to be understood and pursued throughout the organization,
- it demonstrates top management's commitment to quality and the provision of adequate resources for achievement of objectives,
- it aids in promoting a commitment to quality throughout the organization, with clear leadership by top management,
- it includes continual improvement as related to satisfaction of the needs and expectations of customers and other interested parties, and
- it is effectively formulated and efficiently communicated.
As with other business policies, the quality policy should be periodically reviewed.

5.4 Planning
5.4.1 Quality objectives
The organization's strategic planning and the quality policy provide a framework for the setting of quality objectives. These objectives should be capable of being measured in order to facilitate an effective and efficient review by management. When establishing these objectives, management should also consider
- current and future needs of the organization and the markets served,
- relevant findings from management reviews,
- current product and process performance,
- levels of satisfaction of interested parties,
- self-assessment results,
- benchmarking, competitor analysis, opportunities for improvement, and
- resources needed to meet the objectives.
The quality objectives should be communicated in such a way that people in the organization can contribute to their achievement. Responsibility for deployment of quality objectives should be defined. Objectives should be systematically reviewed and revised as necessary.

5.4.2 Quality planning
Management should take responsibility for the quality planning of the organization.
This planning should focus on defining the processes needed to meet effectively and efficiently the organization's quality objectives and requirements consistent with the strategy of the organization. Inputs for effective and efficient planning include
- strategies of the organization,
- defined organizational objectives,
- defined needs and expectations of the customers and other interested parties,
- evaluation of statutory and regulatory requirements,
- evaluation of performance data of the products,
- evaluation of performance data of processes,
- lessons learned from previous experience,
- indicated opportunities for improvement, and
- related risk assessment and mitigation data.
Outputs of quality planning for the organization should define the product realization and support processes needed, in terms such as
- skills and knowledge needed by the organization,
- responsibility and authority for implementation of process improvement plans,
- resources needed, such as financial and infrastructure,
- metrics for evaluating the achievement of the organization's performance improvement,
- needs for improvement including methods and tools, and
- needs for documentation, including records.
Management should systematically review the outputs to ensure the effectiveness and efficiency of the processes of the organization.

5.5 Responsibility, authority and communication
5.5.1 Responsibility and authority
Top management should define and then communicate the responsibility and authority in order to implement and maintain an effective and efficient quality management system. People throughout the organization should be given responsibilities and authority to enable them to contribute to the achievement of the quality objectives and to establish their involvement, motivation and commitment.

5.5.2 Management representative
A management representative should be appointed and given authority by top management to manage, monitor, evaluate and coordinate the quality management system. This appointment is to enhance effective and efficient operation and improvement of the quality management system. The representative should report to top management and communicate with customers and other interested parties on matters pertaining to the quality management system.

5.5.3 Internal communication
The management of the organization should define and implement an effective and efficient process for communicating the quality policy, requirements, objectives and accomplishments. Providing such information can aid in the organization's performance improvement and directly involves its people in the achievement of quality objectives. Management should actively encourage feedback and communication from people in the organization as a means of involving them. Activities for communicating include, for example
- management-led communication in work areas,
- team briefings and other meetings, such as for recognition of achievement,
- notice-boards, in-house journals/magazines,
- audio-visual and electronic media, such as email and websites, and
- employee surveys and suggestion schemes.

5.6 Management review
5.6.1 General
Top management should develop the management review activity beyond verification of the effectiveness and efficiency of the quality management system into a process that extends to the whole organization, and which also evaluates the efficiency of the system.
Management reviews should be platforms for the exchange of new ideas, with open discussion and evaluation of the inputs being stimulated by the leadership of top management. To add value to the organization from management review, top management should control the performance of realization and support processes by systematic review based on the quality management principles. The frequency of review should be determined by the needs of the organization. Inputs to the review process should result in outputs that extend beyond the effectiveness and efficiency of the quality management system. Outputs from reviews should provide data for use in planning for performance improvement of the organization.

5.6.2 Review input
Inputs to evaluate efficiency as well as effectiveness of the quality management system should consider the customer and other interested parties and should include
- status and results of quality objectives and improvement activities,
- status of management review action items,
- results of audits and self-assessment of the organization,
- feedback on the satisfaction of interested parties, perhaps even to the point of their participation,
- market-related factors such as technology, research and development, and competitor performance,
- results from benchmarking activities,
- performance of suppliers,
- new opportunities for improvement,
- control of process and product nonconformities,
- marketplace evaluation and strategies,
- status of strategic partnership activities,
- financial effects of quality-related activities, and
- other factors which may impact the organization, such as financial, social or environmental conditions, and relevant statutory and regulatory changes.

5.6.3 Review output
By extending management review beyond verification of the quality management system, the outputs of management review can be used by top management as inputs to improvement processes. Top management can use this review process as a powerful tool in the identification of opportunities for performance improvement of the organization. The schedule of reviews should facilitate the timely provision of data in the context of strategic planning for the organization. Selected output should be communicated to demonstrate to the people in the organization how the management review process leads to new objectives that will benefit the organization. Additional outputs to enhance efficiency include, for example
- performance objectives for products and processes,
- performance improvement objectives for the organization,
- appraisal of the suitability of the organization's structure and resources,
- strategies and initiatives for marketing, products, and satisfaction of customers and other interested parties,
- loss prevention and mitigation plans for identified risks, and
- information for strategic planning for future needs of the organization.
Records should be sufficient to provide for traceability and to facilitate evaluation of the management review process itself, in order to ensure its continued effectiveness and added value to the organization.

6 Resource management
6.1 General guidance
6.1.1 Introduction
Top management should ensure that the resources essential to the implementation of strategy and the achievement of the organization's objectives are identified and made available. This should include resources for operation and improvement of the quality management system, and the satisfaction of customers and other interested parties.
Resources may be people, infrastructure, work environment, information, suppliers and partners, natural resources and financial resources.

6.1.2 Issues to be considered
Consideration should be given to resources to improve the performance of the organization, such as
- effective, efficient and timely provision of resources in relation to opportunities and constraints,
- tangible resources such as improved realization and support facilities,
- intangible resources such as intellectual property,
- resources and mechanisms to encourage innovative continual improvement,
- organization structures, including project and matrix management needs,
- information management and technology,
- enhancement of competence via focused training, education and learning,
- development of leadership skills and profiles for the future managers of the organization,
- use of natural resources and the impact of resources on the environment, and
- planning for future resource needs.

6.2 People
6.2.1 Involvement of people
Management should improve both the effectiveness and efficiency of the organization, including the quality management system, through the involvement and support of people. As an aid to achieving its performance improvement objectives, the organization should encourage the involvement and development of its people
- by providing ongoing training and career planning,
- by defining their responsibilities and authorities,
- by establishing individual and team objectives, managing process performance and evaluating results,
- by facilitating involvement in objective setting and decision making,
- by recognizing and rewarding,
- by facilitating the open, two-way communication of information,
- by continually reviewing the needs of its people,
- by creating conditions to encourage innovation,
- by ensuring effective teamwork,
- by communicating suggestions and opinions,
- by using measurements of its people's satisfaction, and
- by investigating the reasons why people join and leave the organization.
Oracle Enterprise Manager 13c Cloud Control
Oracle SOA Management Pack Enterprise Edition
MANAGEMENT FOR ORACLE SOA SUITE AND ORACLE SERVICE BUS APPLICATIONS
Oracle Enterprise Manager is Oracle's integrated enterprise IT management product line, and provides the industry's first complete cloud lifecycle management solution. Oracle Enterprise Manager's Business-Driven IT Management capabilities allow you to quickly set up, manage and support enterprise clouds and traditional Oracle IT environments from applications to disk. Enterprise Manager allows customers to achieve the best service levels for traditional and cloud applications through management from a business perspective, including for Oracle Fusion Applications; to obtain maximum return on IT management investment through the best solutions for intelligent management of the Oracle stack and engineered systems; and to gain an unmatched customer support experience through real-time integration of Oracle's knowledge base with each customer environment.
FEATURES
∙Track and monitor end-to-end business transactions across tiers
∙Monitor the performance of Oracle SOA Suite and Service Bus
∙Integrated web service testing and synthetic transaction monitoring
∙Integrated authoring, attachment, and monitoring of security policies
∙Collection and analysis of SOA configuration information
∙Automated provisioning of Oracle SOA Suite and Service Bus
∙Seamless lift and shift of SOA domains and composites to Oracle Cloud
Fusion Middleware Management
Oracle Enterprise Manager's Fusion Middleware Management solutions provide full-lifecycle management for Oracle WebLogic, SOA Suite, Coherence, Identity Management, WebCenter, and Oracle Business Intelligence Enterprise Edition. Oracle Enterprise Manager provides a single console to manage these assets from a business and service perspective, including user experience management, change and configuration management, patching, provisioning, testing, performance management, business transaction management and automatic tuning for these diverse environments.
SOA Management
Understanding complex service dependencies, monitoring consumer expectations, and controlling service ownership costs are the biggest barriers to effectively managing service-oriented architecture (SOA) applications and infrastructure. To overcome these challenges, administrators need solutions that increase service visibility and production assurance while lowering the cost and complexity of managing SOA environments. Oracle SOA Management Pack Enterprise Edition provides runtime governance as well as comprehensive service and infrastructure management functionality to help organizations maximize the return on investment from their SOA initiatives.
Automate SOA Service and Transaction Management
Oracle SOA Management Pack Enterprise Edition provides administrators with a consolidated browser-based view of the entire SOA environment, enabling them to monitor and manage all their components from a central location. This streamlines the correlation of availability and performance problems for all components across the SOA environment. Oracle Enterprise Manager integrates with the Oracle Fusion Middleware Control, Oracle Service Bus console, and Oracle Business Activity Monitoring.
KEY BENEFITS
∙Provides visibility into complex SOA orchestrations across the enterprise
∙Minimizes the cost of setting up and maintaining performance monitoring
∙Reduces the effort associated with manual application deployment
∙Dramatically improves the ability to keep up with environmental changes
∙Significantly lowers the total cost of ownership for SOA
∙Significantly reduces the time required to move SOA and OSB assets to Oracle Cloud
∙Single pane of glass to monitor and manage assets across clouds
With a rich set of service and system-level dashboards, administrators can view service levels for key web services, SOA composites, Oracle Service Bus proxy and business services, as well as SOA infrastructure components.
Figure 1. Service Bus and SOA Composites Heat Map
Oracle Enterprise Manager allows you to manage your Oracle SOA Suite applications leveraging a model-driven "top-down" approach within your development, quality assurance (QA), staging, and production environments. Business application owners and operational staff can automatically discover and correlate your SOA composites, components, services, and back-end Java EE implementations through detailed modeling and drill-down directly into the performance metrics at the component level. Business transactions and service dependencies can be automatically discovered and the message flows mapped. Details about individual and aggregate transaction execution can be searched for and displayed.
By providing and maintaining the business context while traversing your organization's application infrastructure, your developers and operational staff can leverage Oracle Enterprise Manager to meet the high availability and top performance criteria necessary to maximize business results.
Figure 2. SOA composite instance search and trace
Oracle Enterprise Manager enables your application development and support teams to:
∙Continuously discover components, transaction flows, service dependencies and relationships
∙Monitor business transactions as they flow across tiers
∙Manage Oracle SOA Suite applications with minimal manual effort, regardless of application-specific knowledge or programming expertise
∙View aggregated dependencies or drill down to method-level interactions
∙Monitor endpoint performance with both synthetic service tests and deep component implementation visibility
∙Search and analyze single-instance transaction performance, with built-in report generation for the slowest running or faulted instances
∙Link to related diagnostic and database metrics, taking advantage of SOA Suite-specific knowledge
∙Quickly isolate and diagnose the root cause of SOA application performance problems in QA, staging, and production environments
Figure 3. SOA composite instance search and trace
∙Quickly view SOA dehydration database performance, including the dehydration store growth rate, table space, wait bottlenecks, top SOA SQLs, and more
Configuration Management
Configuration information for the Oracle SOA Suite, Oracle Service Bus, and BPEL processes is collected and stored in the Oracle Enterprise Manager repository.
With this information, administrators can:
∙View the historic configuration changes across the SOA Suite and Oracle Service Bus environment
∙Baseline a working configuration by saving it in the repository
∙Compare SOA Suite and Oracle Service Bus server and domain configuration parameters with other servers and domains
Lifecycle Management
Oracle SOA Management Pack Enterprise Edition allows administrators to automate SOA Suite patching, deployment, and server provisioning, as well as Service Bus deployment and server provisioning.
SOA administrators can automate patching of SOA infrastructure spread across multiple machines in parallel. Patch plans can be created that comprise multiple patches, and patch conflicts can be proactively detected by running a patch plan analysis before actually applying the patch plan to the entire SOA infrastructure setup. Rollbacks can be automated in the same way as patching.
Administrators can deploy multiple SOA composites and Service Bus projects to servers using the deployment procedure framework. A multistep interview process lets users choose the source files for the process, project or resource, the target domain, and credentials, then schedule a future deployment using the job system. This enables administrators to:
∙Clone directly from test to production
∙Clone from a fully tested gold image stored in the software library
∙Provision a new composite, or a new version of an existing composite, to existing SOA infrastructure
∙Specify the composite from the software library or file system
∙Optionally specify a configuration plan
Administrators can provision new Service Bus and SOA Suite domains based on Middleware Provisioning Profiles in the Software Library. The provisioning process allows configuration parameters to be set on the domains being provisioned.
Historical Analysis and Reporting
In addition to real-time monitoring of metrics for SOA infrastructure targets, Oracle Enterprise Manager stores the collected metric and configuration data in a central repository, enabling administrators to analyze metrics through various historical views (such as Last 24 Hours, Last 7 Days, and Last 31 Days) to facilitate strategic trend analysis and reporting. Customizable service and system dashboard functionality allows users to create reports on various services and systems for service level availability, usage, performance, and business indicators.
Figure 4. SOA Composite export, SOA Diagnostics and IWS Report Snapshots
Users also have the option to generate and view IWS (Integration Workload Statistics) reports from within Enterprise Manager. IWS reports list transaction data for all composites in detail.
The Ideal Choice
SOA delivers agility to an enterprise; however, if not properly managed, it may increase management complexity and cost. Oracle Enterprise Manager makes it easy for IT administrators to effectively manage SOA complexity by providing runtime governance in conjunction with business and IT alignment. Offering service level management, triage, and root cause analysis at all SOA application levels, Oracle Enterprise Manager is an ideal choice for maximizing consistent SOA application performance and creating a superior ownership experience.
CONTACT US
For more information about SOA Management Pack, visit or call +1.800.ORACLE1 to speak to an Oracle representative.
Copyright © 2016, Oracle and/or its affiliates. All rights reserved.
This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license.
1. Managing Endings: A Checklist
Yes No
____ ____ Have I studied the change carefully and identified who is likely to lose what—including what I myself am likely to lose?
____ ____ Do I understand the subjective realities of these losses to the people who experience them, even when they seem like overreaction to me?
____ ____ Have I acknowledged these losses with sympathy?
____ ____ Have I permitted people to grieve and publicly expressed my own sense of loss?
____ ____ Have I found ways to compensate people for their losses?
____ ____ Am I giving people accurate information and doing it again and again?
____ ____ Have I defined clearly what is over and what isn't?
____ ____ Have I found ways to "mark the ending"?
____ ____ Am I being careful not to denigrate the past but, when possible, to find ways to honor it?
____ ____ Have I made a plan for giving people a piece of the past to take with them?
____ ____ Have I made it clear how the ending we are making is necessary to protect the continuity of the organization or conditions on which the organization depends?
____ ____ Is the ending we are making big enough to get the job done in one step?
Final Questions:
What actions can you take to help people deal more successfully with the endings that are taking place in your organization? What can you do today to get started on this aspect of transition management?
Cost Management
Overview/Benefits
ABC & ABM
Activity-based management is a useful but sometimes overlooked cost management tool that allows companies to determine not only accurate costs but also the costs of alternative actions. While more traditional cost management utilizes activity-based costing, which answers the question "What do things cost?", activity-based management (ABM) answers the question "What causes cost to occur?" Utilizing this technique allows a company to determine the costs of alternative actions such as rearranging equipment to produce things more quickly (time-based competition), purchasing more expensive but higher quality materials to reduce lost sales to disappointed customers (total quality management), or comparing different ways of reengineering a business process. U.S. organizations have been pioneers in developing ABM, in part because the diversity in products, services, and people is much greater in this country.
Direct Services Provided
Arthur Andersen's direct cost management services include the following:
☐ Profitability analysis - Products, Customers, Channels
☐ Benchmarking of cost management practices
☐ Economic value analysis
☐ Value chain analysis
☐ Cost reduction using activity-based management
☐ Process cost analysis
☐ Target costing/Profit planning
☐ Transfer pricing analysis
☐ Implementing standard cost systems
☐ Activity-based budgeting
Indirect Services Provided
Cost management often works in tandem with other management initiatives, integrating with the other services offered by Business Consulting. Indirectly, cost management supports the following:
☐ reengineering of the finance function
☐ development of performance measurement systems
☐ reengineering using activity-based analysis
☐ implementation of improvement programs (TQM, JIT)
☐ value engineering
Entry Point/Target Buyer
☐ CFO; Controller
☐ Manufacturing
☐ Telecom
☐ Utilities
☐ Services
☐ Healthcare
☐ Insurance
☐ Grocery (ECR)
Engagement Size
While some pilot jobs are performed for less, the typical engagement size for an entry-level cost management engagement is between $100,000 - $300,000. However, a full-scale implementation generally starts at $400,000.
Keys to Identifying Opportunities for Cost Reduction
☐ High inventories:
∙Large central stockroom
∙Interactive WIP orders on floor
∙Turnover lower than industry average
∙WIP turnover exceeds process time
∙Piece rate incentive pay
∙Productivity measured on per machine/per worker basis
☐ Long production lead times:
∙Functional plant layout, no product focus
∙Long distance between work stations
∙Large manufacturing lot sizes
∙High scrap rate
∙Planning lead time exceeds process time
∙Bottlenecks at inspection department
∙Missed customer delivery dates
☐ High overhead rates:
∙Several forklifts
∙Employees walking around, searching for equipment
∙Significant handling, storage and inspection costs
Activity-Based Cost Management
Overview & Benefits
Activity-based cost management (ABCM) is the overall practice of activity-based management (ABM), activity-based costing (ABC), and activity-based budgeting (ABB). The uses of ABCM information are numerous. Companies are using ABCM to reengineer operations, improve benchmarking, increase revenues, simplify the budget process, and establish performance measures. Whether implemented alone or as part of a larger performance management system, ABCM provides companies with better information to manage their business.
Service Line Questions
1. What is the business objective (or problem) being addressed?
2. Are current product/service costs accurate?
3. What factors contribute most to costs?
4. What processes should be improved? Why? How?
5. Which products/services are most profitable?
6. Do you know which customers are profitable and which are not?
7. Which distribution channels are most profitable?
8. What departments' costs appear to be out of line? Why?
9. Who are the project stakeholders/customers?
10. What are their expectations? What is their definition of success?
11. What obstacles to change exist? How can they be overcome?
What Information Do Activity-Based Systems Provide?
- Activity costs
- Business process costs
- Visibility into the root causes of costs
- Highlighting of, and assistance in focusing on, important costs
- Reliable cost information about products and services
Activity-Based Management
Overview & Benefits
Activity-based management (ABM) is the broad discipline that focuses on achieving customer value and company profit through the management of activities. It draws on activity-based costing (ABC) as a major source of information. When we refer to ABC, we are usually describing the use of activity analysis to improve the costing process. It is equally applicable to determining customer costs, channel costs, etc. ABC focuses on determining "what things cost." A minimal worked example of this allocation logic follows the benefits lists below.
Benefits typically derived from ABC include:
☐ more accurate product costs;
☐ determination of the costs of services;
☐ determination of customer costs;
☐ identification of market or distribution channel costs;
☐ determination of project costs;
☐ determination of contract costs;
☐ determination of what products, customers, or channels to emphasize;
☐ tracking of direct mail catalog profitability;
☐ support for measurement of economic value analysis;
☐ support for contract negotiations;
☐ support for increasing revenue by helping customers understand their cost reductions through use of your products and services;
☐ support for target costing;
☐ support for benchmarking; and
☐ determination of shared services charge-out amounts.
The broader use of activity-based approaches inherent in ABM revolves around using activity-based information to manage operations. ABM focuses more on "how to change and improve your costs."
Benefits typically derived from ABM include:
☐ identification of redundant costs;
☐ analysis of value-added and non-value-added costs;
☐ quantification of the cost of quality by element;
☐ identification of customer-focused activities;
☐ analysis of the cost of complexity;
☐ identification of process costs and support of process analysis;
☐ measurement of the impact of reengineering efforts;
☐ better understanding of cost drivers;
☐ evaluation of manufacturing flexibility investments; and
☐ activity-based budgeting.
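To make the ABC mechanics concrete, the sketch below allocates overhead to products in proportion to their consumption of activity drivers. It is a generic illustration with invented activity names, driver volumes, and product data, not figures from any engagement or a prescribed tool.

```python
# Minimal activity-based costing sketch (hypothetical data).
# Each activity pool's cost is spread over its total driver volume,
# then charged to products according to how much of the driver they consume.

activity_pools = {                 # activity -> (pool cost, driver name)
    "machine setup":   (40_000, "setups"),
    "quality testing": (25_000, "inspections"),
    "order handling":  (15_000, "orders"),
}

# Driver consumption per product (hypothetical).
products = {
    "Product A": {"setups": 20, "inspections": 300, "orders": 50,  "units": 10_000},
    "Product B": {"setups": 80, "inspections": 200, "orders": 150, "units": 2_000},
}

def activity_rates(pools, prods):
    """Cost per unit of driver = pool cost / total driver volume."""
    totals = {driver: sum(p[driver] for p in prods.values())
              for _, driver in pools.values()}
    return {name: cost / totals[driver] for name, (cost, driver) in pools.items()}

def overhead_per_unit(pools, prods):
    """Allocate each pool to products by driver usage, then divide by units."""
    rates = activity_rates(pools, prods)
    result = {}
    for prod, usage in prods.items():
        overhead = sum(rates[name] * usage[driver]
                       for name, (_, driver) in pools.items())
        result[prod] = overhead / usage["units"]
    return result

if __name__ == "__main__":
    for prod, cost in overhead_per_unit(activity_pools, products).items():
        print(f"{prod}: allocated overhead of about ${cost:.2f} per unit")
```

With these made-up numbers the low-volume Product B absorbs roughly ten times the overhead per unit of Product A, because its setup and order-handling consumption is visible; a single volume-based rate would hide exactly that effect, which is the point of the "what causes cost to occur?" question above.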
New Client Background Investigations in the U.S.
(While these specific procedures are required in the United States, other countries should complete the Level 1 procedures which follow.)
The purpose of a new client background investigation is to strengthen our screening of prospective clients that are new to the Firm, or of an existing client that has had significant changes in principal investors or senior management. Investigations of the background of prospective new clients should include an investigation of the reputation and character of principal investors and key management personnel and an analysis of the financial history and current financial position of the relevant organization.
To complete our analysis, two levels of investigation may be performed:
∙Level 1 - Limited investigations consisting of procedures performed by the prospective engagement partner at the local office level
∙Level 2 - Full investigations consisting of online database research conducted by the Risk Management Support (RMS) group in Chicago, in addition to the procedures performed by the local office
Background Investigation Requirements
A Level 1 investigation is required for all new clients to the Firm or if there have been significant changes in principal investors or senior management of the existing client.
A Level 2 investigation must be completed if:
∙We have concerns about the integrity, ethical or professional characteristics of the client's management team (Consideration 1d in Risk SMART)
∙The potential engagement is the result of a criminal act or suspected criminal act (Consideration 1e in Risk SMART)
∙We have concerns about the prospective client's financial stability (Consideration 3b in Risk SMART)
∙There has been high or unusual turnover within the prospective client's organization (Consideration 3c in Risk SMART)
∙The results of a Level 1 investigation are insufficient to make an acceptance/rejection decision
Level 1 - Limited Investigation Procedures
1. Verify the company's existence as a legal entity. Use publications such as Standard & Poor's Register of Corporations, Wards Business Directory, or Dun & Bradstreet's Million Dollar Directory. In some cases, you may need to reference state incorporation and/or licensing records.
2. Identify the company's principals. This may be done simultaneously with the step above.
3. Discuss the prospective client with local office partners and Firmwide industry heads who may provide additional industry contacts.
4. Determine if sufficient information has been gathered to make a client acceptance decision. If not, expand the investigation to include a Level 2 investigation.
Level 2 - Full Investigation Procedures
1. Complete the steps identified in the Level 1 Investigation Procedures.
2. Complete the "New Client Background Investigation Request Form" (Form NCI-1), which is available through the Electronic Forms and Schedules application on the Audit Reference Resource Disc, by sending a Lotus Notes message to the RMS Group (address the message to "RMS Group"), or by calling (312) 931-2469.
3. Submit the form to the RMS group in Chicago.
4. The RMS group is responsible for completing the full investigation. A written report summarizing the results of its investigation will be returned to the submitting partner. RMS has a three-day target for turnaround of all investigation requests.
5. Review the results of the investigation.
If necessary, discuss the results with a Firmwide industry head or the BC Regional Quality Leader to reach an acceptance/rejection decision.
6. If the results of the Level 2 investigation were insufficient to make a decision, extended procedures should be initiated. Consult with the RMS Group to determine the options available to best address the specific situation.
Project Completion Memo
To: The Files
From:
Date: April 10, 1995
Subject: Company ABC Business Process Simulation
Client Name: Company ABC
Client Code: ABC123
Job Number: 25
Main Client Contact Person:
AA Project Team Members:
ABC Project Team Members:
Project Overview:
During this project, we trained key personnel in process mapping techniques, attended software training, and identified issues and modifications. AA created the process maps and also began to perform business process simulations in the inventory control areas. However, ABC felt that we could be more valuable if we helped identify potential modifications and issues. Our final deliverable consisted of issues, modifications, current and proposed process flows, and sample BPS scripts and user procedures.
Project Duration: November - February, 1995
Ending Fee Status:
Gross Fees Accumulated: $200,000
Net Fees Collected: 160,000
PFA: 30,000
UFA: 10,000
Major Issues Encountered
Organizational/Procedural:
ABC was concerned that we were spending too much time learning the software. We had to spend time learning the software because the software training was too high level, and in order to identify modifications we had to become familiar with the software.
Software/Hardware Related:
∙Although we were informed that all of the necessary hardware/software was ordered and shipped at the appropriate time, we learned that a large amount of the hardware was not ordered/ready. This caused a large delay in the project.
Lessons Learned:
∙Listen to the client! The client wanted us to help identify issues and keep them headed in the right direction. They wanted their personnel, not AA, to become the experts in the application software they were implementing. There was a perception among the AA project team that we needed to do more configuring of the software, getting more down into the guts of it, to give them their money's worth, rather than just identifying issues. We placed our own judgment on what would be valuable to them, rather than truly listening to them. About halfway through the project, we finally had a meeting of the minds and began to more clearly understand what the client wanted.
∙For BPS projects we should verify that the project team, both AA and client personnel, will receive adequate training on the software prior to the Conference Room Pilot portion of the BPS.
∙We should verify that all of the required hardware/software is obtained and set up and working prior to the kick-off of the BPS project. We wasted a lot of time waiting for them to get hardware/software installed even to the point where we could use the system from a couple of terminals.
∙Document all statements made by the client regarding the timing of tasks outside of AA control (client or vendor items). This will help pinpoint project slippage not due to AA personnel.
What Worked:
∙Because we used an Access database to track open/pending/closed issues, we were able to easily track all of the issues identified and addressed throughout the project. At the end of our project, this ended up being our major deliverable.
Open Items/Follow-on Work:
∙ABC is currently undertaking the remainder of the project themselves. There may possibly be follow-on work if they cannot satisfactorily complete the necessary steps to accomplish an implementation.
DataWorks
SOFTWARE FOR THE MANAGEMENT OF PROJECT DATA
DataWorks is software of the Product Data Management (PDM) type: it manages product data, increases planning productivity, and allows management and control of, and access to, the data relevant to design processes, planning and production. Thanks to this safe and controlled activity, DataWorks allows the company to efficiently issue high quality products onto the market. This method interacts with the whole life cycle of a product and gives the opportunity to have access to the right information in each phase of its development.
High productivity
DataWorks provides a connection between the development activities of the product and those that support its construction. The functionality of DataWorks revolves around the management of project data (revisions, control of access, management of documentation), allowing the definition of flows for the management of revisions, issue and process variations of product data, all this thanks to the greater definition and the better management of correlated files. With DataWorks it is easy to answer questions such as
•Where are the files of the manual of a product?
•When and by whom has a detail been modified?
•Have the assemblies that contain a modified detail been updated?
•How many flanges are contained in that group?
•Where is the support that I have to change used?
DataWorks is the productive and efficient answer to all these questions: it exploits and rationalizes the wealth of information of the company, making it organized and rapidly usable, and speeding up the development cycles thanks to parallel industrialization processes; moreover, it allows strict control of data and ensures its automatic distribution.
Information: the wealth of the company
With DataWorks, information is administered by just one relational database; this guarantees its integrity and rapid availability. Anyone can work simultaneously on the data, always finding it up to date. At the same time, access to the files correlated with the data takes place under the control of DataWorks, which establishes the form of use, the possibility of overwriting and the simultaneous access by more than one authorized user. In this way documents are immediately usable within the work team, optimizing time and drastically diminishing the risk of loss of important information. Management of group use, through administration tools and the centralization of archives, introduces new levels of information security, guaranteeing controlled access to the latest revisions of company project data.
Different applications
DataWorks is an open environment that allows management of information originating from different applications. It is possible to organize and have easy access to CAD data, word processing files, part programs, data sheets and any other documentation data of a project. This product is integrated with:
•Autocad®
•SolidWorks®
•CoCreate ME10®
•Microsoft Office®
•Acrobat Reader®
Structure of DataWorks
DataWorks has at its disposal a series of functions for the management of technical data: three-dimensional models, drawings, documents and information. Technical data created by the user is saved in a relational database, while files connected to a company part number are inserted in one or more file system directories (storage areas) controlled by the application.
The functionality can be summed up as:
•Definition and creation of product families
•Dynamic association of specific attributes for each product family
•Management of technical part list data
•Management of coding
•Recording of documents and management of revisions
•Management of B.O.M. using a multilevel icon editor
•Queries on part numbers, documents, B.O.M. structure and "where used" of part numbers
•Control of access to data
•Release of a whole B.O.M. or single part numbers
•Possibility of creating procedures aimed at the automation of routine work, for example automatic coding
Data flow
The first step towards making the information necessary for the management of a product available to DataWorks is the creation of a part number. During the coding phase it is possible to use a code search to identify the first one available. When there is an existing part number in the DataWorks database it is possible to:
•use the B.O.M. editor to create a B.O.M. that puts together the various part number codes
•visualize the correlation between codes, make eventual changes and save these variations in the database
•connect to the part numbers created a series of documents (three-dimensional model, technical manual, spreadsheet, Part Program), so that all the technical information concerning them can be associated with them.
DataWorksWeb
In the Web version, DataWorks allows the consultation, in real time, of data produced by the technical department from remote sites connected to the server via the Internet. This makes possible all database interrogation activities: searches, "where used" and Bill of Material (B.O.M.) structure queries, and local printing of documents connected to part numbers. A generic sketch of such a "where used" query is given at the end of this section. DataWorksWeb is an indispensable tool for those who have production units or offices distant from their central offices.
ERP
The ERP procedure of DataWorks allows the connection between different company sectors by means of automatic data transfer, guaranteeing continuous, safe updating of the data.
System Requirements
Client
•Microsoft Windows XP Professional x32 or x64 Edition
•Windows 7 Professional x32 or x64 Edition
Server
•Microsoft Windows Server 2008 R2
Database
Relational databases containing the information are based on DBMS applications. Supported databases:
•Microsoft SQL Server 2008 R2
•Oracle
These offer the company extreme flexibility of choice as a function of the available resources, the presence of other application packages, and the nature and dimensions of the database to be managed.
23870 Cernusco Lombardone (LC) Italy
tel.: +39 039 99 09 703
fax: +39 039 99 05 125
E-mail: *****************
www.aebtechno.it
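The B.O.M.-structure and "where used" queries described above are, at their core, traversals of a parent/child part-number relation. The sketch below illustrates the idea against a throwaway in-memory SQLite schema; the table and column names are invented for illustration and are not the actual DataWorks data model.

```python
# Illustrative BOM "where used" query over a hypothetical parent/child table.
# Not the DataWorks schema: table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bom (
        parent_pn TEXT NOT NULL,   -- assembly part number
        child_pn  TEXT NOT NULL,   -- component part number
        quantity  INTEGER NOT NULL
    );
    INSERT INTO bom VALUES
        ('ASSY-100', 'FLANGE-10', 4),
        ('ASSY-100', 'BOLT-M8',   8),
        ('ASSY-200', 'FLANGE-10', 2),
        ('TOP-1',    'ASSY-100',  1),
        ('TOP-1',    'ASSY-200',  1);
""")

def where_used(pn):
    """All assemblies, at any level, that directly or indirectly contain pn."""
    rows = conn.execute("""
        WITH RECURSIVE used(parent) AS (
            SELECT parent_pn FROM bom WHERE child_pn = ?
            UNION
            SELECT b.parent_pn FROM bom b JOIN used u ON b.child_pn = u.parent
        )
        SELECT parent FROM used
    """, (pn,))
    return [r[0] for r in rows]

print(where_used("FLANGE-10"))   # e.g. ['ASSY-100', 'ASSY-200', 'TOP-1']
```

The same parent/child relation answers the "How many flanges are contained in that group?" question by summing quantities along each path, which is why a single relational model can serve both the B.O.M. editor and the remote web queries.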
ATLAS Internal Note
DAQ-No-87
April 14 1998

ATLAS DAQ Back-end software
High-Level Design

Issue: Draft
Revision: 1
Reference: ATLAS-DAQ-BE-HD
Created: 14 April 1998
Last modified: 14 April 1998
Prepared by: ATLAS DAQ Back-end Software Group

Abstract
A summary of the high-level design for the ATLAS [1] Data Acquisition System (DAQ) back-end software is presented. This is the basis for the detailed design and implementation of the back-end software, which covers the needs of the ATLAS DAQ prototype named "-1". The software described in this document is the work of the ATLAS back-end DAQ sub-group, including the following people:
CERN: D. Burckhart, M. Caprini (on leave from Institute of Atomic Physics, Bucharest), R. Jones, L. Mapelli, A. Patel (now with Alcatel, Brussels), I. Soloviev (on leave from PNPI, St Petersburg), T. Wildish
CPPM, IN2P3-CNRS, Marseille: L. Cohen, P.Y. Duval, R. Nacasch, Z. Qian
Institute of Atomic Physics, Bucharest: E. Badescu, A. Radu
JINR, Dubna: I. Alexandrov, V. Kotov, K. Rybaltchenko
University of Geneva, Geneva: Lorenzo Moneta
LIP, Lisbon: A. Amorim, H. Wolters
NIKHEF, Amsterdam: H. Boterenbrood, W. Heubers, R. Hart
University of Sheffield, Sheffield: S. Wheeler
PNPI, Gatchina, St. Petersburg: S. Kolos, Y. Ryabov

Contents
Overview
  Operational environment
  Back-End DAQ Software Components
    The software component model
    Core components: Run control, Configuration database, Message reporting system, Process manager, Information service
    Trigger/DAQ and detector integration components: Partition and resource manager, Status display, Run bookkeeper, Event dump, Test manager, Diagnostics package
Software Technologies
  Introduction
  General purpose programming toolkit
  Persistent data storage: Light-weight in-memory object manager, Objectivity/DB database system
  Inter-process communication
  Dynamic object behaviour
  Graphical user interfaces: Java, MVC and X-Designer
Back-end DAQ software components
  Introduction
  Run control: Run Control architecture, Generic controller state chart, Error recovery, The DAQ supervisor, Elements of the run-control component
  Configuration database: Configurations, partitions and authorization control; Hardware configuration database skeleton
  Message reporting system
  Process manager: The Client, The Agent, The DynamicDB
  Information service: Relationship between the Message Reporting System and Information Service, Information Service architecture, Multiple servers architecture using ILU
  Trigger/DAQ and detector integration components: Partition and resource manager, Relationship between the Process Manager and the Resource Manager, Status display, Event dump, Run bookkeeper, Test manager, Databases, Diagnostics package
References

1 Overview
This document is a summary of the high-level design of the ATLAS DAQ back-end software. The design is intended to satisfy the requirements defined in the User Requirements Document [2]. The intended audience is project reviewers and developers of the back-end software. It is the product of the ATLAS DAQ back-end software group and is drawn from the more detailed individual component design documents.
A description of the software process used for the development of this software, together with the testing plans, is also included.
The back-end software encompasses all the software to do with configuring, controlling and monitoring the DAQ, but specifically excludes the management, processing or transportation of physics data.
Figure 1 is a basic context diagram for the back-end software showing the exchanges of information with the other sub-systems. This context diagram is very general and some of the connections to the other sub-systems may not be implemented in the DAQ/Event Filter Prototype "-1" project.
Figure 1: DAQ back-end software context diagram showing exchanges with other sub-systems
Figure 2 shows on which processors the back-end software is expected to run, that is the event filter processors, the supervisor and the LDAQ processors in the detector read-out crates. Note that the back-end software is not the only software that will run on such processors. A set of operator workstations, situated after the event filter processor farm and dedicated to providing the man-machine interface and hosting many of the control functions for the DAQ system, will also run back-end software.
Figure 2: ATLAS DAQ/Event Filter Prototype "-1" architecture (processors running back-end software and operator workstations)
The back-end software is essentially the "glue" that holds the various sub-systems together. It does not contain any elements that are detector specific, as it is to be used by all possible configurations of the DAQ and detector instrumentation.
The back-end software is but one sub-system of the whole DAQ system and it must co-exist and co-operate with the other sub-systems. In particular, interfaces are required to the following sub-systems of the DAQ and external entities:
• trigger: receives trigger configuration details and can modify the configuration
• processor farm: back-end software synchronises its activities with the processor farm when handling physics data
• accelerator: beam details are retrieved from the accelerator, and feedback on the state of the beam at the detector site is provided to the machine operators
• event builder: back-end software synchronises its activities with the event builder for the handling of physics data
• Local DAQ: control and configuration information as well as synchronization
• DCS: receives status information from the detector control system and sends control commands
1.1 Operational environment
The environment in which the software is to run is partly dependent on the platforms chosen by the Data-Flow group and described in [3]. It is expected that this environment will be a heterogeneous collection of UNIX workstations, PCs running Windows NT and embedded systems (known as LDAQ processors) running various flavours of real-time UNIX operating systems (e.g. LynxOS), connected via a Local Area Network (LAN).
A great number of hardware components have to be used to provide the necessary computing power. These will be back-end and LDAQ processors loosely coupled by local networks and data flow channels. We assume that network connections (e.g. Ethernet or replacements) running the most popular protocols (e.g. TCP/IP) are available on all the target platforms for exchanging control information, and that their use will not interfere with the physics data transportation performance of the DAQ.
The ATLAS prototype DAQ system will need to be able to run using all or only a part of its sub-systems. It will be assembled in a step-by-step manner, according to financial and technical dependencies and constraints.
A high degree of modularity is needed to connect, disconnect or add new components at will.
Many groups of people will interact at the various hardware and software levels, and so we have to foresee a significant level of sub-system unavailability; this shall be detected and tolerated during DAQ system startup. Hence checking procedures shall be provided to detect such configuration problems.
The failure of an individual component shall not affect the operation of other components. Every software component shall be designed with some form of self-test capability that can be used to verify a minimum of functionality.
1.2 Back-End DAQ Software Components
1.2.1 The software component model
The user requirements gathered for the back-end sub-system have been divided into groups related to activities providing similar functionality. The groups have been further developed into components of the back-end with a well defined purpose and boundaries. The components have interfaces with other components and external systems, specific functionality and their own architecture.
From the analysis of the components, it was shown that several domains recur across all the components, including data storage, inter-object communication and graphical user interfaces.
1.2.2 Core components
The following five components are considered to be the core of the back-end subsystem. The core components constitute the essential functionality of the back-end subsystem and have been given priority in terms of time-scale for development.
1.2.2.1 Run control
The run control system controls the data taking activities by coordinating the operations of the DAQ sub-systems, back-end software components and external systems. It has user interfaces for the shift operators to control and supervise the data taking session, and software interfaces with the DAQ sub-systems and other back-end software components. Through these interfaces the run control can exchange the commands, status and information used to control the DAQ activities.
1.2.2.2 Configuration database
A data acquisition system needs a large number of parameters to describe its system architecture, hardware and software components, running modes and running status. One of the major design goals of the ATLAS DAQ is to be as flexible as possible, parameterized by the contents of databases.
1.2.2.3 Message reporting system
The aim of the Message Reporting System (MRS) is to provide a facility which allows all software components in the ATLAS DAQ system and related subsystems to report error messages to other components of the distributed DAQ system. The MRS performs the transport, filtering and routing of messages. It provides a facility for users to define unique error messages which will be used in the application programs.
1.2.2.4 Process manager
The purpose of the process manager is to perform basic job control of the software components of the DAQ. It is capable of starting, stopping and monitoring the basic status (e.g. running or exited) of software components on the DAQ workstations and LDAQ processors, independently of the underlying operating system. In this component the terms process and job are considered equivalent.
1.2.2.5 Information service
The Information Service (IS) provides an information exchange facility for software components of the DAQ.
Information (defined by the supplier) from many sources can be categorised and made available to requesting applications asynchronously or on demand.
1.2.3 Trigger/DAQ and detector integration components
Given that the core components described above exist, the following components are required to integrate the back-end with the other on-line subsystems and detectors.
1.2.3.1 Partition and resource manager
The DAQ contains many resources (both hardware and software) which cannot be shared, and so their usage must be controlled to avoid conflicts. The purpose of the Partition Manager is to formalise the allocation of DAQ resources and allow groups to work in parallel without interference.
1.2.3.2 Status display
The status display presents the status of the current data taking run to the user in terms of its main run parameters, detector configuration, trigger rate, buffer occupancy and the state of the subsystems.
1.2.3.3 Run bookkeeper
The purpose of the run bookkeeper is to archive information about the data recorded to permanent storage by the DAQ system. It records information on a per-run basis and provides a number of interfaces for retrieving and updating the information.
1.2.3.4 Event dump
The event dump is a monitoring program with a graphical user interface that samples events from the data-flow and presents them to the user in order to verify event integrity and structure.
1.2.3.5 Test manager
The purpose of the test manager is to organise the individual tests for hardware and software components. The individual tests themselves are not the responsibility of the test manager, which simply assures their execution and verifies their output. The individual tests are intended to verify the functionality of a given component. They will not be used to modify the state of a component or to retrieve status information. Tests are not optimized for speed or use of resources and are not a suitable basis for other components such as monitoring or the status display.
1.2.3.6 Diagnostics package
The diagnostics package uses the tests held in the test manager to diagnose problems with the DAQ and verify its functioning status. By grouping tests into logical sequences, the diagnostic framework can examine any single component of the system (hardware or software) at different levels of detail, in order to determine as accurately as possible the functional state of components or of the entire system. The diagnostic framework reports the state of the system at a level of abstraction appropriate to any required intervention.
2 Software Technologies
2.1 Introduction
The various components described in this document all require a mixture of facilities for data storage, inter-process communication in a LAN network of processors, graphical user interfaces, complex logic-handling and general operating system services. To avoid unnecessary duplication, the same facilities are used across all components. Such facilities must be portable (the prototype DAQ will include processors running Solaris, HP-UX and WNT). In particular they must be available on the LynxOS real-time UNIX operating system selected for use on the LDAQ processors. Candidate freeware and commercial software packages were evaluated to find the most suitable product for each technology.
2.2 General purpose programming toolkit
Rogue Wave Tools.h++ [9] is a C++ foundation class library available on many operating systems (Unix, MS Windows, WNT, OS/2) that has become an industrial standard and is distributed by a wide variety of compiler vendors.
It has proven to be a robust and portable library that can be used for DAQ programming, since it supports many useful classes and can be used in a multi-threaded environment. We have acquired the sources for the library and ported it to LynxOS.
Since this decision, the C++ language has been accepted as an international standard and the Standard Template Library (STL) has become widely available. However, for the length of the current project we will continue to use Tools.h++, since migration would involve modifying source code.
2.3 Persistent data storage
Within the context of the ATLAS DAQ/Prototype "-1" project, the need for a persistent data manager to hold configuration information was identified. The ATLAS DAQ group has evaluated various commercial and shareware data persistence systems (relational databases, object databases and object managers), but no single system satisfied all the documented user requirements.
As a consequence, it was decided to adopt a two-tier architecture, using a light-weight in-memory persistent object manager to support the real-time requirements and a full ODBMS as a back-up and for long-term data management.
2.3.1 Light-weight in-memory object manager
For the object manager, a package called OKS has been developed on top of Rogue Wave's Tools.h++ C++ class library. The OKS system is based on an object model that supports objects, classes, associations, methods, data abstraction, inheritance, polymorphism, object identifiers, composite objects, integrity constraints, schema evolution, data migration, active notification and queries. The OKS system stores database schema and data in portable ASCII files and allows different schema and data files to be merged into a single database. It includes Motif-based GUI applications to design database schema and to manipulate OKS objects (Figure 3). A translator has been developed between the OMT object model and the OKS object model implemented with StP [10].
Figure 3: The OKS Data Editor
2.3.2 Objectivity/DB database system
Objectivity/DB is a commercial object-oriented database management system introduced to CERN by the RD45 project [11]. We evaluated the basic DBMS features (schema evolution, access control, versioning, back-up/restore facilities, etc.) and the C++ programming interface. A prototype translator has been developed between the OMT object model and the Objectivity object model implemented with StP [10].
2.4 Inter-process communication
Message passing in our distributed environment is a topic of major importance, since it is the communication backbone between the many processes running on the different machines of the DAQ system. Reliability and error recovery are very strong requirements, as is the ability to work in an event driven environment (such as X11). Many components of the back-end software require a transparent means of communicating between objects independent of their location (i.e. between objects inside the same process, in different processes or on different machines).
We chose to evaluate the Object Management Group's [12] Common Object Request Broker Architecture (CORBA) standard. ILU [13] is a freeware implementation of CORBA by Xerox PARC. The object interfaces provided by ILU hide implementation distinctions between different languages, between different address spaces, and between operating system types. ILU can be used to build multi-lingual object-oriented libraries with well-specified language-independent interfaces.
It can also be used to implement distributed systems and to define and document interfaces between the modules of non-distributed programs. ILU interfaces can be specified either in the OMG's CORBA Interface Definition Language (OMG IDL) or in ILU's Interface Specification Language (ISL). We have ported ILU to LynxOS.
2.5 Dynamic object behaviour
Many applications within the ATLAS DAQ prototype have complicated dynamic behaviour which can be successfully modelled in terms of states and transitions between them. Previously, state diagrams, implemented as finite state machines, have been used which, although effective, become ungainly as system size increases. Harel statecharts address this problem by implementing additional features such as hierarchy and concurrency.
CHSM [14] is an object-oriented language system which implements Harel statecharts as Concurrent, Hierarchical, finite State Machines, supporting many statechart concepts as illustrated in the abstract example shown in Figure 4 (a):
• Hierarchy: states f and e are child-states of parent state q.
• Clusters (logical-exclusive-or state groups): states a, b and c are in cluster x. To be in x is to be in a, b or c. The transition to d is taken regardless of x's child-state.
• History: when cluster x is re-entered, the previous child-state is entered.
• Sets (logical-and state groups): cluster p and cluster q are child-states of set s. To be in set s is to be in both child-states p and q.
• Concurrency: if event α occurs while the statechart is in child-states a and f, then it will simultaneously make transitions to states e and c.
• Guard conditions: the transition from b to c only occurs if v < 4.
• Broadcasting: event ε is broadcast when the transition from f to e is made.
• Actions: function f() is executed when the transition from x to d is made; in addition, actions can be executed on entering or exiting a state (not shown).
CHSMs are described by means of a CHSM description text file, and Figure 4 (b) shows the CHSM description corresponding to the statechart shown in Figure 4 (a). The CHSM compiler converts the description to C++, which may be integrated with user defined code.
Figure 4: (a) statechart, (b) CHSM description
We have evaluated the CHSM language system and have shown it to be suitable for describing the dynamic behaviour of typical DAQ applications. For example, the prototype DAQ run control has been implemented using CHSM.
2.6 Graphical user interfaces
Modern data acquisition systems are large and complex distributed systems that require sophisticated user interfaces to monitor and control them. X11 and Motif are the dominant technologies on UNIX workstations, but the advent of WNT has forced us to reconsider this choice.
2.6.1 Java
Java is a simple object-oriented, platform-independent, multi-threaded, general-purpose programming environment. The aim of our evaluation was to understand if Java could be used to implement the status display component of the DAQ. A demonstration application was developed to investigate such topics as creating standard GUI components, client-server structure, use of native methods, specific widgets for status displays and remote objects. The performance was compared to X11 based alternatives. The demo contains three essential parts: servers, simulators and applets. Servers realise the binding with remote objects, simulators create simulation data and update remote objects (to mimic the DAQ), and applets put simulation data to remote objects or get them from remote objects and display them.
Figure 4 (b) - CHSM description corresponding to the statechart of Figure 4 (a):

set s(p,q) is {
    cluster p(x,d) is {
        cluster x(a,b,c) history {
            upon enter %{ enterx(); %}
            upon exit %{ exitx(); %}
            delta->d %{ f(); %};
        } is {
            state a { alpha, epsilon->c; beta->b; }
            state b { gamma [v<4]->c; }
            state c;
        }
        state d;
    }
    cluster q(e,f) is {
        state e { beta->f %{ epsilon(); %}; }
        state f { alpha->e; }
    }
}

The appearance of the Java implementation of the status display is shown in Figure 18. The entire application is written in Java (JDK 1.0); communication is realised by Java IDL (alpha2). The servers and simulators run on a SUN workstation (Solaris 2.5) and the applets can be loaded from any machine using a Java compatible browser. Work is continuing on the integration of the demo status display with the ILU CORBA system described above.
2.6.2 MVC and X-Designer
X-Designer [15] is an interactive tool for building graphical user interfaces (GUIs) using widgets of the standard OSF/Motif toolkit as building blocks; it has been used extensively in the RD13 project [16]. It is capable of generating C, C++ or Java code to implement the GUI. We investigated the construction of GUIs with a Model-View-Controller (MVC) architecture using the C++ code generation capabilities of the tool.
A number of widget hierarchies were created which correspond to commonly used patterns in GUIs (e.g. a data entry field with a label). These also correspond to views or controller-view pairs in the MVC architecture. By making each hierarchy a C++ class it could be added to the X-Designer palette and used in subsequent designs. A GUI for the existing RD13 Run Control Parameter database was successfully built using these definitions.
X-Designer also provides a facility called XD/Replay which can be used to record or play back any Xt based application. In record mode it writes a high-level description of the actions performed on the application into a script. On replaying the script, the recorded actions are executed on the application. This facility will be useful for automating the testing of any DAQ application with a Motif GUI and for producing automated demonstrations of applications.
Work will continue to investigate if the MVC approach and the Java code generation facilities of X-Designer can be used to develop the DAQ graphical user interfaces. X-Designer is currently being used to develop a GUI for the run control component.
3 Back-end DAQ software components
3.1 Introduction
The back-end components described use a common base for key software technologies. They also have inter-dependencies between them, as represented (in a simplified manner) in Figure 5.
Figure 5: Dependencies between back-end core components (Run Control, Message Reporting System, Information Service, Process Manager, IPC, data access libraries) and external packages (Tools.h++, ILU, Objectivity, OKS, CHSM); non-core components are not shown
This chapter examines each of the back-end components defined in the previous chapter in greater detail.
3.2 Run control
The run control system controls the data taking activities by coordinating the operation of the DAQ sub-systems, back-end software components and external systems. It has user interfaces for the shift operators to control and supervise the data taking session, and software interfaces with the DAQ sub-systems and other back-end software components. Through these interfaces the run control can exchange the commands, status and information used to control the DAQ activities.
Through the user interface it receives commands and information describing how the user wants the DAQ system to take data.
It allows DAQ users to select a DAQ system configuration, parameterize it for a run, and start and stop the data taking sub-systems.
The Run Control component operates in an environment consisting of multiple partitions that may take data simultaneously and independently. Each copy of the run control is capable of controlling one partition marshalled by the Partition Manager (see section 3.7.1).
In general the run control needs to send commands to the other DAQ sub-systems in order to control their operation, and to receive change-of-state information. The external sub-systems are autonomous and independent of the run control, so their detailed internal states remain hidden. If a sub-system changes state, the run controller reacts appropriately, for example by stopping the run if a detector is no longer able to produce data. The run control will interact with a dedicated controller for each sub-system.
3.2.1 Run Control architecture
The architecture of the run control can be seen as a hierarchy of control entities called controllers, each with responsibility for a well defined component or part of the DAQ. The controller's state is the simplified external view of the current working condition of the component or part of the DAQ under its responsibility.
Each controller can receive commands from the outside world. Commands cause a controller to execute actions which potentially change the visible state of the component. A controller can also react to local events occurring in the DAQ component under its responsibility (see Figure 6). Typically its reaction will be to execute some actions and potentially change its visible state.
Figure 6: Interactions between a controller and the component under control (commands, actions, events and status)
The controllers are organised into a hierarchical tree structure that reflects the general organisation of the DAQ system itself. Each controller in the tree can have one parent (or superior) controller and any number of child (subordinate) controllers. At the top of the tree is a single controller which represents the overall state of the entire on-line system. Below the overall, or general, controller is a set of controllers, one for each major sub-system of the DAQ and the physics detectors. Below each sub-system controller there may be further component controllers which are responsible for individual components.
The controllers in the hierarchical tree transmit messages between themselves to exchange commands and status information. In general, commands starting from the human operator are sent to the general controller, which forwards them to the sub-system controllers, who in turn forward them to component controllers and so on. In this respect commands flow from the root of the tree towards the leaves. The results of commands are sent back through the tree so that the human operator is made aware of any change in the state of the system. Any node in the control tree can perform actions on the commands it receives.
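The command flow through the controller tree can be illustrated with a minimal sketch. This is not the prototype implementation (where each controller's behaviour is a CHSM statechart and communication goes through the inter-process communication layer); it is a schematic, single-process C++ illustration with hypothetical names (Controller, handleCommand, applyCommand), and the propagation of status back up to the operator is not shown.

#include <iostream>
#include <string>
#include <vector>

// Schematic illustration of the hierarchical control tree described above.
class Controller {
public:
    explicit Controller(const std::string& name) : name_(name), state_("INITIAL") {}

    void addChild(Controller* child) { children_.push_back(child); }

    // Commands flow from the root of the tree towards the leaves:
    // each controller forwards the command to its children and then
    // updates its own (simplified) visible state.
    void handleCommand(const std::string& cmd) {
        for (std::size_t i = 0; i < children_.size(); ++i)
            children_[i]->handleCommand(cmd);
        applyCommand(cmd);
        std::cout << name_ << " -> " << state_ << std::endl;
    }

    const std::string& state() const { return state_; }

private:
    // In the prototype this logic is a generic CHSM statechart; here it is
    // reduced to a trivial command-to-state mapping.
    void applyCommand(const std::string& cmd) {
        if (cmd == "configure")      state_ = "CONFIGURED";
        else if (cmd == "start")     state_ = "RUNNING";
        else if (cmd == "stop")      state_ = "CONFIGURED";
    }

    std::string name_;
    std::string state_;
    std::vector<Controller*> children_;
};

int main() {
    Controller general("General");                 // root: overall on-line system state
    Controller trigger("Trigger"), dataflow("DataFlow");
    general.addChild(&trigger);
    general.addChild(&dataflow);
    general.handleCommand("configure");            // forwarded down the tree
    general.handleCommand("start");
    return 0;
}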
Atlas DAQ
Implementation of the Information Service

Authors: Kolos S.
Keywords: Information Service, ILU

Abstract
This note presents the description of the Information Service (IS) implementation library. The library consists of two parts: a server and a client. The server part classes are used to implement an information storage that can be accessed from any DAQ application using the client part classes. The client part classes represent an application program interface to the information storage.

Note Number: 037
Version: 1.3
Date: 15/12/98
Reference: http://atddoc.cern.ch/Atlas/Notes/037/Note037-1.html

1 Information Service architecture
The requirements and architecture of the IS are presented in the technical note [2]. This note describes the implementation of the IS which satisfies these requirements; the implementation is done using the ILU system on top of the IPC package. Figure 1 shows the general architecture of the IS. Information is stored on a server and can be inserted, updated and removed by any client application.
The IS implementation is based on the IPC package, which provides the base functionality for working with IPC partitions, CORBA servers and CORBA objects.
Figure 1: IS general architecture
The IS provides the following features for information manipulation:
1. Information is an object which has a name, a type and a value.
• The name is a string identifier which must be unique across all other objects in one partition and has the following format: "SERVER_NAME.OBJECT_NAME"
• The type is a string identifier of a particular type of information. Information objects with different structures must have different type identifiers. Information objects with the same structure should have the same type identifiers.
2. Information objects are stored on a server, which must have a unique name across all the servers in a particular partition.
3. Any client application can access any information in any server of any partition. It can:
• create information of any type
• delete information
• get the value of information
• update the value of information
• subscribe to a particular information object, to be notified when its value changes
• subscribe to several information items on a particular server, using a regular expression as the subscription argument; the client will receive notifications about value changes for those information objects whose names satisfy the regular expression
• cancel the subscriptions made by this client application
4. A server can back up all the objects and all the subscriptions to a file for checkpointing purposes.
5. A server can restore all the objects and all the subscriptions from the backup file.

2 IS implementation library structure
The IS implementation is provided in the form of a library. The IS library consists of two parts: the server implementation classes and the client interface classes. The server part contains all the necessary classes for the implementation of IS servers. Figure 2 shows the IS server classes and their relationships.
The client classes provide an API to access the IS. This API hides the details of the communication implementation and makes the usage of the system transparent and independent of the communication package.
Figure 2: IS server classes relationships
The server part consists of two classes which implement the exported functionality of the IS. The ISFactory class is intended for information management.
The ISInfoTrue class is responsible for access to the information value and type.
The IPCPartition class from the IPC package is used in both the server and client parts of the library. The IPCPartition class provides a way of splitting the information between different partitions. Figure 3 shows all the classes in the IS library and their relationships.

2.1 Server Implementation Classes
The ISFactory class implements ten remote methods:
• create_info - to create a new information object
• delete_info - to delete an information object
• get_info_list - to return the list of all existing information objects for a given server
• subscribe - to subscribe to a particular information object
• unsubscribe - to cancel a subscription to a particular information object
• query_subscribe - to subscribe to all objects matching a specific query on a particular server
• query_unsubscribe - to cancel a query subscription
• set_value - to set the value of a particular information object
• get_type_and_value - to get the value and type of a particular information object
• get_type - to get the type of a particular information object
Objects of the ISInfoTrue class are not accessible outside the server application. ISInfoTrue is a private class: it is used only by the ISFactory class and instances of this class cannot be created by the user.
A server application looks quite simple: one should create a partition object, then create the factory object and call the run method on it.

Example of a server program

#include "isinfotrue.h"

int main(int ac, char **av)
{
    IPCPartition p("1");
    ISFactory f("runcontrol", p);
    f.run();
    return 0;
}

2.2 Client Implementation Classes
The client part of the library includes the following classes:
• ISCallbackInfo - provides a way to obtain information via the subscription callback function; see the subscribe method of the ISInfoReceiver class
• ISInfoDictionary - the client side representation of the dictionary of names and values of information objects; it allows one to create or delete information and to get or set the value of information
• ISInfoIterator - allows sequential access to all names and information objects on a particular server
• ISInfoReceiver - responsible for subscription and alarm management on the client side; this class is derived from the IPCServer class and therefore has methods for server animation (see the IPCServer class description)
• ISServerIterator - allows sequential access to all servers in a particular partition
• ISInfo - allows the user to define different types of information
• ISInfoAny - allows access to the attribute values of an information object of any type
Figure 3 shows the classes of the IS library and their relationships. The complete reference for these classes is presented in the next chapter. There are some other classes which are not drawn in this diagram.
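To complement the server program shown above, the following is a minimal client-side sketch. It is not taken from the note itself: it assumes the classes documented in the next chapter (ISInfoDictionary, ISInfoInt) and the partition and server names used in the server example ("1" and "runcontrol"); the object name "runcontrol.runNumber" is a hypothetical example of the "SERVER_NAME.OBJECT_NAME" naming convention.

#include <iostream>
#include "isinfo.h"   // IS client classes, as in the examples of this note

int main(int, char**)
{
    IPCPartition p("1");                      // same partition as the server example
    ISInfoDictionary dict(p);

    ISInfoInt runNumber;
    runNumber = 42;                           // overloaded assignment operator

    // Information names have the form "SERVER_NAME.OBJECT_NAME";
    // "runcontrol" is the server created by the server example above.
    if (dict.insert("runcontrol.runNumber", runNumber) != IS_SUCCESS)
        std::cerr << "insert failed" << std::endl;

    ISInfoInt readBack;
    if (dict.findValue("runcontrol.runNumber", readBack) == IS_SUCCESS)
        std::cout << "runNumber = " << readBack.getValue() << std::endl;

    return 0;
}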
The classes not shown in the diagram have no relations with the others and are also explained in the next chapter.
Figure 3: IS library classes

3 Class Reference
The reference is presented here as an alphabetical listing of classes with their public member functions.

3.1 The differences from the first version
• The developer does not need to provide an information type (see "class ISInfo").
• The find method of the ISInfoDictionary class now returns the information class ID, which is a unique unsigned integer number (see "class ISInfoDictionary").
• All type methods now return the information class ID instead of an information class name string (see "class ISCallbackInfo", "class ISInfo", "class ISInfoIterator").
• The ISCallbackInfo class now has a method called reason() which returns the reason for the callback invocation (see "class ISCallbackInfo").
• There is a new class ISInfoAny (see "class ISInfoAny").
• The current version is implemented on top of the IPC package. The classes ISAlarm and ISServer are now implemented in the IPC package and are called IPCAlarm and IPCServer respectively [1].
• The possibility to use vectors of base C++ types in the information definition has been added.
• The definition of the ISInfoAny class has changed.

3.2 The differences from the previous version
• The possibility to add a parameter to the subscription callback has been added (see "class ISInfoReceiver").

class ISCallbackInfo
This class provides a way to obtain information via the subscription callback function. See the subscribe method of the ISInfoReceiver class.
This class has no public constructors and cannot be created by a user. A pointer to an instance of this class is passed as an argument to a subscription callback.

Public member functions

ISType type();
Returns the type number of the information whose value was changed.

const char * name();
Returns the name of the information whose value was changed.

void * parameter();
Returns the value of the parameter that was specified by the ISInfoReceiver::subscribe method.

long value(ISInfo & info);
Puts the new value of the information into the info object. This object must have the same type as the object whose value was changed. Returns IS_INFO_INCOMPATIBLE_TYPE if the type of info is not the same as the type of the information whose value was changed. Otherwise returns IS_SUCCESS.

ISReason reason();
Returns the reason for the notification. ISReason is a typedef for the enum is_T_Reason:
enum is_T_Reason { ISReasonCreated, ISReasonUpdated, ISReasonDeleted }

Example

void callback(ISCallbackInfo * isc)
{
    // callback function
    ISInfoInt i;
    if ( isc->type() == i.type() )
    {
        isc->value(i);
        int ii = i;
        cout << "Object " << isc->name() << " has value " << ii << endl;
    }
    else
        cout << "Object " << isc->name() << " has unknown type " << isc->type() << endl;
}

int main(int ac, char **av)
{
    ISInfoReceiver ir;
    ir.subscribe("runcontrol.rc1.FSMState", callback);
    ir.subscribe("runcontrol.rc1.RCStatus", callback);
    ir.run();
    return 0;
}

class ISFactory
This class is responsible for a server implementation. It implements the dictionary of information objects that can be accessed either from the same process or from other processes via the ISInfoDictionary, ISInfoReceiver and ISInfoIterator classes. It inherits from the IPCServer class.

Public constructors

ISFactory( const char * sid, IPCPartition & p );
Constructs the factory object with the sid server name in the p partition.

ISFactory( const char * sid );
Constructs the factory object with the sid server name in the default partition.

Public member functions

ostream& getLogStream();
Returns the current log stream.
By default the log stream is suppressed and this function returns a stub stream; output to this stream has no effect.

void setLogStream(ostream * strm = (ostream *)0 );
Sets the current log stream to the strm stream. If strm is NULL this function returns the log stream to its initial state, i.e. suppresses the output.

void run();
Inherited from the IPCServer class. Animates a server. This method implements an interruptible loop. It never returns, but it is possible to stop it by calling the stop method of this class.

void loop();
Inherited from the IPCServer class. Animates a server. This method implements an endless loop. It never returns. Calling the stop method has no effect.

void stop();
Inherited from the IPCServer class. Stops a server which has been animated by the run method.

IPCAlarm * setAlarm( double period, RWBoolean (*per_alarm)(void *), void * rock);
Inherited from the IPCServer class. Creates an alarm which will call the per_alarm function with the rock argument every period seconds. The per_alarm function should return TRUE if it wants to be called again after period seconds, and FALSE to cancel the alarm. Returns the pointer to the IPCAlarm object.

long unsetAlarm( IPCAlarm * alarm);
Inherited from the IPCServer class. Cancels the alarm which has been created by the setAlarm method.

const char * getID();
Inherited from the IPCServer class. Returns the server name.

IPCPartition& partition();
Inherited from the IPCServer class. Returns the partition where the server was created.

RWBoolean doSoon(RWBoolean (*do_soon)(void *), void * rock);
Inherited from the IPCServer class. The do_soon function will be called with the rock argument as soon as possible after the server is animated by calling the run or loop methods. This allows user initialization activity once the server is running. Returns FALSE if the action cannot be performed, otherwise returns TRUE.

Related global operators
These are provided as the basis of the backup facility for the IS.

RWBoolean operator<<( ofstream& ofstrm, ISFactory& isf);
Stores all the information to the backup stream. ofstrm is an object of class ofstream constructed for the backup file.

RWBoolean operator>>( ifstream& ifstrm, ISFactory& isf);
Restores all the information from the backup stream. ifstrm is an object of class ifstream constructed for the backup file.

Example

char * backup_to;

RWBoolean per_alarm( void * rock )
{
    // alarm function
    ISFactory * f = (ISFactory*)rock;
    cerr << " Saving...";
    // here we provide the system backup
    ofstream fout(backup_to, ios::out | ios::trunc);
    if ( fout.good() )
        fout << *f;                        // backup to the file stream
    else
        cerr << "Bad backup file - " << backup_to << endl;
    cerr << "Done" << endl;
    return TRUE;
}

int main(int ac, char **av)
{
    IPCPartition p(av[1]);                 // 1st argument contains the partition name
    ISFactory f(av[2], p);                 // 2nd argument contains the server name
    f.setAlarm(60, per_alarm, &f);
    // the per_alarm function will be called every minute to make a system backup
    if ( av[3] )
    {
        // 3rd argument contains the file name to restore from
        ifstream fin(av[3]);
        if ( fin.good() )                  // if the file is valid
            fin >> f;                      // we restore the information
        else
            cerr << "Bad backup file - " << av[3] << endl;
        backup_to = av[3];                 // we assume to backup to the same file
    }
    f.run();                               // server animation - this function must be called
    return 0;
}

class ISInfo
Class ISInfo is an abstract base class for distributed information objects. This class contains virtual functions for storing and retrieving information objects.
Objects that inherit this base class must redefine these functions.

Protected constructor
The constructors are protected and can therefore be used only by derived class objects.

ISInfo()
Creates an information object.

Virtual functions

virtual void publishGuts(ISostream& strm);
Writes an information object's state to an output stream.

virtual void refreshGuts(ISistream& strm);
Reads an information object's state from an input stream.

Public member functions

ISType type();
Returns the type of the information. ISType is a typedef for an unsigned C++ type. The type number is unique for different types of object and is completely defined by the object structure, i.e. both by the types of the object attributes and by the order in which the attributes are defined.

RWTime& time();
Returns the time of the last update of the information. For just-created information it returns the creation time. For the definition of RWTime see [3].

Example

#include "isinfo.h"

class TestInfo: public ISInfo
{
public:
    TestInfo()  { s = new char[1024]; }
    ~TestInfo() { delete [] s; }

    void publishGuts(ISostream& ostr) { ostr << i << f << d << l << c << s; }
    void refreshGuts(ISistream& istr) { istr >> i >> f >> d >> l >> c >> s; }

    int i;
    float f;
    double d;
    long l;
    char c;
    char *s;
};

class ISInfoAny
This class should be used in cases where the structure of the information object you want to access is unknown. A reference to an object of class ISInfoAny can be used in any IS function instead of a reference to a specific object derived from class ISInfo. This class allows read-only access to all object attributes. The access to the attributes is organized in a stream manner: each call to the value method advances the position of the stream to the value of the next attribute. In order to access all the attributes again, the reset method must be called.

Public constructor

ISInfoAny()
Creates an information object.

Public member operator

enum ISDomain operator()();
Returns the domain of the current attribute of the information object. The enumeration ISDomain is defined as:
enum ISDomain
{
    IS_DOMAIN_ERROR = 0,
    IS_CHAR = 51, IS_UCHAR = 52, IS_SHORT = 53, IS_USHORT = 54,
    IS_INT = 55, IS_UINT = 56, IS_LONG = 57, IS_ULONG = 58,
    IS_FLOAT = 59, IS_DOUBLE = 60, IS_STRING = 61, IS_USTRING = 62,
    IS_CHAR_ARRAY = 63, IS_UCHAR_ARRAY = 64, IS_INT_ARRAY = 65,
    IS_UINT_ARRAY = 66, IS_LONG_ARRAY = 67, IS_ULONG_ARRAY = 68,
    IS_SHORT_ARRAY = 69, IS_USHORT_ARRAY = 70,
    IS_DOUBLE_ARRAY = 71, IS_FLOAT_ARRAY = 72
};
Returns IS_DOMAIN_ERROR if there are no more attributes available.

Public member functions

long entries();
Returns the number of attributes in the current information object.

void reset();
Resets the object to the state it had immediately after construction.

RWBoolean value( int& r);
Get the next int and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( short& r);
Get the next short and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( long& r);
Get the next long and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( char& r);
Get the next char and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( char *& r);
Get the next character string and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the string; the user is responsible for freeing this memory.

RWBoolean value( float& r);
Get the next float and store it in r.
Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( double& r);
Get the next double and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( unsigned int& r);
Get the next unsigned int and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( unsigned short& r);
Get the next unsigned short and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( unsigned long& r);
Get the next unsigned long and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( unsigned char& r);
Get the next unsigned char and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE.

RWBoolean value( unsigned char *& r);
Get the next unsigned character string and store it in r. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the string; the user is responsible for freeing this memory.

RWBoolean value( char*& r, size_t & N);
Get a vector of chars and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( short*& r, size_t & N);
Get a vector of shorts and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( int*& r, size_t & N);
Get a vector of ints and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( long*& r, size_t & N);
Get a vector of longs and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( float*& r, size_t & N);
Get a vector of floats and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( double*& r, size_t & N);
Get a vector of doubles and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( unsigned char*& r, size_t & N);
Get a vector of unsigned chars and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( unsigned short*& r, size_t & N);
Get a vector of unsigned shorts and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE.
This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( unsigned int*& r, size_t & N);
Get a vector of unsigned ints and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

RWBoolean value( unsigned long*& r, size_t & N);
Get a vector of unsigned longs and store them in the array beginning at r. The size of the array is returned in N. Returns TRUE if the operation is successful, otherwise returns FALSE. This function allocates the necessary amount of memory to store the array; the user is responsible for freeing this memory.

Example

#include "isinfo.h"

#define PRINT_VALUE(isa,Domain,dname) \
{ \
    Domain value; \
    isa.value(value); \
    cout << " { " << dname << " : " << value << " }" << endl; \
    break; \
}

#define PRINT_ARRAY_VALUE(isa,Domain,dname) \
{ \
    Domain * value; \
    size_t k; \
    isa.value(value,k); \
    cout << " array of " << dname << " : { "; \
    for ( i = 0; i < k-1; i++ ) \
        cout << value[i] << ", "; \
    cout << value[i] << " }" << endl; \
    break; \
}

void print_info(ISInfoAny& isg)
{
    int n;
    int N;
    size_t i;
    cout << " It has " << (N = isg.entries()) << " field(s):" << endl;
    for ( n = 0; n < N; n++ )
    {
        switch (isg())
        {
        case IS_INT:          PRINT_VALUE(isg,int,"integer")
        case IS_UINT:         PRINT_VALUE(isg,unsigned int,"unsigned integer")
        case IS_SHORT:        PRINT_VALUE(isg,short,"short")
        case IS_USHORT:       PRINT_VALUE(isg,unsigned short,"unsigned short")
        case IS_LONG:         PRINT_VALUE(isg,long,"long")
        case IS_ULONG:        PRINT_VALUE(isg,unsigned long,"unsigned long")
        case IS_FLOAT:        PRINT_VALUE(isg,float,"float")
        case IS_DOUBLE:       PRINT_VALUE(isg,double,"double")
        case IS_CHAR:         PRINT_VALUE(isg,char,"character")
        case IS_UCHAR:        PRINT_VALUE(isg,unsigned char,"unsigned character")
        case IS_STRING:       PRINT_VALUE(isg,char *,"string")
        case IS_USTRING:      PRINT_VALUE(isg,unsigned char *,"unsigned string")
        case IS_CHAR_ARRAY:   PRINT_ARRAY_VALUE(isg,char,"character")
        case IS_UCHAR_ARRAY:  PRINT_ARRAY_VALUE(isg,unsigned char,"unsigned character")
        case IS_INT_ARRAY:    PRINT_ARRAY_VALUE(isg,int,"integer")
        case IS_UINT_ARRAY:   PRINT_ARRAY_VALUE(isg,unsigned int,"unsigned integer")
        case IS_SHORT_ARRAY:  PRINT_ARRAY_VALUE(isg,short,"short")
        case IS_USHORT_ARRAY: PRINT_ARRAY_VALUE(isg,unsigned short,"unsigned short")
        case IS_LONG_ARRAY:   PRINT_ARRAY_VALUE(isg,long,"long")
        case IS_ULONG_ARRAY:  PRINT_ARRAY_VALUE(isg,unsigned long,"unsigned long")
        case IS_FLOAT_ARRAY:  PRINT_ARRAY_VALUE(isg,float,"float")
        case IS_DOUBLE_ARRAY: PRINT_ARRAY_VALUE(isg,double,"double")
        default: cout << " { error - **Unknown type** } " << endl;
        }
    }
}

int main(int ac, char **av)
{
    ISInfoIterator ii(av[1]);
    ISInfoAny isg;
    while ( ii() )
    {
        ii.value(isg);
        print_info(isg);
    }
    return 0;
}

There is a set of classes derived from ISInfo, each of which represents one of the base C++ types. All these classes have the same set of operators and methods. They are:
• ISInfoInt
• ISInfoLong
• ISInfoChar
• ISInfoUnsignedChar
• ISInfoUnsignedInt
• ISInfoUnsignedLong
• ISInfoFloat
• ISInfoDouble
• ISInfoString

Public member operators

ISInfoT& operator=(T data)
Assignment operator. Copies the value of data to self. Returns a reference to self.

operator T()
Type conversion operator.
Provides access to the information's data as a value of type T.

Public member functions

void setValue(T data);
Sets the information value to data.

T& getValue();
Returns the value of the information.

Example

#include <stdio.h>
#include "isinfo.h"

int main(int ac, char **av)
{
    IPCPartition p("1");                   // partition in which the "runcontrol" server runs
    ISInfoDictionary id(p);
    ISInfoInt c;
    char name[64];

    for ( int i = 0; i < 10; i++ )
    {
        sprintf(name, "runcontrol.rc%d.fsmstate", i);
        c = i;                             // use the overloaded = operator
        id.insert(name, c);
    }
    return 0;
}

class ISInfoDictionary
The ISInfoDictionary class allows access to the dictionary of names and values of information objects implemented by the ISFactory class. It allows one to create or delete information and to get or set the value of information.

Public constructors

ISInfoDictionary();
Creates a dictionary in the default partition.

ISInfoDictionary( IPCPartition & p );
Creates a dictionary in the p partition.

Public member functions

long insert(const char * name, ISInfo & info);
If an object associated with name is already in the dictionary, simply returns the error status IS_INFO_EXIST. Otherwise, inserts the info object into the dictionary, associates it with the name name and returns IS_SUCCESS.

long remove(const char * name);
Removes the object associated with name from the dictionary. Returns IS_SUCCESS if the object exists, or IS_INFO_NOT_EXIST otherwise.

ISType find(const char * name);
Returns the type id of the object with the name name if the object exists. Otherwise, returns FALSE.

long findValue(const char * name, ISInfo & info);
Reads the value of the object associated with name into info and returns IS_SUCCESS if the object exists. Otherwise, returns IS_INFO_NOT_EXIST. The info object must have the same type as the information associated with name; otherwise this function returns IS_INFO_INCOMPATIBLE_TYPE.

long update(const char * name, ISInfo & info);
Updates the value of the object associated with name from info and returns IS_SUCCESS if the object exists. Otherwise, returns IS_INFO_NOT_EXIST. The info object must have the same type as the information associated with name; otherwise this function returns IS_INFO_INCOMPATIBLE_TYPE.

All methods will fail and return IS_INFO_ACCESS_ERROR if the corresponding server is not reachable; otherwise they return IS_SUCCESS or one of the error codes described above.

Example

#include <string.h>
#include "isinfo.h"

int main()
{
    IPCPartition p("1");                   // partition in which the "runcontrol" server runs
    ISInfoDictionary id(p);

    TestInfo c;                            // user-defined class from the class ISInfo example
    c.i = 1;
    c.f = 1.3;
    c.d = 13.13;
    c.l = 0xffff;
    c.c = 'b';
    strcpy(c.s, "new text");               // s was allocated by the TestInfo constructor

    ISType type;
    if (( type = id.find("runcontrol.rc1.testinfo")) != FALSE)
    {
        // the object already exists in the dictionary
        cout << " Type of object is " << type << endl;
        id.findValue("runcontrol.rc1.testinfo", c);
        cout << c.i << " " << c.f << " " << c.d << " "
             << c.l << " " << c.c << " " << c.s << endl;
    }
    else
    {
        id.insert("runcontrol.rc1.testinfo", c);    // insert the new object
    }
    return 0;
}
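As a closing illustration, the dictionary example above can be combined with the subscription mechanism of the ISInfoReceiver and ISCallbackInfo classes described earlier in this note. The following is a sketch only: it reuses the TestInfo class from the class ISInfo example and the object name from the preceding example, and error handling is omitted.

#include <iostream>
#include "isinfo.h"   // assumes the TestInfo class from the class ISInfo example is also visible

void callback(ISCallbackInfo* isc)
{
    TestInfo t;
    if (isc->type() != t.type())
        return;                               // not the type we are interested in
    isc->value(t);                            // fill t with the new value
    if (isc->reason() == ISReasonCreated)
        std::cout << isc->name() << " created, i = " << t.i << std::endl;
    else if (isc->reason() == ISReasonUpdated)
        std::cout << isc->name() << " updated, i = " << t.i << std::endl;
}

int main(int, char**)
{
    ISInfoReceiver ir;                                        // default partition
    ir.subscribe("runcontrol.rc1.testinfo", callback);        // same object as above
    ir.run();                                                 // interruptible notification loop
    return 0;
}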