On the Persistence in Promoting a Community with a Shared Future for Mankind (English version; originally issued as a Chinese-English bilingual text)

In the era of globalization, the concept of a community with a shared future for mankind has gained increasing attention and support. It is a vision that emphasizes the interconnectedness and interdependence of all nations, highlighting the need for joint efforts to address global challenges. This essay discusses the importance of persistently promoting this concept and the practical steps required to achieve it.

Firstly, the promotion of a community with a shared future is essential for fostering peace and stability worldwide. In a world where conflicts and disputes are prevalent, this concept serves as a guiding principle for countries to resolve differences through dialogue and cooperation. By emphasizing the shared interests of humanity, it encourages nations to work together in addressing issues such as terrorism, climate change, and poverty, thereby creating a more peaceful and harmonious world.

Secondly, the concept of a shared future is crucial for promoting inclusive and sustainable development. Globalization has led to unprecedented economic integration, yet the benefits have not been evenly distributed. To achieve balanced development, it is necessary to adopt a holistic approach that takes into account the well-being of all people and preserves the environment for future generations. The community with a shared future for mankind advocates for the implementation of policies that promote social justice, economic equity, and environmental sustainability.

Thirdly, the persistent promotion of this concept is vital for strengthening global governance. The complex and interconnected nature of contemporary challenges requires a more effective and representative system of international cooperation. By advocating for a community with a shared future, countries can work together to reform and improve global governance mechanisms, making them more inclusive, transparent, and efficient.

To translate this concept into reality, the following steps should be taken:

1. Enhance communication and dialogue among nations. Building mutual trust and understanding is the foundation for collaborative efforts. Regular exchanges of views and experiences can help countries identify common interests and develop strategies for joint action.

2. Strengthen international cooperation in various fields. Issues such as climate change, public health, and economic development cannot be addressed by a single country alone. Multilateral cooperation should be enhanced to pool resources and expertise for the benefit of all.

3. Promote the principle of shared responsibility. Each country should contribute its fair share to addressing global challenges, taking into account its capabilities and level of development. This principle ensures that the burden of responsibility is distributed equitably.

4. Encourage the participation of all stakeholders. The promotion of a community with a shared future should involve not only governments but also civil society, businesses, and academia. The diverse perspectives and expertise of these stakeholders can contribute to more innovative and effective solutions.

5. Foster a culture of peace and mutual understanding. Education and cultural exchanges play a crucial role in shaping public opinion and nurturing a sense of global citizenship. By promoting understanding and appreciation of different cultures, societies can build bridges of cooperation and tolerance.

In conclusion, the persistence in promoting a community with a shared future for mankind is an essential endeavor for the well-being and survival of our planet. Through enhanced communication, international cooperation, and a shared commitment to global governance, we can work together to create a more peaceful, just, and sustainable world for all.
Foreign literature translation, original text and translated text (key passages selected for translation): Public Policy and Human Capital. Source: Research Policy, Volume 48, Issue 9, November 2019, pp. 1-19. Length of the Chinese translation: over 5,300 characters. English original:

Driving innovation: Public policy and human capital
Helena Lenihan, Helen McGuirk, Kevin Murphy

Abstract
Human capital, the set of skills, knowledge, capabilities and attributes embodied in people, is crucial to firms' capacity to absorb and organize knowledge and to innovate. Research on human capital has traditionally focused on education and training. A concern with the motivationally-relevant elements of human capital, such as employees' job satisfaction, organizational commitment, and willingness to change in the workplace (all of which have been shown to drive innovation), has often been overlooked in economic research and by public policy interventions to date. The paper addresses this gap in two ways: first, by studying firms' human resource systems that can enhance these elements of human capital, and second, by using the results of this research as a springboard for a public policy program targeted at elements of human capital that have been ignored by traditional education and training interventions. Using a sample of 1070 employee-managers in Ireland, we apply a series of probit regressions to understand how different human resource systems influence the probability of employee-managers reporting the motivationally-relevant elements of human capital. The research: (1) finds that respondents in organizations with certain human resource systems are more likely to report motivationally-relevant elements of human capital; specifically, employee-managers in organizations with proactive work practices and that consult with their employee-managers have an increased predicted probability of reporting that they are satisfied with their job, willing to change, and committed to the organization; (2) highlights the need to consider the role of policy interventions to support the motivationally-relevant elements of human capital; (3) proposes a new policy program offer to support the motivationally-relevant elements of human capital in order to increase firms' innovation activity.

Keywords: Innovation, Human capital, Human resource systems, Innovation policy, Policy program

Introduction
Innovation is a well-recognized determinant of growth, and it is a challenge for both academics and practitioners to understand why and how firms innovate (Montalvo et al., 2006). Human capital, the set of skills, knowledge, capabilities, and other attributes embodied in people that can be translated into productivity (Abel and Gabe, 2011; Fulmer and Ployhart, 2014), is crucial to firms' capacity to absorb and organize knowledge and to innovate (Protogerou et al., 2017; Teixeira and Tavares-Lehmann, 2014; Subramaniam and Youndt, 2005).

Traditionally, economists have defined human capital largely in terms of knowledge and intellectual capital. It is now widely recognized that this focus on knowledge does not fully capture the domain of human capital (Arvanitis and Stucki, 2012; Bell, 2009).
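The abstract above describes estimating a series of probit regressions in which binary indicators of the "will do" elements (for example, reporting job satisfaction) are modeled as a function of firms' human resource systems. The following is a minimal, hypothetical sketch of that kind of estimation in Python using statsmodels; the variable names (proactive_practices, consultation, bonus_scheme, firm_size) and the simulated data are illustrative assumptions, not the authors' actual dataset or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative stand-in for a survey of employee-managers:
# a binary outcome (reports job satisfaction) and binary/continuous predictors.
rng = np.random.default_rng(0)
n = 1070
df = pd.DataFrame({
    "proactive_practices": rng.integers(0, 2, n),  # firm uses proactive work practices
    "consultation": rng.integers(0, 2, n),         # firm consults its employee-managers
    "bonus_scheme": rng.integers(0, 2, n),         # bonus as part of pay and conditions
    "firm_size": rng.integers(10, 250, n),         # control variable
})
# Simulate a latent propensity so the example runs end to end.
latent = 0.4 * df["proactive_practices"] + 0.3 * df["consultation"] + rng.normal(size=n)
df["job_satisfaction"] = (latent > 0).astype(int)

# Probit model: Pr(job_satisfaction = 1 | human resource systems, controls).
X = sm.add_constant(df[["proactive_practices", "consultation", "bonus_scheme", "firm_size"]])
result = sm.Probit(df["job_satisfaction"], X).fit(disp=False)

print(result.summary())
# Average marginal effects: change in predicted probability per unit change in a regressor.
print(result.get_margeff().summary())
```

Reporting average marginal effects rather than raw probit coefficients makes the "predicted probability" language of the abstract concrete: each effect is the average change in the probability of reporting the attitude associated with a given human resource system.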
In the last 20 years, the human capital concept has evolved significantly, and current conceptions of human capital include a wide range of human attributes that are relevant to job performance and productivity, ranging from personality traits, work attitudes and values (Ployhart and Moliterno, 2011) to characteristics such as creativity, wellbeing, self-efficacy and resilience (Grimaldi et al., 2012, 2013; Madrid et al., 2017; Newman et al., 2014; OECD, 2007; Tan, 2014).

The expansion of the domain of human attributes that define human capital can be usefully understood with a taxonomy highlighting the distinction between can do and will do attributes (Ployhart and Moliterno, 2011; see also Chiaburu and Lindsay, 2008; Gibbons and Weingart, 2001; Zhao and Chadwick, 2014). According to this taxonomy, some attributes contribute to employees' ability to execute essential job tasks. Classic exemplars of can do attributes include cognitive ability, general knowledge, job knowledge and problem-solving skills. Other human attributes influence willingness to exert effort, to contribute ideas and to assist fellow colleagues. Classic exemplars of will do attributes include job-related personality traits, work attitudes and beliefs.

This can do/will do taxonomy is highly consistent with almost a century of research on the determinants of human performance, research that recognizes both ability and motivation as independent determinants of job performance; for the most recent meta-analytic review of the roles of motivation and ability, see Van Iddekinge et al. (2018). There is considerable evidence that innovation and the success of organizations require behaviors that go beyond the usual role requirements of jobs and depend substantially on employees' motivation and willingness to engage in these behaviors (Chiu, 2018; McGuirk et al., 2015; Shalley, 1995; Menold et al., 2014). In particular, employees' attitudes regarding both their jobs and their organizations appear to be important determinants of their willingness to engage in the work behaviors needed to support innovation (Allen et al., 2011; Bateman and Organ, 1983; Cetin et al., 2015; Moorman, 1993; Zhao and Chadwick, 2014; Coad et al., 2014; Kato et al., 2015). These perceptions and attitudes about jobs and organizations comprise a critically important component of human capital that can be brought to bear in fostering innovation in organizations.

Knowledge and job-related skills represent can do attributes; tangible proxies for these attributes (e.g., level of education, amount of job training) have been the traditional focus of public policy aimed at enhancing human capital (Becker, 1964; Cohen and Soto, 2007; Marshall et al., 1993; Nistor, 2007). Despite growing evidence regarding the importance of will do human capital attributes in business, there has been an almost complete absence of public policy initiatives to address these aspects of human capital. This is in large part because the targets for public policy are less obvious when attempting to build will do attributes. Policy interventions addressing the will do aspects of human capital are a prime focus of the current paper.

In this study, we aim to address the following key questions: (1) What human resource systems, policies, and practices of firms are linked to motivationally-relevant (will do) human capital attributes, such as employee-managers' job satisfaction, commitment to their organization, and willingness to change?
(2) What are the implications for public policy in terms of policy instruments that can effectively promote the development and support of these human capital attributes? As we describe below, both of these represent distinct contributions to the empirical and policy-oriented literatures. This is achieved by demonstrating the empirical links between several organizational policies and practices and the will do elements of human capital that are relevant to innovation. We then use this information as a springboard for a public policy program intervention designed to help organizations assess and tailor their policies and practices in ways that can facilitate the growth of human capital to support the firm's innovative capacity.

We focus on employee-managers, a cohort used in several innovation studies (e.g., Leiva et al., 2011) and seen as key to innovation (Fitjar et al., 2013). We argue for the importance of creating a firm-level culture that hones human resource systems, thus promoting innovation. In this context, managers are key. Following Becker's (1964) and Oketch's (2006) studies of the determinants of human capital as measured by education, we seek to examine the determinants of the motivationally-relevant elements of human capital. Understanding the factors that underpin these human capital attributes is significant for innovation theory development and is of practical value to policy makers and firms seeking to increase innovation activity.

The remainder of this paper is organized as follows: In section 2, we set out the theoretical context of the research. In section 3, we explain the data and methodology. In section 4, we present the empirical results of the regression analyses. In section 5, we discuss policy supports and implications for policy regarding the development of the motivationally-relevant elements of human capital. We propose a new policy program offer, with the ultimate aim of driving firm-level innovation. Section 6 concludes and explores both the implications and the limitations of our research.

Theoretical context of human capital and human resource systems
Interest is growing in measuring human capital beyond education and training (e.g., Perdreau et al., 2015; Arvanitis and Stucki, 2012). However, there are challenges to measuring human capital's motivationally-relevant elements, such as work attitudes or motivation (Coronado et al., 2008); measuring these elements is an attempt to make visible what is invisible (Kramer, 2008). These challenges may explain why, in economic research and public policy, researchers frequently overlook these elements of human capital.

Our analysis focuses on three elements of human capital that appear to be the most directly relevant to understanding employee-managers' willingness and motivation to contribute to innovation in work organizations. These elements are employee-managers' job satisfaction, commitment to their organization, and willingness to change in the workplace.

How motivationally-relevant elements of human capital provide a foundation for innovation
The first element of human capital we focus on, job satisfaction, is defined as individuals' wellbeing or level of contentment in relation to their job (Judge and Kammeyer-Mueller, 2012). Job satisfaction supports a number of firm-level functions, including formulation of knowledge and problem-solving strategies (Judge and Kammeyer-Mueller, 2012; Whitman et al., 2010).
Individuals who are highly satisfied with their jobs are more likely to engage in behaviors necessary for successful innovation; for example, they are motivated to exert extra effort, take risks, learn new skills, and contribute unique ideas to their organization (Bowling, 2010; Organ and Ryan, 1995; Weikamp and Göritz, 2016). In contrast, individuals who are less satisfied by their jobs (e.g., because they find their job stressful) are less likely to engage in behaviors necessary for successful innovation (Eatough et al., 2011; LePine et al., 2002).

The second element of human capital we focus on is employee-managers' identification with and commitment to their organization (Mowday et al., 1981; Williams and Anderson, 1991). A wide range of work attitudes can contribute to firms' performance (Melesse, 2016). Constructs such as organizational identification and commitment are particularly relevant to understanding innovation because innovative behavior is often risky; these risks are more readily undertaken by individuals who both trust and care for the success of their organization (Dalal, 2005; George and Bettenhausen, 1990; LePine et al., 2002; O'Reilly and Chatman, 1986; Organ, 1988; Organ and Ryan, 1995).

Finally, the third element of human capital we focus on is willingness to change. A number of studies examine the role of employees' willingness to change (e.g., to change the level of technology, skills and responsibility required to improve how work is done) in determining organizational success (Pulakos et al., 2000, 2002; van den Berg and van der Velde, 2006) and employees' orientation toward innovation (Montalvo et al., 2006). Willingness to change is found to influence the adoption or rejection of innovations (Agarwal and Prasad, 1998).

Human resource systems connected to the motivationally-relevant elements of human capital
Although organizations cannot directly control the perceptions and attitudes of workers (Colarelli and Arvey, 2015), they can decisively influence these perceptions and attitudes by how they interact with their workforce. In particular, there is clear evidence (summarized below) that well-managed human resource systems have a strong effect on the probability of employees being satisfied, committed, and willing to make the changes, take the risks, and exert the extra effort that innovation requires.

Human resource systems in organizations deal with recruiting, hiring, training, evaluating, rewarding, and sometimes sanctioning workers (e.g., through redundancies, disciplinary processes, and terminations). These systems provide important information to employees, ranging from orientation and organizational socialization to performance feedback (Cascio, 2012). This information, together with other outcomes of these human resource processes (e.g., rewards), influences the perceptions and attitudes of employees.

A substantial body of research links the quality of human resource systems with employee attitudes, perceptions, and beliefs. For example, there is evidence that human resource systems that provide timely performance feedback enhance a) employees' success at adapting to changing conditions and b) their willingness to adapt and change their workplace behavior to create new products and processes (Pulakos et al., 2000, 2002). Piening et al. (2013) note that when organizations provide incentives to employees (e.g., training, opportunities for salary increase and advancement), employees are likely to respond with favourable perceptions and behaviors.
If implemented effectively, well-constructed human resource programs and practices are likely to cause employees to view themselves as operating in a social exchange relationship characterized by mutual trust, respect, and support (Evans and Davis, 2005; Kehoe and Wright, 2013). In turn, this positive relationship is likely to motivate employees to engage in a range of behaviors that encourage and support innovation.

Human resource practices that provide information and support to employees appear to contribute especially to the encouragement of innovation. Cohen and Levinthal (1990) refer to the importance of absorptive capacity, which includes the contributions made by individuals and also an organization's capacity to exploit these contributions. Such high-involvement practice is of growing interest in the organizational performance and human resource management literatures (Böckerman et al., 2012). There is evidence linking aspects of high-quality human resource systems to specific work attitudes, including job satisfaction (Gould-Williams, 2003), organizational commitment (Allen et al., 2003; Meyer and Smith, 2000; Whitener, 2001), and willingness to change (Pulakos et al., 2000, 2002).

Conclusions
In this paper, we examined empirically-supported public policy interventions that can help firms develop and enhance motivationally-relevant (will do) elements of human capital, elements that are required to support firm-level innovation. Public policy targeted at increasing human capital traditionally concerns itself with the can do attributes of human capital (usually knowledge and skills), resulting in interventions that involve education and training. The development of will do attributes, such as attitudes and perceptions influencing employees' willingness to innovate, requires different public policy interventions.

Our analysis, based on information retrieved from the Irish National Centre for Partnership and Performance Workplace Survey (NCPP, 2009), reports that firms providing human resource systems such as greater use of proactive work practices and greater levels of consultation with employee-managers are associated with an increased probability of job satisfaction, organizational commitment, and willingness to change among such managers. We also report that bonus schemes (as part of pay and conditions) are linked to motivationally-relevant human capital, as measured by job satisfaction and willingness to change. It would be remiss, however, not to acknowledge that some of the human resource systems variables reveal mixed results. For example, we report that greater frequency of information, job share and flexitime (part of work arrangements) have no significant relationship to the majority of the will do elements of human capital. In some cases, human resource systems variables such as the receipt of share options as part of pay and conditions have a negative impact.

By boundary-spanning the economics, innovation, and organizational science literatures, our research provides valuable contributions to theory, practice, and policy. From a theoretical perspective, our research makes two key contributions. First, our research extends the understanding of human capital and its supports, with the ultimate objective being that of driving firm-level innovation. Our findings concur with Cowling (2016) on the importance of building firm-level capabilities in support of innovation activity.
Our findings help to bring some specificity to this literature by highlighting specific human resource management policies and practices that can be empirically linked to the motivational components of innovation.

Second, our research highlights the need to consider the role of public investment in supporting the will do, motivationally-relevant elements of human capital as a driver of firm-level innovation. In particular, we outline a program for developing and implementing interventions that give organizations the tools and knowledge needed to support their employees' motivation to innovate. We affirm Bell's (2009, p. 50) call for greater focus on "broad magnitudes and trends of the more important non-R&D components of innovative activity", and policy discussion "about the kind of innovation capability that is created and accumulated".

From the perspective of practice at the level of the organization, our research suggests that firms' innovation activity may benefit from human resource systems such as proactive work practices, consultation and bonus schemes (part of pay and conditions). These systems motivate employees and support positive work attitudes such as job satisfaction, organizational commitment and willingness to change in the workplace. Interestingly, one of the human resource systems we measure, frequency of information, does not appear to have much impact on the probability of will do traits. This may suggest that among potentially useful human resource systems, some appear to be more closely linked to will do traits than others.

From the perspective of policy implications, our research suggests that public policy can support the development of elements of human capital that have heretofore been largely ignored in debates about how to support innovation in organizations. Literature examining the role of public policy in human capital development has focused almost exclusively on can do elements of human capital, specifically knowledge and skills (Becker, 1964; de Rassenfosse et al., 2011). Our results suggest that public policies can aid firms as they identify, implement, and monitor particular human resource management policies. These are the policies that are empirically shown to enhance and develop both the attitudes and beliefs necessary to support firm-level innovation.

A total overhaul of current programs is unnecessary. Instead, policy should recognize the value of the will do elements of human capital and the importance of human resource systems in their development. Market and systemic failures may also justify public support for the will do elements of human capital.

Public policy interventions can help by making the necessary resources, expertise, and knowledge available to organizations, so that the organizations can focus on the will do elements of human capital. A role for government exists in terms of minimizing the risks (whether real or perceived) associated with firms investing in the will do elements of human capital in their organizations. The ultimate goal is to drive firm-level innovation.

We propose a new policy program offer (InnovativePeople4Growth) to support the will do, motivationally-relevant elements of human capital.
Dynamic equivalence (DE)

Equivalence is the central problem in translation practice. Nida distinguishes two types of equivalence, formal and dynamic. Formal equivalence "focuses attention on the message itself, in both form and content" (Nida, 1964: 159), and aims to allow readers to understand as much of the source language (SL) context as possible. Dynamic equivalence places more emphasis on the effect the message has on the receiver, aiming to "relate the receptor to modes of behavior relevant within the context of his own culture" (ibid). Later, recognizing that there is no absolute symmetry between languages (Snell-Hornby, 2002: 13-22; Wilss, 2002: 134-157), Nida came to prefer the term "functional equivalence", in the sense that "equivalence can be understood in terms of proximity, i.e. on the basis of degrees of closeness to functional identity" (Nida, 2003: 87). This view of functional equivalence implies different degrees of adequacy, from minimal to maximal effectiveness, on the basis of both cognitive and experiential factors.

Dynamic equivalence is a translation principle described by the Bible translation statesman Eugene Nida. With this principle a translator seeks to translate the meaning of the original in such a way that the target-language wording will trigger the same impact on its hearers that the original wording had upon its hearers. Contrary to what some have mistakenly concluded, Nida never pitted "meaning" against "impact" (or reader "response", as he called it). Nida, as do all informed translators, understood that meaning is a totality (a "bundle") which includes the meanings of parts of words (morphemes), words themselves, how words connect to each other (syntax, grammar), words in communication contexts (pragmatics), connotation, and so on. We always want a hearer to understand the same meaning as the hearers of the source text did. That, essentially, is what Nida was saying.

But dynamic equivalence, as a concept, puts an overly narrow focus upon the response of hearers, perhaps sometimes at the expense of other factors which are also crucial to adequate Bible translation, such as accuracy of the message and the uniqueness of the original historical setting. The term dynamic equivalence has often been mischaracterized. Because of this, and also because most translators recognize that translation adequacy calls for attention to a multiplicity of factors, most translators today do not use the term. Instead, as they characterize how it is often necessary to use different FORMS of the target language to encode the same MEANING as the original, they prefer terms which are easier to understand, such as idiomatic translation, meaning-based translation, closest natural equivalent, and functional equivalence. A lay term used by some people is thought-for-thought translation. None of these terms is exactly the same as dynamic equivalence, although, like dynamic equivalence, all focus upon the preservation of meaning, rather than form, when there is tension between the two.

Functional equivalence

Functional equivalence translation is a subcategory of what many call idiomatic translation. A newer theory of translation is function-equivalent translation (often inaccurately called paraphrasing). In this type of translation, the translator tries to make the English function the same way the original language functioned for its original readers. However, in trying to make the translation easy to read, the translator can omit concepts from the original text that do not seem to have corresponding modern English equivalents. Such a translation can produce a readable text, but that text can convey the wrong meaning or not enough meaning. Furthermore, function-equivalent translations attempt to make some books readable on levels at which they were not intended. For instance, Song of Songs was not written for children, and Paul's letter to the Ephesians is very sophisticated and not intended for novices.
Foreign literature translation, original text and translated text (key passages selected for translation): Organizational Learning and Firm Innovation Output. Source: International Journal of Innovation Studies, Volume 4, Issue 1, March 2020, Pages 16-26. Length of the Chinese translation: over 6,000 characters. English original:

Influence of organization learning on innovation output in manufacturing firms in Kenya
Isaac Gachanja, Stephen Nga'nga', Lucy Kiganane

Abstract
Knowledge entrepreneurship is increasingly becoming important in driving innovation for high levels of competitiveness. The purpose of this study was to investigate the relationship between Organization Learning (OL) and Innovation Output (IO) for improved performance in manufacturing firms in Kenya. The theoretical underpinnings of this study are Schumpeter's (1934) theory of innovation and Gleick's (1987) complexity theory. The methodology used was mixed-method research, because it provides a more holistic understanding of a thematic area. The research design used was a cross-sectional design, because it allows for making observations on different characteristics that exist within a group at a particular time. The target population was manufacturing firms across the country. A multi-stage sampling strategy was used to sample 303 respondents from 101 firms. Primary and secondary data were used to collect both qualitative and quantitative data. A questionnaire, an interview schedule and a checklist of key informants were used to collect data. Content validity was used to ascertain the credibility of the research procedure, and an internal consistency technique was used to test for reliability. Correlation and linear regression were used to determine the relationship between OL and IO. Work disruptions were avoided by making prior arrangements and appointments. The findings indicate that OL has a significant influence on IO. It is recommended that lifelong learning, management support and risk tolerance should be encouraged to improve creativity. High creativity is important in raising the capacity to integrate internal and external knowledge for greater levels of IO. Further research should be carried out to find out how customer and supplier information can be utilized to enrich OL.

Keywords: Organization learning, Innovation output, Competitiveness, Lifelong learning, Risk tolerance

1. Introduction
Innovation utilizes knowledge, which is important in raising creativity and capacity development for enterprise prosperity. Many countries have developed their National Innovation Systems (NIS) and have a comprehensive innovation policy framework, but most firms have not leveraged these opportunities to raise their Innovation Output (IO). This has been contributed to by the disjointed relationship between research institutions and industry. The situation has been brought about by a multiplicity of new institutions that have become a barrier to knowledge sharing; thus firms are shying away from intense collaboration with research institutions and universities, which has led to declining knowledge absorption, creation and diffusion, which are key components of innovation performance (Cornell University, INSEAD & WIPO, 2015). The situation can be addressed by rallying firms to develop their knowledge capacities by focusing on Organization Learning (OL) for greater IO.

Previous researchers have not managed to unravel the puzzle of how to transform knowledge into innovation output that improves competitiveness in the manufacturing sector. This has been partly attributed to the failure to incorporate local knowledge in the innovation process (Sambuli & Whitt, 2017).
The complexity of blending internal and external knowledge and reconfiguring new insight for greater innovation has also not been adequately addressed in Kenya. Furthermore, the linkages within the innovation system are weak, and the manufacturing sector has the highest rate of abandoned innovation activities, at about 40% (Ongwae, Mukulu, & Odhiambo, 2013). The quagmire of striking a balance between sharing knowledge, guarding against knowledge leakages, and diffusing the tension and mistrust that emanate from competition while interacting with the NIS to improve IO has not been resolved. The study will attempt to address these gaps by investigating the influence of OL on IO.

The objective of the study is therefore to examine the influence of OL on IO in manufacturing firms in Kenya. The null hypothesis is that OL has no significant influence on IO in manufacturing firms in Kenya, while the alternative hypothesis is that OL has a significant influence on IO. The hypothesis will be subjected to a test. The study will contribute to the value of OL for firms' competitiveness. It will provide insights on how firms can blend internal and external knowledge in the process of OL to improve IO, which contributes to their competitive advantage.

2. Literature review
This section begins with a review of previous empirical work on OL and IO. Theoretical underpinnings are then discussed, leading to the development of a conceptual framework.

2.1. Innovation output
Innovation output is the end product of an innovation activity. The end products of innovation are new products, new processes, new enterprises and new markets. Andreeva and Kianto (2011) believe that IO is the degree to which enterprises develop novelty in terms of processes, management and marketing. Innovation output can therefore be defined as the increase in novel products, creative processes, the development of new ventures and the discovery of new markets.

Innovation output depicts the result of an innovation effort. It can be measured as the summation of new products created as a result of innovation, patents acquired, new innovation processes and unique enterprises created to cater for innovation activities. Innovation output can be enhanced by improving the innovation capacity of a firm.

Innovation capacity is paramount in realizing and identifying the need for change, thus leading to new ideas. It provides the capability of seizing opportunities (Teece, 2009), leading to new business configurations which help in attaining and maintaining high competitive levels (Saenz & Perez-Bouvier, 2014). Innovation capacity can be optimized through OL, which leads to continuous improvement in firm performance, particularly in the manufacturing sector. Manufacturing firms are faced with a myriad of challenges, such as the ever-changing tastes and preferences of customers, rapid change in technology, increasing competition, a dynamic operating environment and changing global trends. This calls for OL so that firms can adequately navigate the turbulence.

2.2. Organization Learning
Organization Learning is one of the key aspects of knowledge entrepreneurship, which is crucial in determining innovation output. Desai (2010) defined OL as the process of acquiring, absorbing, sharing, modifying and transferring knowledge within an entity. The context in which OL is used in this study is as a mechanism for discovering new ways of improving operations through knowledge acquisition, absorption, sharing and transfer for improved performance.
The salient feature that distinguishes OL from a learning organization is its diversity and extensiveness. This forms the basis for generating internal knowledge that is peculiar to an organization.

The capacities developed in OL provide an opportunity for the integration of internal and external knowledge. This requires collective input and knowledge sharing (Granerud & Rocha, 2011). Organization learning therefore involves the development of internal knowledge capacities that integrate external knowledge from other organizations within and outside the sector. This is beneficial to the firm because it allows continuous improvement, adaptability and value addition. Granerud and Rocha (2011) argue that OL is the foundation from which the base of improved practices is laid.

Organization Learning can be measured in different ways. Jain and Moreno (2015) posited that the factors attributing to OL are collaboration, teamwork, performance management, autonomy and freedom, reward, recognition and achievement orientation. The Global Innovation Index uses knowledge absorption, creation, impact and diffusion, which can be measured by the level of royalties, patents, the number of new firms, royalty and license fee receipts, or web presence respectively, in measuring OL (Cornell University, INSEAD & WIPO, 2016). Tohidi and Jabbari (2012) believe that the strategic elements of OL are experimentation, knowledge transfer, developing learning capacity, teamwork and problem-solving. Chiva and Alegre (2007) are of the opinion that the development of learning capacity can be enhanced by empowerment to generate new ideas, managerial commitment to support creativity, continued learning, openness and interaction with the external environment, and risk tolerance.

The study thus adopted and improved on the measures of OL used by Chiva and Alegre (2007) and Tohidi and Jabbari (2012), because these parameters are more comprehensive in measuring OL. This was done by incorporating openness and knowledge integration into OL. The measures that were used to measure OL in this study are therefore liberty of experimentation, empowerment to generate new ideas, managerial commitment to support creativity, knowledge transfer and integration, openness and interaction with the external environment, continued learning and risk tolerance.

Nevertheless, absorptive capacity is important in OL because it improves the ability of the human resources within the firm to acquire and assimilate new and external knowledge for improved performance. A Supportive Learning Environment (SLE) increases the absorptive capacity of the firm, thus enhancing OL, while a turbulent learning environment lowers OL (Cohen & Levinthal, 1990). The SLE therefore moderates the influence of OL on IO. The SLE provides a conducive atmosphere for employees to engage with each other and with the management freely and constructively, which may lead to a review of firms' operations and processes (Garvin, Edmondson, & Gino, 2008).

The appropriate SLE promotes OL and enhances the innovative ability of a firm. The parameters for measuring SLE are the availability of accelerators and incubators, trade organization support and business services (Majava, Leviakangas, Kinnumen, Kess, & Foit, 2016). These parameters facilitate dynamic networking within an economy and accelerate technological spillover, which is important in bolstering innovation.

2.3. Relationship between organization learning and innovation output
There have been several attempts to highlight the relationship between OL and IO.
To begin with, Hung, Lien, Yang, Wu, and Kuo (2011) found that an analysis of an OL and IO model showed goodness of fit and a significantly positive relationship, thus promoting a culture of sharing and trust which is necessary for enterprise success. However, there is a gap in linking the learning process and IO in empirical studies (Lau & Lo, 2015). The study addressed this gap by demonstrating which aspects of OL influence IO and which do not. Calisir, Gumussoy, and Guzelsov (2013) found that open-mindedness in OL has a positive association with innovation output. Open-mindedness is one of the measures of OL, which is incorporated in this study as openness. Zhou, Hu, and Shi (2015) found that OL significantly influences innovation output. Furthermore, Ghasemzadeh, Nazari, Farzaneh, and Mehralian (2019) found a significant influence of OL on IO. This study replicated those studies in manufacturing firms in Kenya. The study was anchored on two theories.

2.4. Theoretical underpinnings
The first theory that is relevant to this study is Schumpeter's (1934) theory of innovation. The theory is of the view that the transformation of the economy comes through innovations, which bring about creative destruction, leading to improved performance. The dimensions of this theory are the creation of novelties, which include new products, new processes, new enterprises, new raw materials and new markets. However, the theory failed to address the organizational capacity required to innovate. This necessitated the adoption of a theory that has a more holistic approach and takes cognizance of OL as an input in the innovation process. This can be addressed by interrogating complexity theory.

The second theory that is related to this study is Gleick's (1987) complexity theory. The theory recognizes the intricacies involved in developing innovation capacity. It advocates for an emergent learning that transcends from the industrial era to the knowledge era, producing ideas that provide a complex interplay of different interactions. The complex interactions of internal and external knowledge bring about OL, which is crucial in enhancing IO. This led to the development of a conceptual framework.

3. Methodology
A cross-sectional design was used because it helps in making observations on characteristics that exist within a group. The target population was 828 manufacturing firms. The sampling frame was the companies listed by the Kenya Association of Manufacturers as of 2018.

A multi-stage sampling strategy was used. Purposive sampling was used to select the major industrial counties in Kenya. Random sampling was then applied to sample 101 firms from the major industrial counties according to their proportionate representation in terms of location and sub-sector. Purposive sampling was later used to select 3 respondents from each of the sampled 101 firms. The respondents comprised section heads from operations, marketing and innovation. The total sample size of respondents was therefore 303. Primary and secondary data were used to collect both qualitative and quantitative data. A questionnaire with Likert-scale items on OL and IO, an interview schedule and a checklist of key informants were used to collect data.

Items with a VIF of more than 10 could have been deleted, since that is the recommended upper limit (Creswell, 2014), but in this case no item was deleted since all VIFs were less than 10.
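To make the measurement checks above concrete, here is a minimal, hypothetical sketch in Python of the two diagnostics the paper mentions: internal consistency of the Likert-scale items (Cronbach's alpha, the usual internal consistency technique) and multicollinearity screening via variance inflation factors with the cut-off of 10. The item names and the simulated responses are illustrative assumptions, not the study's actual instrument or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated 5-point Likert responses for illustrative OL items,
# driven by a common latent trait so the items are correlated.
rng = np.random.default_rng(1)
latent = rng.normal(size=(303, 1))
noise = rng.normal(scale=0.8, size=(303, 4))
raw = 3 + latent + noise
items = pd.DataFrame(
    np.clip(np.rint(raw), 1, 5),
    columns=["experimentation", "empowerment", "managerial_support", "risk_tolerance"],
)

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = scale.shape[1]
    item_var_sum = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))

# Variance inflation factors: values above 10 would flag an item for possible removal.
X = sm.add_constant(items)
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))
```

In this sketch an alpha of roughly 0.7 or higher would be read as acceptable internal consistency, while all VIFs below 10 would mean no item needs to be dropped, matching the outcome the authors report.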
This multicollinearity test was important in authenticating the findings. The validity of the data collected was tested through the content validity method, whereby the criteria used to assess the quality of the procedure and results, so as to enhance credibility, transferability, dependability and confirmability, were addressed by constructing the measuring scale in line with the literature and pre-testing the research instruments during piloting. The questionnaire was designed in line with the constructs and parameters of OL and IO as brought out in the literature review.

4. Results and discussions

This implies that more new products were manufactured as opposed to other forms of novelty. It means that the dominant form of innovation in manufacturing firms in Kenya is the creation of new products relative to other forms of innovation such as new processes and enterprises. However, the maximum number of new products was 13, while those that were patented were only 5, meaning that the majority of new products were not patented. Manufacturing firms should therefore strive to register their patent rights to avoid the escalation of counterfeits. The notable new products brought about by innovation were nitrocellulose paints, hydro-pools, computerized painting machines, nova legs, sodium hypochlorite, Clorox bleach, adjustable pallet racking and castellated beams for constructing cranes. New products also had a higher standard deviation compared to other forms of novelty. This implies that there was a wide spread of new products created within the manufacturing firms, hence a low level of uniformity in the new products created and thus a low degree of homogeneity across the firms.

This implies that there were innovation activities that generated innovation output. It means that the outcome of innovation activities was observable and can be quantified. The standard deviation of 6.2 implies that there was a wide spread within the manufacturing firms. This means that there was a low level of uniformity in innovation output across manufacturing firms and thus a low degree of homogeneity in the sample.

This implies that there were more novelties created in the plastics and rubber sub-sector than in any other. It means that, on average, there were more new products, patents, new processes and new enterprises created in the textile and apparels sub-sector. The highest standard deviation of 7.250 was recorded in the vehicle assemblers and accessories sub-sector. This implies that the spread of novelties was widest in vehicle assemblers and accessories compared with other sub-sectors. This means that there was a high variety of IO produced and thus a low level of uniformity in novelties in the vehicle assemblers and accessories sub-sector, and hence a low degree of homogeneity.

The highest innovation output in the plastics and rubber sub-sector implies that the sector has more innovation activities than other sub-sectors, but innovation efforts were concentrated more on new products. The highest innovation intensity in the food and beverages sub-sector means that there were concerted innovation efforts spread across the four novelties and thus a diversified IO. This is important because diversified innovation mitigates the risk of over-reliance on a single or few innovations that can be rendered obsolete by the emergence of other superior innovations.

The study has thus established that OL has a significant influence on IO in manufacturing firms in Kenya.
Manufacturing firms should, therefore, inculcate a culture of OL for greater IO and improved competitiveness. The findings are consistent with those of Hung et al. (2011), who found that OL has a significant influence on IO. Manufacturing firms should, therefore, embrace OL to utilize scarce resources to provide value and deliver solutions to society sustainably. The findings are also in tandem with those of Calisir et al. (2013), who found that firms with organizational practices that promote OL have higher value and IO levels. Higher IO is an indicator that a firm is generating novelties according to the changing needs of the market and hence is likely to be competitive, leading to improved performance. The findings also concur with those of Hofstetter and Harpez (2015), who found that OL has an immense influence on a firm's IO. Increased IO can lead to improved competitiveness of a firm within the industry, the economy, the region and the global market. The findings are also in line with Cassiman and Veugelers (2006); Chen, Vanhaverbeke, and Du (2016); Radicic and Balavac (2019); and Antonelli and Fassio (2016), who found that internal and external learning has a positive influence on IO. It is therefore imperative that OL is promoted in the manufacturing sector in Kenya for greater IO and enhanced competitiveness locally and internationally.

5. Conclusions and recommendations

It is concluded that the various aspects of OL, which include liberty of experimentation, empowerment to generate new ideas, managerial commitment to support creativity, risk tolerance, knowledge transfer and integration, openness and interaction with the external environment, and continuous learning, contribute to the development of new products, patents acquired, new processes and new enterprises. It is also observed that SLE has a significant moderating effect between OL and IO. Management in the manufacturing sector should, therefore, nurture and encourage OL for greater IO. Leaders in manufacturing firms should give their employees the freedom to come up with new ideas and support them in trying them out, while at the same time being patient enough to accommodate the failures that come with trials. They should also be receptive to divergent viewpoints and encourage problem solving and knowledge transfer. Leaders in manufacturing firms should also set up robust Research and Development (R&D) by developing policies that will enhance the assimilation of external with internal knowledge for a higher capacity to innovate. Policy makers and other relevant stakeholders, such as government agencies, research institutions and investor lobby groups and associations, should work jointly to address the bottlenecks in the SLE.

The study enriches the theoretical understanding of how OL influences IO by contributing new knowledge on how manufacturing firms can improve their competitiveness in Kenya and other parts of sub-Saharan Africa. It is recommended that lifelong learning be encouraged because it improves creativity and develops the capacity to integrate internal and external knowledge, which increases the level of IO. Management should also create an enabling culture that promotes creativity and risk tolerance to enhance IO. Manufacturing firms in Kenya should also set clear policies on R&D to enhance OL for increased innovation activities and thus higher IO. Further research should be carried out to determine ways in which customer and supplier information can be utilized to enrich OL.
Customers and suppliers are major stakeholders in manufacturing firms, and their input in OL is essential in improving IO. Further study should also be carried out to examine how networking influences IO. The challenges of mitigating the risks that come with experimentation and failure tolerance are also fertile ground for further study.

The Influence of Organization Learning on Innovation Output in Manufacturing Firms in Kenya
Isaac Gachanja, Stephen Nganga, Lucy Kiganane
Abstract: Knowledge entrepreneurship plays an increasingly important role in driving innovation for improved competitiveness.
Generic Problem Statement

Introduction

In the field of research, it is often necessary to identify and define problems in order to propose effective solutions. The generic problem statement is a crucial tool that enables researchers to clearly articulate the problem they are addressing and identify the gaps in existing knowledge. This statement acts as a starting point for research and helps guide the subsequent steps.

Purpose of the Generic Problem Statement

The generic problem statement serves multiple purposes, which include: 1. Clearly stating the problem: the statement provides a concise description of the problem under investigation. 2. Identifying the research gap: by highlighting the limitations of existing knowledge and practices, the statement helps to identify the research gap that needs to be filled. 3. Defining the scope of the research: the statement delineates the boundaries within which the research will be conducted. 4. Guiding the research process: the statement acts as a compass, directing researchers towards the specific questions they need to address and the objectives they need to achieve.

Components of a Generic Problem Statement

A well-constructed generic problem statement typically consists of the following components:

1. Problem Statement
This section succinctly describes the problem that is being investigated. It is important to clearly define the context and scope of the problem without going into unnecessary detail. The problem statement should be specific, measurable, achievable, realistic, and time-bound.

2. Research Gap
In this section, the existing knowledge and practices related to the identified problem are discussed. The goal is to highlight the gaps and limitations in current understanding or solutions. This helps to establish the significance and relevance of the proposed research.

3. Objectives
The objectives section outlines the specific goals that the research aims to achieve. These objectives should be aligned with the problem statement and should address the identified research gap. Objectives should be SMART (Specific, Measurable, Achievable, Relevant, and Time-bound) to ensure clarity and focus.

4. Research Questions
Research questions provide a framework for investigation and guide the direction of the research. They are derived from the problem statement and objectives and help to organize the research process. Research questions should be clear, concise, and actionable.

5. Methodology
The methodology section outlines the approach and methods that will be employed to address the research questions and achieve the objectives. It describes the data collection, analysis techniques, and tools that will be used. The methodology should be appropriate and rigorous to ensure the validity and reliability of the research findings.

6. Expected Outcomes
This section presents the anticipated outcomes or outputs of the research. It may include expected findings, insights, recommendations, or practical applications. This helps to demonstrate the potential value and impact of the research.

Conclusion

The generic problem statement is an essential component of research as it provides a clear and concise description of the problem, identifies the research gap, and guides the research process. By incorporating the problem statement, research questions, objectives, methodology, and expected outcomes, researchers can focus their efforts and contribute to the body of knowledge in their respective fields.
It serves as a powerful tool for both researchers and readers to understand the purpose, scope, and significance of the research.
Foreign literature: Adaptive Dynamic Programming: An Introduction

Abstract: In this article, we introduce some recent research trends within the field of adaptive/approximate dynamic programming (ADP), including the variations on the structure of ADP schemes, the development of ADP algorithms and applications of ADP schemes. For ADP algorithms, the point of focus is that iterative algorithms of ADP can be sorted into two classes: one class is the iterative algorithm with an initial stable policy; the other is the one without the requirement of an initial stable policy. It is generally believed that the latter has less computation at the cost of losing the guarantee of system stability during the iteration process. In addition, many recent papers have provided convergence analysis associated with the algorithms developed. Furthermore, we point out some topics for future studies.

Introduction

As is well known, there are many methods for designing stable control for nonlinear systems. However, stability is only a bare minimum requirement in a system design. Ensuring optimality guarantees the stability of the nonlinear system. Dynamic programming is a very useful tool for solving optimization and optimal control problems by employing the principle of optimality. In [16], the principle of optimality is expressed as: "An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision." There are several spectrums of dynamic programming. One can consider discrete-time systems or continuous-time systems, linear systems or nonlinear systems, time-invariant systems or time-varying systems, deterministic systems or stochastic systems, etc.

We first take a look at nonlinear discrete-time (time-varying) dynamical (deterministic) systems. Time-varying nonlinear systems cover most application areas, and discrete time is the basic consideration for digital computation. Suppose that one is given a discrete-time nonlinear (time-varying) dynamical system

x(k+1) = F[x(k), u(k), k],  k = 0, 1, 2, ...                                   (1)

where x ∈ R^n represents the state vector of the system, u ∈ R^m denotes the control action and F is the system function. Suppose that one associates with this system the performance index (or cost)

J[x(i), i] = Σ_{k=i}^{∞} γ^{k−i} U[x(k), u(k), k]                              (2)

where U is called the utility function and γ is the discount factor with 0 < γ ≤ 1. Note that the function J is dependent on the initial time i and the initial state x(i), and it is referred to as the cost-to-go of state x(i). The objective of the dynamic programming problem is to choose a control sequence u(k), k = i, i+1, ..., so that the function J (i.e., the cost) in (2) is minimized. According to Bellman, the optimal cost from time k is equal to

J*[x(k)] = min_{u(k)} { U[x(k), u(k), k] + γ J*[x(k+1)] }.                     (3)

The optimal control u*(k) at time k is the u(k) which achieves this minimum, i.e.,

u*(k) = arg min_{u(k)} { U[x(k), u(k), k] + γ J*[x(k+1)] }.                    (4)

Equation (3) is the principle of optimality for discrete-time systems. Its importance lies in the fact that it allows one to optimize over only one control vector at a time by working backward in time.

In the nonlinear continuous-time case, the system can be described by

dx(t)/dt = F[x(t), u(t), t],  t ≥ t_0.                                         (5)

The cost in this case is defined as

J[x(t)] = ∫_{t}^{∞} U[x(τ), u(τ)] dτ.                                          (6)

For continuous-time systems, Bellman's principle of optimality can be applied, too. The optimal cost J*(x_0) = min_u J(x_0, u(t)) will satisfy the Hamilton-Jacobi-Bellman equation

−∂J*(x(t), t)/∂t = min_{u(t)} { U[x(t), u(t), t] + (∂J*(x(t), t)/∂x(t))^T F[x(t), u(t), t] }.   (7)

Equations (3) and (7) are called the optimality equations of dynamic programming and are the basis for the implementation of dynamic programming.
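To make the backward-in-time character of equation (3) concrete, the following is a minimal sketch of finite-horizon dynamic programming on a small discretized problem; the scalar dynamics, quadratic utility, grids and horizon are illustrative assumptions, not part of the article.

```python
# Minimal sketch of the backward recursion in equation (3) over a finite horizon.
# The scalar dynamics, quadratic utility, grids and horizon are illustrative assumptions.
import numpy as np

gamma = 0.95                                   # discount factor, 0 < gamma <= 1
states = np.linspace(-2.0, 2.0, 41)            # discretized scalar state x
controls = np.linspace(-1.0, 1.0, 21)          # discretized scalar control u
N = 50                                         # horizon length

F = lambda x, u: 0.9 * x + u                   # example system function x(k+1) = F[x(k), u(k)]
U = lambda x, u: x**2 + u**2                   # example utility function

J = np.zeros_like(states)                      # terminal cost J_N = 0
for k in range(N - 1, -1, -1):                 # work backward in time
    J_new = np.empty_like(J)
    for i, x in enumerate(states):
        # one-step lookahead: U(x, u) + gamma * J(F(x, u)) over all candidate controls
        x_next = F(x, controls)
        q = U(x, controls) + gamma * np.interp(x_next, states, J)
        J_new[i] = q.min()                     # Bellman minimum (3); the argmin gives u*(k) as in (4)
    J = J_new

print("approximate optimal cost-to-go at x = 0:", np.interp(0.0, states, J))
```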
In the above, if the function F in (1) or (5) and the cost function J in (2) or (6) are known, the solution for u(k) becomes a simple optimization problem. If the system is modeled by linear dynamics and the cost function to be minimized is quadratic in the state and control, then the optimal control is a linear feedback of the states, where the gains are obtained by solving a standard Riccati equation [47]. On the other hand, if the system is modeled by nonlinear dynamics or the cost function is nonquadratic, the optimal state feedback control will depend upon solutions to the Hamilton-Jacobi-Bellman (HJB) equation [48], which is generally a nonlinear partial differential equation or difference equation. However, it is often computationally untenable to run true dynamic programming due to the backward numerical process required for its solutions, i.e., as a result of the well-known "curse of dimensionality" [16], [28]. In [69], three curses are displayed in resource management and control problems to show that the cost function J, which is the theoretical solution of the Hamilton-Jacobi-Bellman equation, is very difficult to obtain, except for systems satisfying some very good conditions. Over the years, progress has been made to circumvent the "curse of dimensionality" by building a system, called a "critic", to approximate the cost function in dynamic programming (cf. [10], [60], [61], [63], [70], [78], [92], [94], [95]). The idea is to approximate dynamic programming solutions by using a function approximation structure such as neural networks to approximate the cost function.

The Basic Structures of ADP

In recent years, adaptive/approximate dynamic programming (ADP) has gained much attention from many researchers in order to obtain approximate solutions of the HJB equation, cf. [2], [3], [5], [8], [11]–[13], [21], [22], [25], [30], [31], [34], [35], [40], [46], [49], [52], [54], [55], [63], [70], [76], [80], [83], [95], [96], [99], [100]. In 1977, Werbos [91] introduced an approach for ADP that was later called adaptive critic designs (ACDs). ACDs were proposed in [91], [94], [97] as a way of solving dynamic programming problems forward-in-time. In the literature, there are several synonyms used for "Adaptive Critic Designs" [10], [24], [39], [43], [54], [70], [71], [87], including "Approximate Dynamic Programming" [69], [82], [95], "Asymptotic Dynamic Programming" [75], "Adaptive Dynamic Programming" [63], [64], "Heuristic Dynamic Programming" [46], [93], "Neuro-Dynamic Programming" [17], "Neural Dynamic Programming" [82], [101], and "Reinforcement Learning" [84].

Bertsekas and Tsitsiklis gave an overview of neuro-dynamic programming in their book [17]. They provided the background, gave a detailed introduction to dynamic programming, discussed the neural network architectures and methods for training them, and developed general convergence theorems for stochastic approximation methods as the foundation for the analysis of various neuro-dynamic programming algorithms. They provided the core neuro-dynamic programming methodology, including many mathematical results and methodological insights. They suggested many useful methodologies for applications of neuro-dynamic programming, such as Monte Carlo simulation, on-line and off-line temporal difference methods, the Q-learning algorithm, optimistic policy iteration methods, Bellman error methods, approximate linear programming, and approximate dynamic programming with a cost-to-go function.
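Returning to the linear-quadratic special case mentioned above, where the optimal control is a linear state feedback whose gain comes from a Riccati equation, the following is a minimal sketch; the system and weighting matrices are arbitrary example values, not taken from the article.

```python
# Minimal sketch of the linear-quadratic special case discussed above:
# for x(k+1) = A x(k) + B u(k) and a cost that sums x'Qx + u'Ru, the optimal
# control is u(k) = -K x(k), with K obtained from the discrete Riccati equation.
# The matrices A, B, Q, R below are arbitrary example values.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting

P = solve_discrete_are(A, B, Q, R)                    # discrete algebraic Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # optimal feedback gain

x = np.array([1.0, 0.0])
for _ in range(5):
    u = -K @ x                                        # linear state feedback
    x = A @ x + B @ u
print("state after 5 steps:", x)
```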
A particularly impressive success that greatly motivated subsequent research was the development of a backgammon-playing program by Tesauro [85]. Here a neural network was trained to approximate the optimal cost-to-go function of the game of backgammon by using simulation, that is, by letting the program play against itself. Unlike chess programs, this program did not use lookahead of many steps, so its success can be attributed primarily to the use of a properly trained approximation of the optimal cost-to-go function.

To implement the ADP algorithm, Werbos [95] proposed a means to get around this numerical complexity by using "approximate dynamic programming" formulations. His methods approximate the original problem with a discrete formulation. The solution to the ADP formulation is obtained through a neural network based adaptive critic approach. The main idea of ADP is shown in Fig. 1. He proposed two basic versions, which are heuristic dynamic programming (HDP) and dual heuristic programming (DHP).

HDP is the most basic and widely applied structure of ADP [13], [38], [72], [79], [90], [93], [104], [106]. The structure of HDP is shown in Fig. 2. HDP is a method for estimating the cost function. Estimating the cost function for a given policy only requires samples from the instantaneous utility function U, while models of the environment and the instantaneous reward are needed to find the cost function corresponding to the optimal policy.

In HDP, the output of the critic network is Ĵ, which is the estimate of J in equation (2). This is done by minimizing the following error measure over time

||E_h|| = Σ_k E_h(k) = Σ_k [ Ĵ(k) − U(k) − γ Ĵ(k+1) ]^2                        (8)

where Ĵ(k) = Ĵ[x(k), u(k), k, W_C] and W_C represents the parameters of the critic network. When E_h = 0 for all k, (8) implies that

Ĵ(k) = U(k) + γ Ĵ(k+1).                                                        (9)

Dual heuristic programming is a method for estimating the gradient of the cost function, rather than J itself. To do this, a function is needed to describe the gradient of the instantaneous cost function with respect to the state of the system. In the DHP structure, the action network remains the same as the one for HDP, but the second network, which is called the critic network, has the costate as its output and the state variables as its inputs. The critic network's training is more complicated than that in HDP since we need to take into account all relevant pathways of backpropagation. This is done by minimizing the following error measure over time

||E_D|| = Σ_k E_D(k) = Σ_k [ ∂Ĵ(k)/∂x(k) − ∂U(k)/∂x(k) − γ ∂Ĵ(k+1)/∂x(k) ]^2   (10)

where ∂Ĵ(k)/∂x(k) = ∂Ĵ[x(k), u(k), k, W_C]/∂x(k) and W_C represents the parameters of the critic network. When E_D = 0 for all k, (10) implies that

∂Ĵ(k)/∂x(k) = ∂U(k)/∂x(k) + γ ∂Ĵ(k+1)/∂x(k).                                   (11)
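To illustrate the error measure in (8), the following is a minimal sketch of critic training in the HDP spirit, using a simple linear-in-features critic and gradient descent on the squared temporal-difference-style error; the feature map, learning rate and toy trajectory are illustrative assumptions rather than the neural-network architecture used in the article.

```python
# Minimal sketch of HDP-style critic training on the error in (8):
#   e(k) = J_hat(k) - [ U(k) + gamma * J_hat(k+1) ]
# A linear-in-features critic stands in for the critic network; the feature map,
# learning rate, policy and toy trajectory are illustrative assumptions.
import numpy as np

gamma, lr = 0.95, 0.05
phi = lambda x: np.array([1.0, x, x**2])        # hypothetical feature map for the critic
W_C = np.zeros(3)                               # critic parameters

F = lambda x, u: 0.9 * x + u                    # example system function
U = lambda x, u: x**2 + u**2                    # example utility function
policy = lambda x: -0.5 * x                     # fixed example policy being evaluated

x = 1.0
for k in range(2000):
    u = policy(x)
    x_next = F(x, u)
    J_k = W_C @ phi(x)                          # J_hat(k)
    J_next = W_C @ phi(x_next)                  # J_hat(k+1), treated here as a fixed target
    target = U(x, u) + gamma * J_next           # right-hand side of (9)
    W_C += lr * (target - J_k) * phi(x)         # gradient step reducing [J_hat(k) - target]^2
    x = x_next if abs(x_next) > 1e-3 else 1.0   # restart the toy trajectory near the origin

print("estimated cost-to-go at x = 1:", W_C @ phi(1.0))
```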
Theoretical Developments

In [82], Si et al. summarize the cross-disciplinary theoretical developments of ADP, give an overview of DP and ADP, and discuss their relations to artificial intelligence, approximation theory, control theory, operations research, and statistics. In [69], Powell shows how ADP, when coupled with mathematical programming, can solve (approximately) deterministic or stochastic optimization problems that are far larger than anything that could be solved using existing techniques, and shows directions for improving ADP.

In [95], Werbos further gave two other versions called "action-dependent critics," namely, ADHDP (also known as Q-learning [89]) and ADDHP. In these two ADP structures, the control is also an input to the critic networks. In 1997, Prokhorov and Wunsch [70] presented more algorithms according to ACDs. They discussed the design families of HDP, DHP, and globalized dual heuristic programming (GDHP). They suggested some new improvements to the original GDHP design, which promise to be useful for many engineering applications in the areas of optimization and optimal control. Based on one of these modifications, they present a unified approach to all ACDs. This leads to a generalized training procedure for ACDs. In [26], a realization of ADHDP was suggested: a least squares support vector machine (SVM) regressor has been used for generating the control actions, while an SVM-based tree-type neural network (NN) is used as the critic.

The GDHP or ADGDHP structure minimizes the error with respect to both the cost and its derivatives. While it is more complex to do this simultaneously, the resulting behavior is expected to be superior. Accordingly, in [102], GDHP serves as a reconfigurable controller to deal with both abrupt and incipient changes in the plant dynamics due to faults. A novel fault tolerant control (FTC) supervisor is combined with GDHP for the purpose of improving the performance of GDHP for fault tolerant control. When the plant is affected by a known abrupt fault, the new initial conditions of GDHP are loaded from a dynamic model bank (DMB). On the other hand, if the fault is incipient, the reconfigurable controller maintains performance by continuously modifying itself without supervisor intervention. It is noted that the three networks used to implement the GDHP are trained online by utilizing two distinct networks to implement the critic. The first critic network is trained at every iteration, while the second one is updated with a copy of the first one at a given period of iterations.

All the ADP structures can realize the same function, that is, to obtain the optimal control policy, but the computation precision and running time differ from each other. Generally speaking, the computation burden of HDP is low but the computation precision is also low, while GDHP has better precision but its computation takes longer; a detailed comparison can be seen in [70]. In [30], [33] and [83], the schematic of direct heuristic dynamic programming is developed. Using the approach of [83], the model network in Fig. 1 is not needed anymore. Reference [101] makes significant contributions to model-free adaptive critic designs. Several practical examples are included in [101] for demonstration, including the single inverted pendulum and the triple inverted pendulum. A reinforcement learning-based controller design for nonlinear discrete-time systems with input constraints is presented in [36], where the nonlinear tracking control is implemented with filtered tracking error using direct HDP designs. Similar work can also be seen in [37]. Reference [54] is also about model-free adaptive critic designs. Two approaches for the training of the critic network are provided in [54]: a forward-in-time approach and a backward-in-time approach. Fig. 4 shows the diagram of the forward-in-time approach. In this approach, we view Ĵ(k) in (8) as the output of the critic network to be trained and choose U(k) + γĴ(k+1) as the training target. Note that Ĵ(k) and Ĵ(k+1) are obtained using state variables at different time instances. Fig. 5 shows the diagram of the backward-in-time approach. In this approach, we view Ĵ(k+1) in (8) as the output of the critic network to be trained and choose (Ĵ(k) − U(k))/γ as the training target. The training approach of [101] can be considered a backward-in-time approach. In Fig. 4 and Fig. 5, x(k+1) is the output of the model network.
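A minimal sketch of the two critic-training targets just described (forward-in-time versus backward-in-time); `j_hat` stands for any critic approximator, and the function names are hypothetical.

```python
# Minimal sketch of the two training targets described above for the critic in (8).
# `j_hat_*` values come from any critic approximator; the function names are hypothetical.

def forward_in_time_target(U_k: float, j_hat_next: float, gamma: float) -> float:
    """Train J_hat(k) toward U(k) + gamma * J_hat(k+1)."""
    return U_k + gamma * j_hat_next

def backward_in_time_target(U_k: float, j_hat_k: float, gamma: float) -> float:
    """Train J_hat(k+1) toward (J_hat(k) - U(k)) / gamma."""
    return (j_hat_k - U_k) / gamma
```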
An improvement and modification of the two-network architecture, called the "single network adaptive critic (SNAC)", was presented in [65], [66]. This approach eliminates the action network. As a consequence, the SNAC architecture offers three potential advantages: a simpler architecture, a lower computational load (about half that of the dual network algorithms), and no approximation error from the action network, since it is eliminated. The SNAC approach is applicable to a wide class of nonlinear systems where the optimal control (stationary) equation can be explicitly expressed in terms of the state and the costate variables. Most of the problems in aerospace, automobile, robotics, and other engineering disciplines can be characterized by nonlinear control-affine equations that yield such a relation. SNAC-based controllers yield excellent tracking performance in applications to microelectromechanical systems, a chemical reactor, and high-speed reentry problems. Padhi et al. [65] have proved that for linear systems (where the mapping between the costate at stage k+1 and the state at stage k is linear), the solution obtained by the algorithm based on the SNAC structure converges to the solution of the discrete Riccati equation.

Translation: A Survey of Adaptive Dynamic Programming

Abstract: Adaptive dynamic programming (ADP) is an emerging near-optimal method in the field of optimal control and a current research focus of the international optimization community. ADP uses function approximation structures to approximate the solution of the Hamilton-Jacobi-Bellman (HJB) equation and employs offline iteration or online updating to obtain an approximately optimal control policy for the system, so it can effectively solve optimal control problems for nonlinear systems. This article introduces ADP from three aspects: changes in ADP structures, the development of ADP algorithms, and their applications. It summarizes the current research results on ADP and offers a further outlook on the problems that remain to be solved in this research area and its future directions.
Undergraduate graduation project: foreign literature and translation
Title: Cloud Computing
Source: Cloud Computing Overview (English edition)
Publication date: May 2009

Foreign literature: Cloud Computing

1. Cloud Computing at a Higher Level

In many ways, cloud computing is simply a metaphor for the Internet, the increasing movement of compute and data resources onto the Web. But there's a difference: cloud computing represents a new tipping point for the value of network computing. It delivers higher efficiency, massive scalability, and faster, easier software development. It's about new programming models, new IT infrastructure, and the enabling of new business models.

For those developers and enterprises who want to embrace cloud computing, Sun is developing critical technologies to deliver enterprise scale and systemic qualities to this new paradigm:

(1) Interoperability — while most current clouds offer closed platforms and vendor lock-in, developers clamor for interoperability. Sun's open-source product strategy and Java™ principles are focused on providing interoperability for large-scale computing resources. Think of the existing cloud "islands" merging into a new, interoperable "Intercloud" where applications can be moved to and operate across multiple platforms.

(2) High-density horizontal computing — Sun is pioneering high-power-density compute-node architectures and extreme-scale Infiniband fabrics as part of our top-tier HPC deployments. This high-density technology is being incorporated into our large-scale cloud designs.

(3) Data in the cloud — More than just compute utilities, cloud computing is increasingly about petascale data. Sun's Open Storage products offer hybrid data servers with unprecedented efficiency and performance for the emerging data-intensive computing applications that will become a key part of the cloud.

These technology bets are focused on driving more efficient large-scale cloud deployments that can provide the infrastructure for next-generation business opportunities: social networks, algorithmic trading, continuous risk analysis, and so on.

2. Why Cloud Computing?

(1) Clouds: Much More Than Cheap Computing

Cloud computing brings a new level of efficiency and economy to delivering IT resources on demand — and in the process it opens up new business models and market opportunities. While many people think of current cloud computing offerings as purely "pay by the drink" compute platforms, they're really a convergence of two major interdependent IT trends:

IT Efficiency — Minimize costs as companies convert their IT costs from capital expenses to operating expenses through technologies such as virtualization. Cloud computing begins as a way to improve infrastructure resource deployment and utilization, but fully exploiting this infrastructure eventually leads to a new application development model.

Business Agility — Maximize return by using IT as a competitive weapon through rapid time to market, integrated application stacks, instant machine image deployment, and petascale parallel programming. Cloud computing is embraced as a critical way to revolutionize time to service. But inevitably these services must be built on equally innovative rapid-deployment-infrastructure models.

To be sure, these trends have existed in the IT industry for years.
However, the recent emergence of massive network bandwidth and virtualization technologies has enabled this transformation to a new services-oriented infrastructure. Cloud computing enables IT organizations to increase hardware utilization rates dramatically, and to scale up to massive capacities in an instant — without constantly having to invest in new infrastructure, train new personnel, or license new software. It also creates new opportunities to build a better breed of network services, in less time, for less money.

IT Efficiency on a Whole New Scale

Cloud computing is all about efficiency. It provides a way to deploy and access everything from single systems to huge amounts of IT resources — on demand, in real time, at an affordable cost. It makes high-performance compute and high-capacity storage available to anyone with a credit card. And since the best cloud strategies build on concepts and tools that developers already know, clouds also have the potential to redefine the relationship between information technology and the developers and business units that depend on it.

Reduce capital expenditures — Cloud computing makes it possible for companies to convert IT costs from capital expense to operating expense through technologies such as virtualization.

Cut the cost of running a datacenter — Cloud computing improves infrastructure utilization rates and streamlines resource management. For example, clouds allow for self-service provisioning through APIs, bringing a higher level of automation to the datacenter and reducing management costs.

Eliminate overprovisioning — Cloud computing provides scaling on demand, which, when combined with utility pricing, removes the need to overprovision to meet demand. With cloud computing, companies can scale up to massive capacities in an instant.

For those who think cloud computing is just fluff, take a closer look at the cloud offerings that are already available. Major Internet providers, Google, and others are leveraging their infrastructure investments and "sharing" their large-scale economics. Already the bandwidth used by Amazon Web Services (AWS) exceeds that associated with their core e-tailing services. Forward-looking enterprises of all types — from Web 2.0 startups to global enterprises — are embracing cloud computing to reduce infrastructure costs.

Faster, More Flexible Programming

Cloud computing isn't only about hardware — it's also a programming revolution. Agile, easy-to-access, lightweight Web protocols — coupled with pervasive horizontally scaled architecture — can accelerate development cycles and time to market with new applications and services. New business functions are now just a script away.

Accelerated cycles — The cloud computing model provides a faster, more efficient way to develop the new generation of applications and services. Faster development and testing cycles mean businesses can accomplish in hours what used to take days, weeks, or months.

Increase agility — Cloud computing accommodates change like no other model. For example, Animoto Productions, makers of a mashup tool that creates video from images and music, used cloud computing to scale up from 50 servers to 3,500 in just three days. Cloud computing can also provide a wider selection of more lightweight and agile development tools, simplifying and speeding up the development process.

The immediate impact will be unprecedented flexibility in service creation and accelerated development cycles.
But at the same time, development flexibility could become constrained by APIs if they're not truly open. Cloud computing can usher in a new era of productivity for developers if they build on platforms that are designed to be federated rather than centralized. But there's a major shift underway in programming culture and the languages that will be used in clouds.

Today, the integrated, optimized, open-source Apache, MySQL, PHP/Perl/Python (AMP) stack is the preferred platform for building and deploying new Web applications and services. Cloud computing will be the catalyst for the adoption of an even newer stack of more lightweight, agile tools such as lighttpd, an open-source Web server; Hadoop, the free Java software framework that supports data-intensive distributed applications; and MogileFS, a file system that enables horizontal scaling of storage across any number of machines.

(2) Compelling New Opportunities: The Cloud Ecosystem

But cloud computing isn't just about a proliferation of Xen image stacks on a restricted handful of infrastructure providers. It's also about an emerging ecosystem of complementary services that provide computing resources such as on-ramps for cloud abstraction, professional services to help in deployment, specialized application components such as distributed databases, and virtual private datacenters for the entire range of IT providers and consumers. These services span the range of customer requirements, from individual developers and small startups to large enterprises. And they continue to expand the levels of virtualization, a key architectural component of the cloud that offers ever-higher abstractions of underlying services.

(3) How Did Cloud Computing Start?

At a basic level, cloud computing is simply a means of delivering IT resources as services. Almost all IT resources can be delivered as a cloud service: applications, compute power, storage capacity, networking, programming tools, even communications services and collaboration tools. Cloud computing began as large-scale Internet service providers such as Google, Amazon, and others built out their infrastructure. An architecture emerged: massively scaled, horizontally distributed system resources, abstracted as virtual IT services and managed as continuously configured, pooled resources. This architectural model was immortalized by George Gilder in his October 2006 Wired magazine article titled "The Information Factories." The server farms Gilder wrote about were architecturally similar to grid computing, but where grids are used for loosely coupled, technical computing applications, this new cloud model was being applied to Internet services.

Both clouds and grids are built to scale horizontally very efficiently. Both are built to withstand failures of individual elements or nodes. Both are charged on a per-use basis. But while grids typically process batch jobs, with a defined start and end point, cloud services can be continuous. What's more, clouds expand the types of resources available — file storage, databases, and Web services — and extend the applicability to Web and enterprise applications.

At the same time, the concept of utility computing became a focus of IT design and operations. As Nick Carr observed in his book The Big Switch, computing services infrastructure was beginning to parallel the development of electricity as a utility.
Wouldn't it be great if you could purchase compute resources on demand, only paying for what you need, when you need it?

For end users, cloud computing means there are no hardware acquisition costs, no software licenses or upgrades to manage, no new employees or consultants to hire, no facilities to lease, no capital costs of any kind — and no hidden costs. Just a metered, per-use rate or a fixed subscription fee. Use only what you want, pay only for what you use.

Cloud computing actually takes the utility model to the next level. It's a new and evolved form of utility computing in which many different types of resources (hardware, software, storage, communications, and so on) can be combined and recombined on the fly into the specific capabilities or services customers require. From CPU cycles for HPC projects to storage capacity for enterprise-grade backups to complete IDEs for software development, cloud computing can deliver virtually any IT capability, in real time. Under the circumstances it is easy to see that a broad range of organizations and individuals would like to purchase "computing" as a service, and those firms already building hyperscale distributed data centers would inevitably choose to begin offering this infrastructure as a service.

(4) Harnessing Cloud Computing

So how does an individual or a business take advantage of the cloud computing trend? It's not just about loading machine images consisting of your entire software stack onto a public cloud like AWS — there are several different ways to exploit this infrastructure and explore the ecosystem of new business models.

Use the Cloud

The number and quality of public, commercially available cloud-based service offerings is growing fast. Using the cloud is often the best option for startups, research projects, Web 2.0 developers, or niche players who want a simple, low-cost way to "load and go." If you're an Internet startup today, you will be mandated by your investors to keep your IT spend to a minimum. This is certainly what the cloud is for.

Leverage the Cloud

Typically, enterprises are using public clouds for specific functions or workloads. The cloud is an attractive alternative for:

Development and testing — this is perhaps the easiest cloud use case for enterprises (not just startup developers). Why wait to order servers when you don't even know if the project will pass the proof of concept?

Functional offloading — you can use the cloud for specific workloads. For example, SmugMug does its image thumbnailing as a batch job in the cloud.

Augmentation — Clouds give you a new option for handling peak load or anticipated spikes in demand for services. This is a very attractive option for enterprises, but also potentially one of the most difficult use cases. Success is dependent on the statefulness of the application and the interdependence with other datasets that may need to be replicated and load-balanced across the two sites.

Experimenting — Why download demos of new software, and then install, license, and test it? In the future, software evaluation can be performed in the cloud, before licenses or support need to be purchased.

Build the Cloud

Many large enterprises understand the economic benefits of cloud computing but want to ensure strict enforcement of security policies.
So they're experimenting first with "private" clouds, with a longer-term option of migrating mature enterprise applications to a cloud that's able to deliver the right service levels. Other companies may simply want to build private clouds to take advantage of the economics of resource pools and standardize their development and deployment processes.

Be the Cloud

This category includes both cloud computing service providers and cloud aggregators — companies that offer multiple types of cloud services. As enterprises and service providers gain experience with the cloud architecture model and confidence in the security and access-control technologies that are available, many will decide to deploy externally facing cloud services. The phenomenal growth rates of some of the public cloud offerings available today will no doubt accelerate the momentum. Amazon's EC2 was introduced only two years ago and officially graduated from beta to general availability in October 2008.

Cloud service providers can:
Provide new routes to market for startups and Web 2.0 application developers
Offer new value-added capabilities such as analytics
Derive a competitive edge through enterprise-level SLAs
Help enterprise customers develop their own clouds

If you're building large datacenters today, you should probably be thinking about whether you're going to offer cloud services.

(5) Public, Private, and Hybrid Clouds

A company may choose to use a service provider's cloud or build its own — but is it always all or nothing? Sun sees an opportunity to blend the advantages of the two primary options:

Public clouds are run by third parties, and jobs from many different customers may be mixed together on the servers, storage systems, and other infrastructure within the cloud. End users don't know who else's job may be running on the same server, network, or disk as their own jobs.

Private clouds are a good option for companies dealing with data protection and service-level issues. Private clouds are on-demand infrastructure owned by a single customer who controls which applications run, and where. They own the server, network, and disk and can decide which users are allowed to use the infrastructure.

But even those who feel compelled in the short term to build a private cloud will likely want to run applications both in privately owned infrastructure and in the public cloud space. This gives rise to the concept of a hybrid cloud.

Hybrid clouds combine the public and private cloud models. You own parts and share other parts, though in a controlled way. Hybrid clouds offer the promise of on-demand, externally provisioned scale, but add the complexity of determining how to distribute applications across these different environments. While enterprises may be attracted to the promise of a hybrid cloud, this option, at least initially, will likely be reserved for simple stateless applications that require no complex databases or synchronization.

3. Cloud Computing Defined

(1) Cornerstone Technology

While the basic technologies of cloud computing, such as horizontally scaled, distributed compute nodes, have been available for some time, virtualization — the abstraction of computer resources — is the cornerstone technology for all cloud architectures.
With the ability to virtualize servers (behind a hypervisor-abstracted operating system), storage devices, desktops, and applications, a wide array of IT resources can now be allocated on demand. The dramatic growth in the ubiquitous availability of affordable high-bandwidth networking over the past several years is equally critical. What was available to only a small percentage of Internet users a decade ago is now offered to the majority of Internet users in North America, Europe, and Asia: high bandwidth, which allows massive compute and data resources to be accessed from the browser. Virtualized resources can truly be anywhere in the cloud — not just across gigabit datacenter LANs and WANs but also via broadband to remote programmers and end users.

Additional enabling technologies for cloud computing can deliver IT capabilities on an absolutely unprecedented scale. Just a few examples:

Sophisticated file systems such as ZFS can support virtually unlimited storage capacities, integration of the file system and volume management, snapshots and copy-on-write clones, and on-line integrity checking and repair.

Patterns in architecture allow for accelerated development of superscale cloud architectures by providing repeatable solutions to common problems.

New techniques for managing structured, unstructured, and semistructured data can provide radical improvements in data-intensive computing.

Machine images can be instantly deployed, dramatically simplifying and accelerating resource allocation while increasing IT agility and responsiveness.

(2) The Architectural Services Layers of Cloud Computing

While the first revolution of the Internet saw the three-tier (or n-tier) model emerge as a general architecture, the use of virtualization in clouds has created a new set of layers: applications, services, and infrastructure. These layers don't just encapsulate on-demand resources; they also define a new application development model. And within each layer of abstraction there are myriad business opportunities for defining services that can be offered on a pay-per-use basis.

Software as a Service (SaaS)

SaaS is at the highest layer and features a complete application offered as a service, on demand, via multitenancy — meaning a single instance of the software runs on the provider's infrastructure and serves multiple client organizations. The most widely known example of SaaS is , but there are now many others, including the Google Apps offering of basic business services such as e-mail. Of course, 's multitenant application preceded the definition of cloud computing by a few years. On the other hand, like many other players in cloud computing, now operates at more than one cloud layer with its release of , a companion application development environment, or platform as a service.

Platform as a Service (PaaS)

The middle layer, or PaaS, is the encapsulation of a development environment abstraction and the packaging of a payload of services. The archetypal payload is a Xen image (part of Amazon Web Services) containing a basic Web stack (for example, a Linux distro, a Web server, and a programming environment such as Perl or Ruby). PaaS offerings can provide for every phase of software development and testing, or they can be specialized around a particular area, such as content management. Commercial examples include Google App Engine, which serves applications on Google's infrastructure.
PaaS services such as these can provide a great deal of flexibility but may be constrained by the capabilities that are available through the provider.

Infrastructure as a Service (IaaS)

IaaS is at the lowest layer and is a means of delivering basic storage and compute capabilities as standardized services over the network. Servers, storage systems, switches, routers, and other systems are pooled (through virtualization technology, for example) to handle specific types of workloads — from batch processing to server/storage augmentation during peak loads. The best-known commercial example is Amazon Web Services, whose EC2 and S3 services offer bare-bones compute and storage services (respectively). Another example is Joyent, whose main product is a line of virtualized servers which provide a highly scalable on-demand infrastructure for running Web sites, including rich Web applications written in Ruby on Rails, PHP, Python, and Java.

Chinese translation: Cloud Computing

1. Cloud Computing at a Higher Level

In many cases, cloud computing is simply a metaphor for the Internet, that is, a metaphor for the increasing movement of compute and data resources onto the network.
Path Dependence and the Validation of Agent-based Spatial Models of Land Use

DANIEL G. BROWN, SCOTT PAGE, RICK RIOLO, MOIRA ZELLNER and WILLIAM RAND

In this paper, we identify two distinct notions of accuracy of land-use models and highlight a tension between them. A model can have predictive accuracy: its predicted land-use pattern can be highly correlated with the actual land-use pattern. A model can also have process accuracy: the process by which locations or land-use patterns are determined can be consistent with real-world processes. To balance these two potentially conflicting motivations, we introduce the concept of the invariant region, i.e., the area where the land-use type is almost certain, and thus path independent, and the variant region, i.e., the area where land use depends on a particular series of events, and is thus path dependent. We demonstrate our methods using an agent-based land-use model and multi-temporal land-use data collected for Washtenaw County, Michigan, USA. The results indicate that, using the methods we describe, researchers can improve their ability to communicate how well their model performs, the situations or instances in which it does not perform well, and the cases in which it is relatively unlikely to predict well because of either path dependence or stochastic uncertainty.

Keywords: Agent-based modeling; Land-use change; Urban sprawl; Model validation; Complex systems

1. Introduction

The rise of models that represent the functioning of complex adaptive systems has led to an increased awareness of the possibility of path dependence and multiple equilibria in economic and ecological systems in general and spatial land-use systems in particular (Atkinson and Oleson 1996, Wilson 2000, Balmann 2001). Path dependence arises from negative and positive feedbacks. Negative feedbacks in the form of spatial dis-amenities rule out some patterns of development, while positive feedbacks from roads and other infrastructure and from service centers reinforce existing paths (Arthur 1988, Arthur 1989). Thus, a small random component in location decisions can lead to large deviations in settlement patterns which could not result were those feedbacks not present (Atkinson and Oleson 1996). Concurrent with this awareness of the unpredictability of settlement patterns has been an increased availability of spatial data within geographic information systems (GIS). This has led to greater emphasis on the validation of spatial land-use models (Costanza 1989, Pontius 2000, 2002, Kok et al. 2001). These two scientific advances, one theoretical and one empirical, have led to two contradictory impulses in land-use modeling: the desire for increased accuracy of prediction and the recognition of unpredictability in the process. This paper addresses the balance between these two impulses: the desire for accuracy of prediction and accuracy of process.

Accuracy of prediction refers to the resemblance of model output to data about the environments and regions they are meant to describe, usually measured as either aggregate similarity or spatial similarity. Aggregate similarity refers to similarities in statistics that describe the mapped pattern of land use, such as the distributions of sizes of developed clusters, the functional relationship between distance to city center and density (Batty and Longley 1994, Makse et al. 1998, Andersson et al. 2002, Rand et al.
2003), or landscape pattern metrics developed within the landscape ecology literature (e.g., McGarigal and Marks 1995) to measure the degree of fragmentation in the landscape (Parker and Meretsky 2004). Spatial similarity refers to the degree of match between land-use maps and a single run, or a summary of multiple runs, of a land-use model. The most common approaches build on the basic error matrix approach (Congalton 1991), by which agreement can be summarized using the kappa statistic (Cohen 1960). Pontius (2000) has developed map comparison methods for model validation that partition total errors into those due to the amounts of each land-use type and those due to their locations. Because models rely on generalizations of reality, spatial similarity measures must be considered in light of their scale; the coarser the partition, the easier the matching task becomes (Costanza 1989, Kok et al. 2001, Pontius 2002 and Hagen 2003).

Because spatial patterns contain more information than can be captured by a handful of aggregate statistics, validation using spatial similarity raises the empirical bar over aggregate similarity. However, as we shall demonstrate in this paper, demanding that modelers get the locations right may be asking too much. Human decision-making is rarely deterministic, and land-use models commonly include stochastic processes as a result (e.g., in the use of random utility theory; Irwin and Geoghegan 2001). Many models, therefore, produce varying results because of stochastic uncertainty in their processes. Further, to represent the feedback processes, land-use modelers are making increasing use of cellular automata (Batty and Xie 1994, Clarke et al. 1996, White and Engelen 1997) and agent-based simulation (Balmann 2001, Rand et al. 2002, Parker and Meretsky 2004). These and other modeling approaches that can represent feedbacks can exhibit spatial path dependence, i.e., the spatial patterns that result can be very sensitive to slight differences in processes or initial conditions. How sensitive depends upon specific attributes of the model. Given the presence of path dependence and the effect it can have on magnifying uncertainties in land-use models, any model that consistently returns spatial patterns in which the locations of land uses are similar to the real world could be overfit, i.e., it may represent the outcomes of a particular case well but the description of the process may not be generalizable.

We believe that this situation creates an imposing challenge: to make accurate predictions, but to admit the inability to be completely accurate owing to path dependence and stochastic uncertainty. If we pursue only the first part of the dictum at the expense of the second part, we encourage a tendency toward overfitting, in which the model is constrained by more and more information such that its ability to run in the absence of data (e.g., in the future) or to predict surprising results is reduced. If we emphasize the latter, then we abandon hope of predicting those spatial properties that are path or state invariant. Though it is reasonable to ask, "can the model predict past behavior?", the answer to this question depends as much on the dynamic feedbacks and non-linearities of the system itself as on the accuracy of the model. Therefore, the more important question is "are the mechanisms and parameters of the model correct?" In this paper we describe and demonstrate an approach to model validation that acknowledges path dependence in land-use models.
The invariant-variant method enables us to determine what we know and what we don't know spatially. Although we can only make limited interpretations about the amount of path dependence that we see for any one model applied to a particular landscape, comparing across a wide range of models and landscape patterns should allow us to understand whether a model contains an appropriate level of path dependence and/or stochasticity.

2. An agent-based model of land development

The model we use to illustrate the validation methods was developed in Swarm (), a multipurpose agent-based modeling platform. In the model, agents choose locations on a heterogeneous two-dimensional landscape. The spatial patterns of development are the result of agent behaviors. We developed this simple model for the purposes of experimentation and pedagogy, and present it as a means to illustrate the validation methods. These concerns created an incentive for simplicity. We needed to be able to accomplish hundreds, if not thousands, of runs in a reasonable time period and to be able to understand the driving forces under different assumptions. We could not have a model with dynamics so complicated that neither we nor our readers could understand them intuitively. Thus, the modeling decisions have tended to err, if anything, on the side of parsimony. We describe each of the three primary parts of the model: the environment, the agents, and the agents' interaction with the environment.

Each location on the landscape (i.e., a lattice) has three characteristics: a score for aesthetic quality scaled to the interval [0, 1], the presence or absence of initial service centers, and an average distance to services, which is updated at each step. On our artificial landscapes we calculate service-center distance as Euclidean distance. When we are working with a real landscape, we incorporate the road network into the distance calculation. We simplified the calculation of road distance by calculating, first, the straight-line distance to the nearest point on the nearest road, and then the straight-line distance from that point to the nearest service center. This approach is likely to underestimate the true road distance, but it provides a reasonable approximation that is much quicker to calculate and incorporates the most salient features of road networks.

3. Validation methods

This section describes the two primary approaches to validation that we demonstrate in this paper: aggregate validation with pattern metrics and the invariant-variant method. Each method is used to compare the agent-based model with a reference map and is demonstrated for several cases, which are described in Section 4. To perform statistical validation we make use of landscape pattern metrics, originally developed for landscape ecological investigations. These metrics are included for comparison with our new method. The primary appeal of landscape pattern metrics in validation is that they can characterize several different aspects of the global patterns that emerge from the model (Parker and Meretsky 2004), and they describe the patterns in a way that relates them to the ecological impacts of land-use change.

In our approach, we distinguish between those locations that the model always predicts as developed or undeveloped – the invariant region – and those locations that sometimes get developed and sometimes do not – the variant region.
Before describing how we construct these regions and their usefulness, we first describe a more standard approach to measuring spatial similarity in a restricted case. Suppose a run of a model locates a land-use type (e.g., development) at M sites among N possible sites, where M is also the number of sites at which the land use is found in the reference map. We could ask how accurately that model run predicted the exact locations. First, count the number of the M developed locations predicted by the model that are correct (C); the remaining M - C locations that the model predicts are, therefore, incorrect. We can also partition the M developed locations in the reference map into two types: those predicted correctly (C) and those predicted incorrectly (M - C), and calculate user's and producer's accuracies for the developed class, which are identical in this situation, as C/M.

4. Demonstrations of model validation methods

We ran multiple experiments with our agent-based model to illustrate both the importance of path dependence and the utility of the validation methods. First, we created artificial landscapes as experimental situations in which the "true" process and outcome are known perfectly, which is not possible using real-world data. Next, we used data on land-use change collected and analyzed over Washtenaw County, Michigan, which contains Ann Arbor and is immediately west of Detroit. The primary goal of the latter demonstration was to analyze the effects of different starting times on path dependence and model accuracy using real data.

Our initial demonstrations, Cases 1.1 through 1.5 below, were designed to test for two influences on path dependence: agent behavior and environmental features. For all of these demonstrations, we randomly selected a single run of the model as the reference map. We, therefore, compared the model to a reference map that, by definition, was generated by exactly the same process, i.e., a 100% correct model. Any differences between the model runs and the reference map were, therefore, indicative of inherent unpredictability of the system, due to either stochastic uncertainty or path dependence, and not of any flaw or weakness in the model.

The next demonstrations (Cases 2.1 and 2.2) were intended to illustrate how too much focus on getting a strong spatial similarity between model patterns of land use and the reference map can lead one to construct an overfitted model. For both of these cases, we used the landscape with variable aesthetic quality in two peaks described above, and the parameter values listed in Table 1. In each case 10 residents entered per time step, with one new service center per 20 residents. Each run resulted in highly path-dependent development, i.e., almost all development is on one peak or the other, depending only on the choices of early settlers. We selected one run of this model as the reference map, deliberately choosing a run in which the peak of q(x,y) to the northwest was developed. This selected run we designated as the "true history" against which we wished to validate our model. The first comparison with this reference map (Case 2.1) was to assume, as before, that we knew the actual process generating the true history, i.e., we ran the same model multiple times with different random seeds.

5. Results

The results from the first five cases (with parameters set as in table 1) indicated that the degree of predictability in the models was affected by both the behavior of the agents and the pattern of environmental variability (table 3).
The landscape pattern metric values from any given case were never significantly different from the reference map, with the possible exception of MNN in Case 1.1. One striking result, however, given that the reference maps were created by the same models, is that the overall prediction accuracies were as low as 22 percent (Case 1.4), a result of the strongly path-dependent development exhibited in some of these cases. This accuracy level would probably be too low to convince referees or policy analysts to accept the model, and yet the model is perfectly accurate.

The overall prediction accuracy, and the size and accuracy of the invariant region, increased both when positive feedbacks were added to encourage development near existing development (Case 1.2) and when, in addition, the agents were responding to a variable pattern of aesthetic quality (Case 1.3). In addition to improving the size and predictability within the invariant regions, these changes had the effect of increasing the predictive ability of the model in the variant region as well (i.e., VC/VRD). This means that, where the model was less consistent in its prediction, it still made increasingly better predictions than random.

6. Discussion and conclusions

In this paper, we have introduced the invariant-variant method to assess the accuracy and variability of outcomes of spatial agent-based land-use models. This method advances existing techniques that measure spatial similarity. Most importantly, it helps us come to terms with a fundamental tension in land-use modeling – the emphasis on accurate prediction of location and the recognition of path dependence and stochastic uncertainty. The methods described here should apply to any land-use models that have the potential to generate multiple outcomes. They would not apply to models that are deterministic and therefore make a single prediction of settlement patterns. By definition, deterministic models cannot generate path dependence unless one considers the impact of interventions. In that case, our approach would be applicable, with the invariant region being that portion of the region that is developed regardless of the policy intervention.

Our proposed distinction between invariant and variant regions is a crude measure, but one that allows researchers to better understand the processes that lead to accurate (or inaccurate) predictions by their models. With it we can distinguish between models that always get something right and those that always get different things right. And that difference matters. It may be possible to further develop the statistical properties of the most useful of these and similar measures. Such measures will enable us to categorize environments and actors who create systems for which any accurate model will have low predictive accuracy, and those who create systems for which we should demand high accuracy.

We expect that, over time and by comparing across models, we can understand what landscape attributes and behavioral characteristics lead to greater or lesser predictability as captured by the relative size of the invariant region. For example, homogeneity in the environment increases unpredictability because the number of paths becomes unwieldy. Admittedly, the size of the invariant region is not the only possible measure of predictability, but it is a useful one. A large invariant region suggests a predictable settlement pattern.
A small invariant region implies that history, or even single events, matter.

Our analysis emphasized path dependence as opposed to stochastic uncertainty because of our interest, and that of many land-use modelers, in policy intervention. Stochastic uncertainty, like the weather, is something we can all complain about but not affect. Path dependence, at least in theory, offers the opportunity for intervention. If we know that two paths of development patterns are possible, then we might be able to influence the process, through policy and the use of what Holland called "lever points," such that the most desirable path, on some measure, emerges (Holland 1995, Gladwell 2000). Path dependence makes fitting a model more difficult and may tempt modelers to overfit the data, since often the one actual path of development depends on specific details that influence the choices of early settlers. On the other hand, path dependence creates the possibility of policy leverage.

The results of running the model from multiple starting times in the history (and pseudo-history) of Washtenaw County, Michigan, seem somewhat counterintuitive at first, in that the overall match of the locations of newly settled agents with those in the 1995 map decreased with increasing information (i.e., later starting times). However, the additional metrics tell more of the story. The model actually improved with later starting times when the matches were compared with the numbers that would be expected at random. Fewer agents entering the landscape at the later times means relatively more possible combinations of places they can locate in the undeveloped part of the map. The model does reasonably well at predicting the aggregate patterns, matching three of the four metrics, partially because much of the aggregate pattern is predetermined in the initial maps. The fact that three of the mean pattern-metric values were statistically indistinguishable from the 1995 values even when starting at the earliest dates, however, suggests that the match is not only due to the initial map information. The size of the invariant developed region declined with later starting times, but became more accurate. When we located fewer residents, we were much less likely to see them locate consistently, rightly or wrongly. Further, within the variant region, the model located residents less well than would be expected with simply random location. This suggests that some features were missing or structurally wrong in our model. Two possibilities are that our map of aesthetic quality in the outlying areas does not accurately reflect preferences, or that soil quality or some other willingness-to-sell characteristic of locations contributes to where settlements occur.

The comparison between Cases 2.1 and 2.2 highlights the importance of recognizing path dependence in land-use change processes and the dangers of overfitting the model to data in the modeling process. This danger, i.e., that the model will match the outcome of a particular case well but misrepresent the process, is endemic to land-use change models. Many models of land-use change are developed through calibration and statistical fitting to observed changes, derived from remotely sensed and GIS data sets. This rather extreme example makes the point that, even though the outcomes of the model may match the reference map in meaningful ways, e.g., both statistically and spatially, we cannot necessarily conclude that the processes contained in the model are correct.
If the processes are not well represented, of course, then we possess limited ability to evaluate policy outcomes, for example by changing incentives or creating zones that limit certain activities on the landscape.

In the context of the foregoing discussion, it is useful to reflect on how to proceed with model development. If we use the results from Washtenaw County as an indication of the validity of the model and wish to improve its validity, what should be our next steps? There are a number of factors that we did not include in our model that could be included in agent decision-making. These include the price of land, zoning, the different kinds of residential, commercial, and industrial development, a different representation of roads and distances, and the presence of areas restricted from development (like parks). Any of these factors could be included in the model in a way that would improve the fit of the output to the 1995 map. But each new factor we add will have associated with it parameters that need to be set. As soon as we start fitting these parameters to the values that produce outputs that best fit the data, we run the risk of losing the process-based understanding that models of this sort help us grapple with. As we proceed, the question becomes: are we interested in fitting the data or in understanding the process?

Research on path dependence in the use of spatial models. Authors: Daniel G. Brown; Scott Page; Rick Riolo; Moira; William Rand. In this paper, we highlight the tension between these two concerns precisely and clearly through two explicit yet distinct conceptual models.
2.2 An incentive example
We begin with a simple example of a "maintenance problem," which provides a setting in which to illustrate the main steps of formulating the dynamic principal-agent and incentive models in our framework.
In describing the example we drop much of the general terminology in favor of vocabulary specific to this setting; this reflects the principal-agent applications we are interested in exploring, rather than any limitation of the framework's vocabulary. It does, however, narrow the apparent applicability of our model and results, so at the end of this section we provide a guide to the literature that relates our model and results to existing work.
Preliminary problem formulation: Consider a critical piece of equipment that can be in one of two states, operating or not operating. Only an operating machine generates profit for its owner (whom we call "she"). The owner delegates responsibility for maintaining the equipment to a manager (whom we call "he"), who repairs the machine when it is not operating.
A maintenance policy specifies the maintenance activity to be performed in each period, whether the machine is operating at high or low efficiency. Costlier maintenance yields more favorable transition probabilities and keeps the equipment operating at a high level; while the machine is operating, the manager's preventive maintenance activities are nondiscretionary.
The owner does not observe the manager's effort when the machine is not operating, but she does observe the state of the machine.
She therefore faces the following problem: design a performance-based compensation plan, basing incentives on her actual observations of the manager's performance, that induces him to exert effort and maximizes her expected profit net of compensation, subject to that profit exceeding her minimum acceptable level.
Before we can formulate and analyze the owner's problem, we must first specify a model of each party's preferences.
The manager is assumed to choose a maintenance policy jointly with his consumption and savings decisions over the course of his employment, a problem analogous to the consumption and investment problems of financial economics.
Its main features are as follows: in each period, the manager chooses his action and his consumption on the basis of the information available to him so as to maximize his expected utility; the manager's utility is exponential and additively separable over time, increasing in consumption and decreasing in repair effort.
In addition, the manager can deposit income into a bank account and draw on the same account to finance consumption; the account earns a fixed interest rate each period, and its balance may be negative, reflecting borrowing, or positive, reflecting saving. As for the owner's preferences, the model makes the classical assumption of expected profit maximization.
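To fix ideas, these assumptions can be sketched in symbols (the notation below — c_t for consumption, a_t for repair effort with cost ψ(a_t), γ for the coefficient of risk aversion, β for the per-period discount factor, s_t for the compensation payment, w_t for the bank balance, and r for the fixed interest rate — is ours, not the source's):

U = E[ -\sum_{t=1}^{T} \beta^{t} \exp(-\gamma (c_t - \psi(a_t))) ],   with account dynamics   w_{t+1} = (1 + r)(w_t + s_t - c_t).

The exponential form makes utility additively separable across periods, increasing in consumption and decreasing in repair effort, while the account balance w_t may become negative (borrowing) or positive (saving) as the manager smooths his consumption.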
Explicitly representing the agent's consumption and investment activities may seem an unnecessary level of detail. However, the agent is exposed to a risky income stream, because his pay is closely tied to the machine's performance.
Smith (1998) argues that accurately representing the agent's preferences over such risky income streams requires an explicit consumption model with access to capital markets.
In addition, Fudenberg, Holmström, and Milgrom propose analyzing the resulting planning problem by dynamic programming. For this reason, the theoretical contributions of Fudenberg, Holmström, and Milgrom serve as important references for this study.
The optimal contracting problem: This completes a formal outline of the maintenance-problem model. We next provide a formal statement of the owner's problem, which will be referred to as the optimal contracting problem.
The owner moves first and offers the manager a long-term compensation plan, with rewards based on her credible observations of the manager's past performance. The manager then decides whether to accept the compensation plan; suppose that he accepts it.
He then decides what maintenance policy to adopt; the sequence of events is illustrated in figure 2.1 below.
In the figure, O denotes equipment that is not operating, H denotes equipment operating at high efficiency, and L denotes equipment operating at low efficiency. Suppose the equipment's state at the start of period 1 is given; in the other two periods the owner does not observe the manager's effort, but she does learn the operating state of the equipment.
As time unfolds, the nodes in the figure represent the two parties' actions, moving from the top of the figure to the bottom. The latter part of the figure captures the crux of the problem: given a compensation plan, the manager has two decisions to make. He must ensure that he obtains the greatest expected reward he can, and this is where the incentive constraint comes into play: exerting effort so as to maximize his compensation, the manager will choose the maintenance policy that maximizes his expected utility. He must also decide, at the outset, whether to accept the compensation plan at all. Assuming that the labor market is competitive and offers the manager an alternative employment opportunity in which to exercise his abilities, this decision gives rise to the participation constraint.
The manager will accept the compensation plan if it is no worse than his expected alternative. We then turn to the owner's side of the problem.
Among all compensation plans that satisfy incentive compatibility and the participation constraint, she chooses the one that yields the greatest expected profit. Analyzing the owner's problem yields an optimal compensation plan and the optimal maintenance policy that the manager will voluntarily adopt under it; together, the optimal compensation plan and the maintenance policy it induces are called the optimal long-term contract.
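Schematically, the owner's problem just described can be written as follows (again in notation of our own: π for a maintenance policy, s for a compensation plan, R_t for the period-t revenue generated by the machine, U(s, π) for the manager's expected utility, and \bar{U} for his reservation utility from the alternative employment opportunity):

\max_{s, \pi} \; E[ \sum_t (R_t - s_t) ]
subject to   \pi \in \arg\max_{\pi'} U(s, \pi')    (incentive compatibility)
             U(s, \pi) \ge \bar{U}                 (participation).

The pair (s, π) that solves this program is the optimal long-term contract referred to above.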
Main line of analysis and conclusions: The analysis of the owner's problem draws on the two-stage Stackelberg logic of the game, first advocated by Grossman and Hart (1983). First, for every possible maintenance policy, one derives the minimum-cost compensation plan that induces the manager to accept and carry it out; second, one selects the policy whose compensation plan, derived in the first step, maximizes the owner's expected profit. Section 2.4 treats this procedure in detail.
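The two-step logic can be summarized in pseudocode (a sketch in our own naming; min_cost_plan and expected_profit are hypothetical helpers standing in for the derivations that Section 2.4 develops):

# Sketch of the two-step (Grossman-Hart style) solution procedure described above.
# maintenance_policies: the set of candidate policies the manager could adopt.
# min_cost_plan(policy): hypothetical helper returning the cheapest compensation
#   plan that makes `policy` incentive compatible and individually rational.
# expected_profit(policy, plan): hypothetical helper returning the owner's
#   expected profit when `policy` is induced by `plan`.

def optimal_contract(maintenance_policies, min_cost_plan, expected_profit):
    best = None
    for policy in maintenance_policies:
        plan = min_cost_plan(policy)              # step 1: cheapest implementation
        profit = expected_profit(policy, plan)    # owner's payoff under that plan
        if best is None or profit > best[0]:
            best = (profit, policy, plan)         # step 2: keep the most profitable
    return best

The loop body corresponds to step one (the cheapest implementation of a given policy); retaining the most profitable pair corresponds to step two.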
The first step of the analysis shows that it suffices to consider memoryless compensation plans, so one need not worry about the complexity introduced by history dependence. When the plan prescribes high effort in a given period, the compensation scheme pays the manager for that effort.
If performance is good, a bonus is promised; this promised bonus motivates the manager, inducing him to exert effort in each period, while his performance is rewarded and compensated across the different periods.
If, on the other hand, the manager's realized performance is poor, he is compensated only for his effort, without any bonus.
This result also reflects the fact that it is not always desirable for the owner, in designing the compensation plan, to keep the manager exerting high effort at all times; in particular, if effort has little effect on the transitions between operating states, using the incentive plan to elicit such effort is not realistic.
Given the first step of the analysis, the owner offers rewards to motivate the manager's effort, and she expects the payments she makes in each period to have the intended incentive effect.
This means that standard dynamic programming can be applied to optimize the compensation plan; such a plan makes use of memoryless payments, and the long-term contract is derived taking renegotiation into account, since in any period either party may seek to modify the contract by renegotiation.
A guide to the literature: Principal-agent models have a long history of application in several areas related to management. Surveys of existing agency models are readily available and closely related to our study; we now summarize some of the most important papers.
We consider papers that have made important contributions to the development of principal-agent theory, as well as those providing fruitful applications and empirical results. Hart and Holmström provide an authoritative overview of agency theory. Fudenberg, Holmström, and Milgrom (1990) show that a long-term contract adds value through strategic commitment only when the agent cannot smooth his income fluctuations by freely borrowing and saving.
In their model, the principal and the agent share the same information about the structure of utilities and the incentive system; moreover, the agent has access to a well-developed capital market, so his incentives depend only on the net present value of his compensation plan and not on the timing of the payments.
The authors' central point is that, under a long-term contract, the principal can restructure the compensation plan in any way that leaves unchanged the net present value the plan promises to the agent.
The agent therefore voluntarily continues to perform the contract at the beginning of each period, and the optimal contract can be implemented by executing a sequence of single-period contracts, so that long-term strategic commitment is not essential here. In our study we use dynamic programming within this economic framework to develop a complete characterization of the optimal long-term contract as the system evolves dynamically.
In related work, Spear and Srivastava (1987) give a dynamic programming characterization of the optimal contract.
In their setting, however, the role of the agent's saving and borrowing constraints is not mediated by a capital market.
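As a rough illustration of the kind of recursive characterization meant here (our notation, not Spear and Srivastava's), the optimal contract can be expressed with the agent's promised utility v as the state variable:

F(v) = \max_{a, c(\cdot), v'(\cdot)} E[ R(x) - c(x) + \delta F(v'(x)) ]
subject to   v = E[ u(c(x), a) + \delta v'(x) ]                              (promise keeping)
             a \in \arg\max_{a'} E[ u(c(x), a') + \delta v'(x) \mid a' ]     (incentive compatibility),

where x is the publicly observed outcome, c(x) the payment, and v'(x) the continuation utility promised after outcome x.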
Holmström and Milgrom (1987) study a principal-agent problem in which the agent controls the drift rate of a Brownian motion, and his utility is an exponential function of income less the cost of effort. Although the principal observes the entire path of the process, they show that the optimal payment is a simple linear function of the total displacement of the process.
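In symbols, the setting can be sketched as follows (our paraphrase, with σ the volatility, k(·) the effort-cost rate, and γ the risk-aversion coefficient): the agent continuously controls the drift μ_t of the output process X_t,

dX_t = \mu_t \, dt + \sigma \, dB_t,   with agent utility   -\exp\big(-\gamma ( s(X) - \int_0^T k(\mu_t)\, dt )\big),

and the main result is that the optimal compensation takes the linear form s(X) = α + β X_T, i.e., a linear function of the total displacement of the process.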
Supply chain contracting is another major paradigm related to principal-agent theory; Tsay (1998) surveys the state of the art in this research area. The analysis of traditional contractual arrangements is often too complex to extend readily to multi-period settings, so multi-period analysis relies instead on the large literature on Markov decision processes (MDPs), under both the traditional risk-neutral objective and risk-sensitive objectives.
Readers interested in MDP models may consult the books of Puterman and Bertsekas; for risk-sensitive MDPs, see White. MDPs with multiple decision makers have also attracted considerable attention: Filar and Vrieze (1996), building on Markovian ideas, treat competitive Markov decision processes. Although their models do not capture the essence of delegated control systems, they are important references for our dynamic principal-agent model.
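For reference, the standard risk-neutral MDP recursion underlying this literature (generic notation: states x, actions a, rewards r, transition probabilities p, discount factor δ) is

V(x) = \max_{a \in A(x)} \{ r(x,a) + \delta \sum_{x'} p(x' \mid x, a) V(x') \},

while a common finite-horizon risk-sensitive variant replaces the expected continuation value with an exponential certainty equivalent,

V_t(x) = \max_{a} \{ r(x,a) - \tfrac{1}{\gamma} \log \sum_{x'} p(x' \mid x, a) \exp(-\gamma V_{t+1}(x')) \}.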