Rigorous Analysis of (Distributed) Simulation Results, submitted to Distributed Systems Wor
2012 Ocean University of China "Translation Fundamentals" graduate entrance exam (answers and analysis follow the questions). Question types: 1. term translation; 2. English-Chinese passage translation.

Term translation, English to Chinese (model answers):
1. EQ: 情商 (Emotional Quotient)
2. A/P: 付款通知 (advice and pay)
3. IMF: 国际货币基金组织 (International Monetary Fund)
4. LAN: 局域网 (Local Area Network)
5. GMO: 转基因生物 (Genetically Modified Organism)
6. ISS: 工业标准规格 (Industry Standard Specifications)
7. ICRC: 国际红十字委员会 (International Committee of the Red Cross)
8. UNEP: 联合国环境规划署 (United Nations Environment Programme)
9. TARGET: 泛欧实时全额自动清算系统 (Trans-European Automated Real-time Gross Settlement Express Transfer)
10. carbon footprint: 碳足迹
11. Church of England: 英国国教
12. fine arts: 美术
13. multi-language vendor: 多语种供应商
14. liberal arts education: 博雅教育
15. Standard & Poor's Composite Index: 标准普尔综合指数

Term translation, Chinese to English (model answers):
16. 《论语》: The Analects of Confucius
17. 脸谱: Facebook
18. 安乐死: euthanasia
19. 核威慑: nuclear deterrence
20. 概念文化: concept culture
21. 教育公平: education equality
22. 国际结算: international settlement
23. 经济适用房: economically affordable house
24. 文化软实力: cultural soft power
25. 行政问责制: administrative accountability system
26. 保税物流园区: bonded logistics park
27. 中国海关总署: General Administration of Customs of the People's Republic of China
28. 黑社会性质组织: underworld organization
29. 和平共处五项原则: the Five Principles of Peaceful Coexistence
30. 《国家中长期人才发展规划纲要(2010-2020)》: National Medium and Long-term Talent Development Plan (2010-2020)

Passage translation, English to Chinese:
31. The current limitations of internet learning are actually those of the publishing world: who creates a quality product that offers a coherent analysis of the world we live in? The answer has to lie in a group of people, organized in some way both intellectually and technologically. In the past this has usually been through books and articles. Some of the learning successes of the internet illustrate just how this can work in practice. A classic example is Wikipedia, an online encyclopedia created on a largely voluntary basis by contributors.
The underlying mechanisms of Wikipedia are technological: you can author an article by following hyperlinks and the instructions. There are intellectual mechanisms built in, looking at the quality of what is submitted. This does not mean that the articles are equally good, or equal in quality to those encyclopedias created by expert, paid authors. However, there is no doubt that the service is a useful tool, and a fascinating demonstration of the power of distributed volunteer networks. A commercial contrast, which is also free, is the very rigorous Wolfram mathematics site, which has definitions and explanations of many key mathematical concepts. For students who use them with the same academic, critical approach they should apply to any source of information, such resources are useful tools, especially when supplemented by those of national organizations such as the Library of Congress, the National Science Foundation and other internationally recognized bodies. There are, of course, commercially available library services that offer electronic versions of printed media, such as journals, for both professional and academic groups, and this is already a fundamental feature of higher and professional education. Regardless of the medium through which they learn, people have to be critical users of information, but at the same time the information has to be appealing and valuable to the learner. (From Making Minds by Paul Kelley, 2008, pp. 127-128)

Model answer (opening of the Chinese reference translation): 目前限制网络学习的实际上是出版界:分析我们这个世界的优秀作品是由谁来创作的呢?答案就在一群智力和技术上都有条理的人士身上。
In the realm of mathematics, solving intricate problems often necessitates more than mere application of formulas or algorithms. It requires an astute understanding of underlying principles, a creative perspective, and the ability to analyze problems from multiple angles. This essay will delve into a hypothetical complex mathematical problem and outline a multi-faceted approach to its resolution, highlighting the importance of analytical reasoning, strategic planning, and innovative thinking.

Suppose we are faced with a challenging combinatorial optimization problem: the Traveling Salesman Problem (TSP). The TSP involves finding the shortest possible route that visits every city on a list exactly once and returns to the starting point. Despite its deceptively simple description, this problem is NP-hard, which means there is no known efficient algorithm for solving it in all cases. However, we can explore several strategies to find near-optimal solutions.

Firstly, **Mathematical Modeling**: The initial step is to model the problem mathematically. We would represent cities as nodes and the distances between them as edges in a graph. By doing so, we convert the real-world scenario into a mathematical construct that can be analyzed systematically. This phase underscores the significance of abstraction and formalization in mathematics: transforming a complex problem into one that can be tackled using established mathematical tools.

Secondly, **Algorithmic Approach**: Implementing exact algorithms like the Held-Karp algorithm, or approximation algorithms such as nearest neighbor or Christofides' 1.5-approximation algorithm, can help find feasible solutions. Although these may not guarantee the absolute optimum, they provide a benchmark against which other solutions can be measured.
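The Held-Karp algorithm mentioned above can be sketched in a few lines. This is a minimal illustration, not production code: it runs in O(n² · 2ⁿ) time, so it is only practical for small instances (roughly n ≤ 15-20 cities), and the function name and interface here are illustrative choices.

```python
from itertools import combinations

def held_karp(dist):
    """Length of the optimal TSP tour for a square distance matrix.

    Exact Held-Karp dynamic programming; the tour starts and ends at
    city 0.
    """
    n = len(dist)
    # C[(S, j)]: cheapest cost of a path that starts at city 0, visits
    # exactly the cities in frozenset S (0 not in S), and ends at j.
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S if k != j)
    full = frozenset(range(1, n))
    # Close the tour by returning to city 0.
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))
```

For example, with four cities arranged in a cycle where adjacent cities are 1 apart and opposite cities are 2 apart, `held_karp([[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]])` returns 4.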
Here, computational complexity theory comes into play, guiding our decision on which algorithm to use based on the size and characteristics of the dataset.

Thirdly, **Heuristic Methods**: When dealing with large-scale TSPs, heuristic methods like simulated annealing or genetic algorithms can offer practical solutions. These techniques mimic natural processes to explore the solution space, gradually improving upon solutions over time. They allow us to escape local optima and potentially discover globally better solutions, thereby demonstrating the value of simulation and evolutionary computation in problem-solving.

Fourthly, **Optimization Techniques**: Leveraging linear programming or dynamic programming could also shed light on the optimal path. For instance, using the cutting-plane method to iteratively refine the solution space can lead to increasingly accurate approximations of the optimal tour. This highlights the importance of advanced optimization techniques in addressing complex mathematical puzzles.

Fifthly, **Parallel and Distributed Computing**: Given the computational intensity of some mathematical problems, distributing the workload across multiple processors or machines can expedite the search for solutions. Cloud computing and parallel algorithms can significantly reduce the time needed to solve large instances of TSP.

Lastly, **Continuous Learning and Improvement**: Each solved instance provides learning opportunities. Analyzing why certain solutions were suboptimal can inform future approaches.
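As a concrete instance of the local-search family of heuristics described above (simpler and deterministic compared with simulated annealing, but built on the same idea of iteratively improving a tour), here is a minimal 2-opt sketch; the function names are illustrative:

```python
def tour_length(tour, dist):
    """Total length of a closed tour (returns to the start city)."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    """Improve a tour by reversing segments while any reversal helps.

    Classic 2-opt local search: it reaches a local optimum, which may
    or may not be the global optimum.
    """
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                # Reverse the segment best[i..j] and keep it if shorter.
                candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(candidate, dist) < tour_length(best, dist):
                    best, improved = candidate, True
    return best
```

Unlike Held-Karp, 2-opt scales to much larger instances, at the cost of optimality guarantees; in practice it is often used as a polishing step on top of a constructive heuristic such as nearest neighbor.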
This iterative process of analysis and refinement reflects the continuous improvement ethos at the heart of mathematical problem-solving.

In conclusion, tackling a complex mathematical problem like the Traveling Salesman Problem involves a multi-dimensional strategy that includes mathematical modeling, selecting appropriate algorithms, applying heuristic methods, utilizing optimization techniques, leveraging parallel computing, and continuously refining methodologies based on feedback. Such a comprehensive approach embodies the essence of mathematical thinking: rigorous, adaptable, and relentlessly curious. It underscores that solving math problems transcends mere calculation; it is about weaving together diverse strands of knowledge to illuminate paths through the labyrinth of numbers and logic.
2015 Shandong University Paper 211 "Translation Master's English" graduate entrance exam (total 100 points, 180 minutes)

Vocabulary and grammar
Directions: Beneath each sentence there are four words or phrases marked A, B, C and D. Choose the answer that best completes the sentence. Mark your answers on your ANSWER SHEET.

1. We'll be very careful and keep what you've told us strictly ________.
A. private  B. rigorous  C. mysterious  D. confidential
(1.5 points. Answer: D. The sentence means "We will be very careful and keep what you have told us strictly secret." confidential: intended to be kept secret; private: personal, one's own; rigorous: strict, severe; mysterious: hard to explain or understand.)
2. Before every board meeting, it is customary for the ________ of the previous meeting to be read out.
A. minutes  B. précis  C. notes  D. protocol
(1.5 points. Answer: A. Before every board meeting, the record of the previous meeting is customarily read out. minutes: the official written record of a meeting; précis: a summary; notes: informal jottings; protocol: a formal diplomatic agreement or code of procedure.)
3. He was barred from the club for refusing to ________ with the rules.
A. conform  B. abide  C. adhere  D. comply
(1.5 points. Answer: D. He was barred from the club for refusing to obey the rules. Only "comply" takes the preposition "with" in this sense; "conform" and "adhere" take "to", and "abide" takes "by".)
Common IMRD sentence patterns

The IMRD structure (Introduction, Methods, Results, Discussion) is the standard way to organize academic papers and reports. The following stock sentence patterns can help you organize and express your points:

1. Introduction:
- Introduce the background and importance of the research area: "The field of [research area] has been gaining increasing attention due to its potential impact on [specific aspect]."
- State the research question or objective: "This study aims to investigate the relationship between [variable 1] and [variable 2] in order to shed light on [research question]."
- Outline the methods and structure of the paper: "The following sections will present the methodology, results, and discussion of the study, providing a comprehensive analysis of the research findings."

2. Methods:
- Describe the research design and data collection: "A quantitative research design was employed, with data collected through surveys distributed to a sample of [number] participants."
- Explain the experimental procedure: "Participants were randomly assigned to either the control group or the experimental group, and were instructed to complete a series of tasks within a specified time frame."
- Address reliability and validity: "The study's methodology was carefully designed to minimize bias and ensure the validity of the results, with rigorous data analysis procedures implemented to enhance the reliability of the findings."

3. Results:
- Report the main findings: "The analysis revealed a significant positive correlation between [variable 1] and [variable 2], supporting the hypothesis that [research question]."
- Present data and figures in support of the conclusions: "Figure 1 illustrates the distribution of responses to the survey questions, indicating a clear trend towards [specific outcome]."
- Emphasize the importance of the results: "These findings have important implications for future research in the field, suggesting potential areas for further investigation and development."

4. Discussion:
- Relate the results to existing research: "The results of this study are consistent with previous research findings, highlighting the robustness of the relationship between [variables]."
- Explore possible explanations and implications: "It is likely that the observed differences in [variable] were due to [potential factors], which should be considered in future studies to enhance the understanding of the phenomenon."
- Offer suggestions and future research directions: "Future research should focus on exploring the mechanisms underlying the observed patterns, as well as investigating the long-term effects of [intervention] on [outcome]."

Using these stock patterns helps you organize and present your findings more clearly, making your argument and conclusions easier for readers to follow and accept.
What Role Will Robots Play in the Future? (English essay; three sample essays follow for reference)

Essay 1: The Role of Robots in Our Future Society

As technology advances at a breakneck pace, it's becoming increasingly clear that robots will play a major role in shaping our future society. The integration of artificial intelligence and robotics is already transforming various industries, and the implications of this technological revolution are both exciting and daunting. As a student studying computer science, I find myself both fascinated and apprehensive about the potential impact of robots on our daily lives.

One area where robots are poised to make a significant impact is in the workforce. Many routine and repetitive tasks that have traditionally been performed by humans are now being automated by robots. This trend is already visible in manufacturing plants, where robotic arms and assembly lines have replaced human workers on the production line. While this has led to concerns about job losses, it's important to recognize that robots are designed to augment human capabilities rather than replace them entirely.

In the future, we can expect robots to take on an even broader range of tasks, from construction and maintenance to healthcare and transportation. Imagine a world where robots are deployed to repair infrastructure, clean hazardous waste sites, or even assist in surgical procedures. The potential benefits of such applications are vast, including increased efficiency, reduced costs, and minimized risks to human workers.

However, the widespread adoption of robots also raises ethical and social concerns. One of the most pressing issues is the potential for job displacement and its impact on the workforce. As robots become more advanced and capable of performing complex tasks, many traditional jobs could become obsolete.
This could lead to widespread unemployment and economic disruption if appropriate measures are not taken to retrain and upskill workers for new roles.

Another concern is the potential for robots to perpetuate biases and discrimination. As artificial intelligence systems are trained on data that may reflect societal biases, there is a risk that robots could inadvertently discriminate against certain groups or make decisions that reinforce existing prejudices. It is crucial that the development and deployment of robots be guided by ethical principles and rigorous testing to ensure fairness and accountability.

Despite these challenges, the potential benefits of robotics are too significant to ignore. In the field of healthcare, for example, robots could revolutionize patient care and medical research. Robotic assistants could help elderly or disabled individuals with daily tasks, enabling them to maintain their independence and quality of life. In addition, robotic surgeons could perform complex procedures with unprecedented precision, reducing the risk of human error and improving patient outcomes.

Education is another area where robots could play a transformative role. Robotic tutors and interactive learning platforms could personalize the educational experience for each student, adapting to their individual needs and learning styles. This could help bridge the gap in educational opportunities and ensure that all students have access to high-quality education, regardless of their socioeconomic background or geographic location.

Furthermore, robots could contribute to scientific exploration and environmental conservation efforts. Robotic probes and rovers could explore distant planets and extreme environments that are inaccessible or too dangerous for human explorers.
Meanwhile, robotic systems could be deployed to monitor and protect endangered species, track environmental changes, and assist in disaster response and recovery efforts.

As we look to the future, it's clear that robots will become increasingly integrated into our daily lives. However, it's crucial that we approach this technological revolution with a balanced and responsible mindset. We must address the ethical and social implications of robotics, ensure that the benefits are distributed equitably, and prioritize the development of robust safeguards and regulations.

At the same time, we must embrace the transformative potential of robotics and harness it to tackle some of the world's greatest challenges. By working collaboratively with robots, we can enhance human capabilities, increase productivity, and unlock new frontiers of knowledge and innovation.

As a student, I am both excited and humbled by the prospects of a robotic future. It is our responsibility to shape this technological revolution in a way that serves the greater good of humanity. We must approach robotics with critical thinking, ethical reasoning, and a deep commitment to creating a more equitable, sustainable, and prosperous society for all.

In conclusion, the role of robots in our future society is multifaceted and complex. While they offer tremendous potential for improving various aspects of our lives, we must also confront the challenges and implications of their widespread adoption. By fostering a robust public discourse, promoting ethical and responsible development, and embracing a spirit of collaboration between humans and machines, we can harness the power of robotics to create a better world for generations to come.

Essay 2: The Role of Robots in the Future

As technology continues its rapid advancement, one area that is becoming increasingly prevalent is robotics. Robots are no longer just confined to science fiction movies and novels; they are very much a reality in the modern world.
From manufacturing plants to operating rooms, robots are being utilized in a wide variety of fields and industries. However, their role is only set to grow larger and more significant in the coming years and decades. As a student looking towards the future, it is fascinating to consider the ways in which robots may shape and define our world going forward.

One area where robots are likely to have a major impact is in the workforce. There are already examples of robots taking over manual labor and repetitive tasks in factories and warehouses. Their ability to work tirelessly around the clock with a high degree of accuracy makes them ideal for such roles. As artificial intelligence and machine learning capabilities improve, robots will become smarter and more adaptable, allowing them to take on increasingly complex jobs. This could displace human workers in certain fields, leading to social and economic upheaval. However, it could also free up humans to pursue more creative, intellectually stimulating professions. Rather than spending their days on assembly lines, people may be able to focus on innovation, entrepreneurship, science, and the arts.

Of course, developing smarter robots with advanced decision-making abilities raises ethical concerns that will need to be carefully navigated. How much autonomy should robots be given? What safeguards need to be put in place? These are just a few of the thorny questions that will arise as robots play a bigger role in society.

Another sphere where robots could be transformative is in healthcare and elder care. Robots may be enlisted to help care for the sick and elderly, roles that are often physically and emotionally demanding for human caregivers. Robotic nurses could monitor patients, dispense medication, and help with mobility with a high degree of efficiency and precision. For the elderly who wish to live independently, companion robots could provide company, reminder services, and assistance with daily tasks.
Japan, which has an aging population, is already a leader in developing care robots for this purpose.

The military applications of robotics are also significant. Unmanned drones are already used extensively for surveillance and airstrikes, keeping human pilots out of harm's way. In the future, autonomous robots could play a larger role on the battlefield, raising ethical issues around the use of AI to make decisions impacting human life. That said, robots could also be used to dispose of landmines and other unexploded ordnance, saving many lives.

More broadly, robots may become mainstream consumer products that we interact with regularly at home, work, and in public spaces. Personal assistant robots could help with household chores like cleaning and cooking. Office robots may schedule meetings and take notes. Shopping mall robots could help customers find stores and promotions. City robots could monitor infrastructure, direct traffic, and keep spaces clean and safe. The possibilities are vast once you begin to imagine how robots could be integrated into our daily lives and environments.

For those pursuing careers in science, technology, engineering, and mathematics (STEM fields), the rise of robotics offers particularly exciting prospects. There will likely be a huge demand for robot designers, programmers, and technicians as robots proliferate across industries and public spaces. Developing improved artificial intelligence, strengthening cybersecurity, and enhancing human-robot interactions are just a few of the challenges that will need to be tackled. Bright minds entering these cutting-edge fields could help shape the future of robotics.

With such a meteoric rise on the horizon, it seems robots are destined to go from science fiction to an inescapable part of everyday reality. They have the potential to enhance human productivity and quality of life in innumerable ways, yet they also present risks that will require careful ethics guidelines and regulations.
As a student today, I find it both exciting and somewhat daunting to ponder the robot-filled world that may await us. Perhaps one day, robots could even be helping students like me with homework and studying! One thing is for certain: staying ahead of the curve on robotics and other rapidly evolving technologies will be key to thriving in the world of tomorrow.

Essay 3: The Role of Robots in the Future

As technology continues to advance at a breakneck pace, one area that is witnessing tremendous growth and innovation is robotics. Robots are becoming increasingly prevalent in various sectors, from manufacturing and healthcare to space exploration and entertainment. As a student studying this fascinating field, I cannot help but wonder what role robots will play in the future and how they will shape our world.

To begin with, robots are expected to play a significant role in the manufacturing industry. Automation has already revolutionized the way goods are produced, and robots have become an integral part of assembly lines. However, the robots of the future will be even more advanced, with enhanced capabilities for precision, speed, and efficiency. They will be able to perform complex tasks with minimal human intervention, reducing costs and increasing productivity. Additionally, these robots will be able to adapt to changing conditions and learn from their experiences, making them even more valuable assets in the manufacturing process.

Another area where robots will likely have a profound impact is healthcare. Medical robots are already being used for surgical procedures, rehabilitation, and drug delivery. In the future, robots may be able to perform even more complex surgeries with greater accuracy and precision than human surgeons. They could also assist in the care of the elderly and people with disabilities, providing companionship and support.
Furthermore, robots could play a crucial role in telemedicine, allowing patients in remote areas to receive quality healthcare services.

Robots will also be instrumental in space exploration. NASA and other space agencies have already been using robotic rovers and probes to explore other planets and celestial bodies. In the future, more advanced robots could be sent to establish colonies on Mars and other planets, paving the way for human habitation. These robots would be capable of constructing habitats, extracting resources, and performing various tasks in harsh extraterrestrial environments.

Another area where robots could have a significant impact is disaster response and search and rescue operations. Robots could be designed to navigate through rubble and debris, locate survivors, and provide essential supplies in the aftermath of natural disasters or other emergencies. They could also be used to defuse bombs and handle hazardous materials, protecting human lives in dangerous situations.

In the realm of entertainment, robots could revolutionize the way we experience movies, video games, and theme parks. Imagine watching a movie where robots could create realistic 3D environments and characters that interact with the audience. Or imagine playing a video game where robots could create dynamic, ever-changing environments that adapt to your gameplay. Theme parks could also feature robotic attractions that provide immersive and interactive experiences for visitors.

However, as exciting as these prospects may seem, there are also concerns about the potential impact of robots on employment and job displacement. As robots become more capable and efficient, they could potentially replace human workers in various industries, leading to job losses and economic disruptions.
This is a valid concern that needs to be addressed by policymakers, educators, and society as a whole.

One potential solution could be to focus on retraining and reskilling programs, ensuring that workers have the necessary skills to adapt to the changing job market. Additionally, new job opportunities could arise in fields related to robotics, such as programming, maintenance, and design. Governments and companies could also explore ways to implement universal basic income or other safety nets to support those displaced by automation.

Furthermore, it is crucial to consider the ethical implications of advanced robotics. As robots become more autonomous and intelligent, we must grapple with questions of accountability, privacy, and the potential for misuse or unintended consequences. We must establish clear guidelines and regulations to ensure that robots are developed and used in a responsible and ethical manner.

In conclusion, the role of robots in the future is poised to be vast and far-reaching. They will likely revolutionize various industries, from manufacturing and healthcare to space exploration and entertainment. However, their impact will also bring challenges and disruptions that need to be addressed proactively. As a student studying this field, I am excited to be part of this technological revolution and contribute to shaping the responsible development and deployment of robots for the betterment of humanity.
Northwestern Seismological Journal (西北地震学报), Vol. 29, No. 1, March 2007

Parallel Finite-Difference Simulation of Seismic Wavefields in 3-D Viscoelastic Media

WANG De-li, YONG Yun-dong, HAN Li-guo, LIAN Yu-guang
(College of Geoexploration Science and Technology, Jilin University, Changchun 130026, China)

Abstract (the paper's English abstract with OCR damage repaired; the Chinese abstract states the same content): When the finite-difference (FD) method is used to model the propagation of seismic waves in complex 3-D viscoelastic media, it places heavy demands on computer memory and speed, so on a single PC or workstation 3-D calculations are limited to small grid sizes and short seismic-wave traveltimes. This paper introduces a parallel FD algorithm based on the Message Passing Interface (MPI) that can simulate the wavefield of large 3-D viscoelastic complex models on a PC cluster, and can predict the kinematic and dynamic properties of seismic waves propagating under such conditions. This is of theoretical and practical significance for better understanding wave-propagation phenomena, interpreting real seismic data, and solving inverse problems.

Keywords: 3-D viscoelastic media; seismic wavefield simulation; parallel computing; MPI; finite difference
CLC number: P315.3+1, P631.4; Document code: A; Article ID: 1000-0844(2007)01-0030-05

0 Introduction (translated from the Chinese)
Seismic forward modeling is an important foundation of seismic exploration and seismology. It not only plays an important role in the exploration of oil, natural gas, coal, and metallic and non-metallic mineral resources, and in engineering and environmental geophysical research, but is also widely applied in seismic hazard prediction, seismic zonation, and the study of crustal structure and the Earth's interior.
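The core of such a parallel scheme is a finite-difference update plus a per-step halo (ghost-cell) exchange between neighboring subdomains. The sketch below illustrates that pattern on a 1-D acoustic toy problem, with the MPI exchange replaced by direct array copies so it runs serially; in a real code each subdomain would live on its own rank and the two copy lines would become MPI_Sendrecv calls. All names are illustrative assumptions, and this is not the authors' actual 3-D viscoelastic code.

```python
import numpy as np

def step(u_prev, u_curr, r=0.25):
    """One leapfrog update of the 1-D acoustic wave equation.

    r = (c*dt/dx)^2 is the squared Courant number; boundary points
    are held at zero.
    """
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + r * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    return u_next

def run_monolithic(u0, steps):
    """Reference solution computed on the whole grid at once."""
    u_prev, u_curr = u0.copy(), u0.copy()
    for _ in range(steps):
        u_prev, u_curr = u_curr, step(u_prev, u_curr)
    return u_curr

def run_decomposed(u0, steps):
    """Same update with the grid split into two subdomains that swap
    one ghost cell per step (the halo exchange an MPI rank would do)."""
    n = len(u0) // 2
    # Each subdomain carries one ghost cell at the internal boundary.
    lp, lc = u0[:n + 1].copy(), u0[:n + 1].copy()
    rp, rc = u0[n - 1:].copy(), u0[n - 1:].copy()
    for _ in range(steps):
        ln, rn = step(lp, lc), step(rp, rc)
        ln[-1] = rn[1]   # fill left ghost from right's first interior point
        rn[0] = ln[-2]   # fill right ghost from left's last interior point
        lp, lc = lc, ln
        rp, rc = rc, rn
    # Drop the ghost cells when stitching the global field back together.
    return np.concatenate([lc[:-1], rc[1:]])
```

Because each subdomain performs exactly the same arithmetic as the monolithic run, the decomposed result matches the reference solution, which is the key correctness property of a halo-exchange decomposition.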
Research Approach and Methodology

This project began with a simple question: Could scientific research help us understand and perhaps measure spiritual growth? In other words, could the same research tools used in the marketplace to measure consumer attitudes and behaviors also be used by local churches to measure the spiritual beliefs and behaviors of their congregations? We believed the answer was yes.

We have refined our research over the course of four years, more than 200 churches and 80,000 individual surveys. While we're still in the early phases of our work, we feel confident that the research survey tool and analysis have proven capable of producing valid and valuable insight for church leaders.

Here is a brief overview of our research approach and methodology.

APPROACH

Our approach focused on three key areas and questions related to those areas:
•Segments: What are the different groups/segments of people the church might be looking to serve?
•Needs: What spiritual growth needs are being met, not being met well or not being met at all for each segment?
•Drivers and Barriers: What are the drivers of spiritual growth, and what are the barriers to spiritual growth?

These three areas provided the framework around which we organized the information we collected.

METHODOLOGY

Broadly speaking, there are two types of research methodology: qualitative and quantitative. We used both qualitative and quantitative methodologies, and then employed analytical techniques and processes to review the data.

Qualitative (Gathering Insights)

This is typically a one-on-one process in which a researcher poses questions directly to an individual. The questions often ask not only for information and opinions but also allow the interviewer to probe the richness of emotions and motivations related to the topic. Researchers use qualitative data to help clarify hypotheses, beliefs, attitudes and motivations.
Qualitative work is often a first step because it enables a researcher to fine-tune the language that will be used in quantitative tools.

Quantitative (Establishing Statistical Reliability)

This process utilizes detailed questionnaires often distributed to large numbers of people. Questions are typically multiple choice and participants choose the most appropriate response among those listed for each question. Quantitative research collects a huge amount of data, which can often be generalized to a larger population and allow for direct comparisons between two or more groups. It also provides statisticians with a great deal of flexibility in analyzing the results.

Analytical Process and Techniques (Quantifying Insights and Conclusions)

Quantitative research is followed by an analytical plan designed to process the data for information and empirically based insights. Three common analytical techniques were used in our three research phases:
•Correlation Analysis: Measures whether or not, and how strongly, two variables are related. This does not mean that one variable causes the other; it means they tend to follow a similar pattern of movement.
•Discriminant Analysis: Determines which variables best explain the differences between two or more groups. This does not mean the variables cause the differences to occur between the groups; it means the variables distinguish one group from another.
•Regression Analysis: Used to investigate relationships between variables. This technique is typically utilized to determine whether or not the movement of a defined (or dependent) variable is caused by one or more independent variables.

We used both qualitative and quantitative methods in 2004 when we focused exclusively on Willow Creek Community Church and also in our 2007-2008 research involving hundreds of churches. Here is a summary of the methodology used in our most recent work.

Qualitative Phase (December 2006)
•One-on-one interviews with sixty-eight congregants.
We specifically recruited people in the more advanced stages of spiritual growth. Our goal was to capture language and insights to help guide the development of our survey questionnaire.
•Interview duration: 30-45 minutes
•Focused on fifteen topics, including spiritual life history, church background, personal spiritual practices, spiritual attitudes and beliefs, etc.

Quantitative Phases

Phase 1 (January-February 2007)
•E-mail survey fielded with seven churches diverse in geography, size, ethnicity and format
•Received 4,943 completed surveys, resulting in 1.4 million data points
•Utilized fifty-three sets of questions on topics such as:
o Attitudes about Christianity and one's personal spiritual life
o Personal spiritual practices, including statements about frequency of Bible reading, prayer, journaling, etc.
o Satisfaction with the role of the church in spiritual growth
o Importance and satisfaction of specific church attributes (e.g. helps me understand the Bible in depth) related to spiritual growth
o Most significant barriers to spiritual growth
o Participation and satisfaction with church activities, such as weekend service, small groups, youth ministries and serving

Phase 2 (April-May 2007)
•E-mail survey fielded with twenty-five churches diverse in geography, size, ethnicity and format
•Received 15,977 completed surveys
•Utilized refined set of questions based on Phase 1

Phase 3 (October-November 2007 and January-February 2008)
•E-mail survey fielded with 487 churches diverse in geography, size, ethnicity and format, including ninety-one churches in seventeen countries
•Received 136,547 completed surveys
•Utilized refined set of questions based on Phase 2 research
o Expanded survey to include twenty statements about core Christian beliefs and practices from The Christian Life Profile Assessment Tool Training Kit.*
o Added importance and satisfaction measures for specific attributes related to weekend services, small groups, children's and youth ministries, and
serving experiences.

Analytical Process and Resources

Each phase of our research included an analytical plan executed by statisticians and research professionals. These plans utilized many analytical techniques, including correlation, discriminate and regression analyses. In FOLLOW ME, our observations about the predictability of spiritual factors are derived primarily from extensive discriminate analysis. To put our analytical approach into perspective, here are three points of explanation about the nature of our research philosophy.

1. Our research is a "snapshot" in time.

Because this research is intentionally done at one point in time, like a snapshot, it is impossible to determine with certainty that a given variable, such as "reflection on Scripture," distinguishes one segment from another (for example, Growing in Christ compared with Close to Christ). To accomplish this, we would have to assess the spiritual development of the same people over a period of time (longitudinal research).

However, the fact that increased levels of reflection on Scripture occur in the Close to Christ segment compared with the Growing in Christ segment strongly suggests that reflection on Scripture does influence spiritual movement between these segments (Movement 2). While it does not determine conclusively that a given variable "causes" movement, discriminate analysis identifies the factors that are the most differentiating characteristics between the two segments. So we infer from its findings that certain factors are more "predictive," and consequently more influential to spiritual growth.

Our ultimate goal is to measure the same people over multiple points in time (longitudinal research) in order to understand more clearly the causal effects of spiritual growth. However, even then we know there will be much left to learn, and much we will never understand about spiritual formation.

* Randy Frazee, The Christian Life Profile Assessment Tool Training Kit (Grand Rapids, MI: Zondervan, 2005).
The attitudes and behaviors we measure today should not be misinterpreted as defining spiritual formation. Instead they should be considered instruments used by the Holy Spirit to open our hearts for his formative work.

2. The purpose of this research is to provide a diagnostic tool for local churches.

Our intent is to provide a diagnostic tool for churches that is equivalent to the finest marketplace research tools at a fraction of the marketplace cost. This is "applied" research rather than "pure" research, meaning that its intent is to provide actionable insights for church leaders, not to create social-science findings for academic journals.

In a nutshell, while we intend to reinforce our research base with longitudinal studies, we chose to draw conclusions about the predictability and influence of spiritual attitudes and behaviors based on point-in-time research evaluated through discriminate analysis. This approach meets the most rigorous standards of market research that routinely influence decision making at some of the most respected and successful organizations in the country.

3. Research is an art as well as a science.

While the data underlying our findings is comprehensive and compelling as science, we have also benefited from the art of experts whose judgment comes from years of experience. The two research experts closest to this work represent almost fifty years of wide-ranging applied research projects. Eric Arnson began his career in quantitative consumer science at Procter & Gamble, and ultimately became the North American brand strategy director for McKinsey and Company. Terry Schweizer spent twenty years with the largest custom market research organization in the world, running its Chicago office before contributing full-time to REVEAL's final development phase.
Eric and Terry poured the benefit of their expertise and judgment into every finding in this book, which gives us confidence that the art component of our research is on very solid ground.

A Note about the Top Five Catalysts for Each Movement

You may have noticed that the order of the most influential factors shifts slightly between the four independent categories of spiritual catalysts and the lists of "top five catalysts" for Movements 1 and 2. For example, chart 2-7 shows that reflection on Scripture is the most influential personal spiritual practice for each movement. But when we list the top five catalysts for Movements 1 and 2 (charts 3-5 and 3-9), prayer appears to be more influential than reflection on Scripture. The apparent discrepancies are a function of the discriminate analysis.

The top five catalysts for each movement were determined by evaluating all fifty-plus spiritual factors through the discriminate lens, which at times recalibrates the predictability of one factor versus another. That happens when a portion of one factor's predictive power is shared by another. For example, as noted, reflection on Scripture was more highly predictive of spiritual movement than prayer when we looked at personal spiritual practices across the three movements. However, when reflection on Scripture was analyzed alongside all fifty-plus catalysts, its level of influence was shared to some extent with another factor, possibly the belief in salvation by grace. In this case, because the discriminate analysis is looking for the best combination of top five catalysts to explain differences between two segments, it is possible that reflection on Scripture ranked lower than prayer because part of its predictive power is explained by the salvation-by-grace factor.

Confused? One way to think about this is to consider the Food Pyramid, which includes five basic food groups: grains, vegetables, fruits, dairy and meat. Each food group could list its most nutritious foods in order.
But when you pool all possible foods together looking for the best food plan for a young child, it is possible that not all the top-ranked nutritious foods are on the list. Two reasons account for this. First, when looking for the best combination of nutrients, some foods will be more necessary for a young child than others; that influences the list. Second, some of those foods will have vitamins and nutrients that are redundant with others, and that affects which grains, vegetables and other foods make the best food plan. So the best combination of foods for a young child won't necessarily include the most nutritious foods in each of the food group categories, and the rank order of "best" foods could vary as well.

This is analogous to our efforts to find the best combination of spiritual catalysts for the three movements of spiritual growth. The bottom line is that pouring all the spiritual catalysts into one discriminate analysis bucket can shake up the order of the most influential (the top five), because the predictive power of all the factors has to recalibrate in relation to the others.

In summary, we have employed the highest applied research standards available, including a robust qualitative process and three waves of quantitative surveys across hundreds of diverse churches. While there is much more work yet to do, we are confident that the insights and findings in Follow Me reflect a very high level of research excellence.
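The "shared predictive power" effect described above is a general property of multivariate models and can be sketched numerically. The sketch below uses made-up numbers (not data from this study) and ordinary least squares rather than the study's discriminate analysis, but the mechanism is the same: a factor that looks dominant on its own loses apparent weight once a correlated factor enters the model alongside it.

```python
# Toy illustration (invented data) of how two correlated predictors share
# predictive power in a joint model.

def mean(v):
    return sum(v) / len(v)

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ols2(x1, x2, y):
    """Least-squares slopes for y ~ b1*x1 + b2*x2 (two-predictor normal equations)."""
    m1, m2, my = mean(x1), mean(x2), mean(y)
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 ** 2
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

# Hypothetical scores: "scripture" and "grace" move together, and both track
# the outcome "growth".
scripture = [1, 2, 3, 4, 5, 6, 7, 8]
grace     = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0, 7.2, 7.9]
growth    = [2.0, 4.1, 6.3, 7.8, 10.2, 12.1, 14.3, 15.8]

print(pearson(growth, scripture))   # very strong on its own (~0.999)
b1, b2 = ols2(scripture, grace, growth)
print(b1, b2)                       # jointly, scripture's weight is shared with grace
```

On its own, `scripture` predicts `growth` with a slope of about 2; once the correlated `grace` variable is added, much of that weight shifts to `grace`, which is exactly the recalibration the text describes.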
总体设计 General design
设计依据 Design basis
建筑红线 Building line
建筑系数 Coefficient of building occupation
界区 Battery limit
有效面积 Effective area
使用面积 Usable area
结构面积 Structural area
建筑面积 Building area
建筑面积密度 Density of building area
竖向布置 Vertical planning
高程 Altitude
等高线 Contour line
相对标高 Relative elevation
绝对标高 Absolute altitude
地震基本烈度 Basic earthquake intensity
抗震设防烈度 Fortification intensity
风玫瑰 Wind rose
建筑朝向 Building orientation
建筑间距 Building distance
公顷 Hectare
指北针 North arrow
测北 Geographic north
建北 Plant north
气象 Meteorology
日照 Sunshine
天然采光 Natural lighting
人工照明 Artificial lighting; artificial illumination
照度 Degree of illumination
通风 Ventilation
正压通风 Positive-pressure ventilation
噪声 Noise
隔声 Sound insulation
吸声 Sound absorption
保温隔热 Heat insulation
露点 Dew point
冷桥 Cold bridge
遮阳 Sunshade
恒温恒湿 Constant temperature & constant humidity
消声 Noise elimination; noise reduction
防振 Antivibration

4.2 建筑一般词汇 Conventional terms of architecture
方案 Scheme; draft
草图 Sketch
透视图 Perspective drawing
建筑构图 Architectural composition
坐标 Coordinate
纵坐标 Ordinate
横坐标 Abscissa
跨度 Span
开间 Bay
进深 Depth
层高 Floor height
净高 Clear height; headroom
模数 Module; modulus
裙房 Skirt building
楼梯 Stair
梯段 Stair flight
楼梯平台 Stair landing
安全出口 Safety exit
疏散楼梯 Escape staircase
楼梯间 Stair well
封闭楼梯间 Enclosed staircase
防烟楼梯间 Smoke-prevention stair well
消防电梯间 Emergency elevator well
自动扶梯 Escalator
中庭 Atrium
疏散走道 Escape corridor; escape way
耐火等级 Fire-resistive grade
生产类别 Classification of production
耐火极限 Duration of fire resistance
防火间距 Fire-break distance
泄压面积 Area of pressure release
闷顶 Mezzanine
砖标号 Grade of brick; strength of brick
承重墙 Bearing wall
非承重墙 Non-bearing wall
挡土墙 Retaining wall
填充墙 Filler wall
围护墙 Curtain wall; cladding wall; enclosure wall
女儿墙 Parapet wall
隔墙 Partition
(浴室、厕所)隔断 Stall
窗间墙 Pier
墙垛 Pillar
过梁 Lintel
圈梁 Gird; girt; girth
防潮层 Damp-proof course
勒脚 Plinth
横梁 Transverse beam
纵墙 Longitudinal wall
山墙 Gable; gable wall
防火墙 Fire wall
压顶 Coping
勾缝 Pointing
砖砌平拱 Brick flat arch
预埋件 Embedded inserts
直爬梯 Ladder
栏杆 Railing
防腐蚀 Corrosion resistance; anticorrosion
化学溶蚀 Chemical erosion
膨胀腐蚀 Expansion corrosion
化学腐蚀 Chemical corrosion
电化学腐蚀 Electrochemical corrosion
晶间腐蚀 Intergranular corrosion
气相腐蚀 Gaseous corrosion
液相腐蚀 Liquid-phase corrosion
固相腐蚀 Solid-phase corrosion
腐蚀裕度 Allowance for corrosion
锈蚀 Rusting

4.3 建筑材料 Building materials
级配砂石 Graded sand & gravel
素土夯实 Rammed earth
灰土 Lime-soil
素混凝土 Plain concrete
钢筋混凝土 Reinforced concrete (R.C.)
细石混凝土 Fine aggregate concrete
轻质混凝土 Lightweight concrete
加气混凝土 Aerocrete; aerated concrete
陶粒混凝土 Ceramsite concrete
水泥膨胀珍珠岩 Cement & expanded perlite
岩棉 Mineral wool; mine wool
沥青 Asphalt
卷材 Felt
玛蹄脂 Asphalt mastic
粘土砖 Clay brick
釉面砖 Porcelain enamel brick
空心砖 Hollow brick
砌块 Block
缸砖 Quarry brick
锦砖 Mosaic
地面砖 Paving brick
防滑地砖 Non-slip brick
耐酸砖(板) Acid-resistant brick (tile)
胶泥 Mastic
粘土瓦 Clay tile
玻璃瓦 Enamelled tile
波形镀锌钢板 Galvanized corrugated steel sheet
玻璃钢瓦 Glass-fiber reinforced plastic tile
彩色压型钢(铝)板 Coloured corrugated steel (aluminium) plate; profiled coloured steel (aluminium) plate
石棉水泥瓦 Asbestos-cement sheet
水磨石 Terrazzo
水刷石 Granite plaster
花岗石 Granite
磨光花岗石 Polished granite
剁斧石 Artificial stone
大理石 Marble
水泥砂浆抹面 Cement plaster
石灰砂浆抹面 Lime plaster
水泥石灰砂浆抹面 Cement-lime plaster
刀灰打底 Hemp-cut and lime as base
原木 Log
方木 Square timber
板材 Plank
胶合板 Plywood
三夹板 3-ply plywood
五夹板 5-ply plywood
平板玻璃 Flat glass
浮法玻璃 Float-process glass
磨砂玻璃 Ground glass; frosted glass
起玻璃 Prism glass
夹丝玻璃 Wire glass
夹层玻璃 Sandwich glass
中空玻璃 Hollow glass
钢化玻璃 Tempered glass
镀膜玻璃 Coated glass
有机玻璃 Organic glass

4.4 建筑构造及配件 Building construction & components
铺砌 Paving
地坪 Grade
基层 Bedding
素土夯实 Rammed earth
垫层 Base
结合层 Bonding course
面层 Covering
隔离层 Insulation course
活动地板 Raised floor; movable floor; access floor
篦子板 Grating
地面提示块 Ground prompt
遮阳板 Sunshade
窗套 Window moulding
护角 Curb guard
防水层 Waterproof course
找平层 Leveling course
隔热层 Heat insulation course
保温层 Thermal insulation course
檩条 Purlin
天窗 Skylight
天棚 Ceiling
吊顶 Suspended ceiling
吊顶龙骨 Ceiling joist
雨水口 Drain gulley
水斗 Leader head
雨水管 Leader; downspout
天沟 Valley
挑檐 Overhanging eave
檐口 Eave
泛水 Flashing
分水线 Watershed
檐沟 Eave gutter
汇水面积 Catchment area
雨罩 Canopy
散水 Apron
坡道 Ramp
台阶 Entrance steps
保温门 Thermal insulation door
隔声门(窗) Sound insulation door (window); acoustical door (window)
防火门(窗) Fire door (window)
冷藏门 Freezer door
安全门 Exit door
防护门(窗) Protection door (window)
屏蔽门(窗) Shield door (window)
防风砂门 Radiation resisting door (window)
密闭门(窗) Weathertight door (window)
泄压门(窗) Pressure release door (window)
壁柜门 Closet door
变压器间门 Transformer room door
围墙门 Gate
车库门 Garage door
保险门 Safe door
引风门 Ventilation door
检修门 Access door
平开门(窗) Side-hung door (window)
推拉门(窗) Sliding door (window)
弹簧门 Swing door
折叠门 Folding door
卷帘门 Rolling door
转门 Revolving door
夹板门 Plywood door
拼板门 FLB door (framed, ledged and battened door); matchboard door
实拼门 Solid door
镶板门 Panel door
镶玻璃门 Glazed door
玻璃门 Glass door
钢木门 Steel & wooden door
百页门 Shutter door
连窗门 Door with side window
传递窗 Delivery window
观察窗 Observation window
换气窗 Vent sash
上悬窗 Top-hung window
中悬窗 Center-pivoted window
下悬窗 Bottom-hung window
立转窗 Vertically pivoted window
固定窗 Fixed window
单层窗 Single window
双层窗 Double window
百页窗 Shutter
带形窗 Continuous window
子母扇窗 Attached sash window
组合窗 Composite window
落地窗 French window
玻璃幕墙 Glazed curtain wall
门(窗)框 Door (window) frame
拼樘料 Transom (横), mullion (竖)
门(窗)扇 Door (window) leaf (平开窗扇 casement sash)
纱扇 Screen sash
亮子 Transom, fanlight
门心板 Panel
披水板 Weather board
贴脸板 Trim
筒子板 Lining
窗台板 Sill plate
防护铁栅 Barricade; iron grille; grating
合页 Butts; hinges; butt hinges
执手 Knob
撑档 Catch
滑道 Sliding track
插销 Bolt
拉手 Pull
推板 Push plate
门锁 Mortice lock; door lock
执手锁 Mortice lock with knob

4.5 建筑结构 Building structure
4.5.1 荷载 Load
活荷载 Live load
静荷载、恒载 Dead load
静力荷载 Static load
移动荷载 Moving load
动力荷载 Dynamic load
冲击荷载 Impact load
附加荷载 Superimposed load
规定荷载(又称标准荷载) Specified load
集中荷载 Concentrated load
分布荷载 Distributed load
设计荷载 Design load
轴向荷载 Axial load
偏心荷载 Eccentric load
风荷载 Wind load
风力、风压 Wind force, wind pressure
雪荷载 Snow load
屋面积灰荷载 Roof ash load
(吊车)最大轮压 Maximum wheel load
吊车荷载 Crane load
安装荷载 Erection load
施工荷载 Construction load
不对称荷载 Unsymmetrical loading
重复荷载 Repeated load
刚性荷载 Rigid load
柔性荷载 Flexible load
临界荷载 Critical load
容许荷载 Admissible load; allowable load; safe load
极限荷载 Ultimate load
条形荷载 Strip load
破坏荷载 Failure load; load at failure
地震荷载 Seismic load; earthquake load
荷载组合 Combination of loads

4.5.2 地基和基础 Soil and foundation
地基 (Bed) soil
天然地基 Natural ground
人工地基 Artificial ground
混凝土基础 Concrete foundation; concrete footing
毛石基础 Rubble masonry footing; (rubble) stone footing
砖基础 Brickwork footing
桩基础 Pile foundation
设备基础 Equipment foundation
机器基础 Machine foundation
独立基础 Individual footing; isolated foundation; pad foundation
联合基础 Combined footing
大块式基础 Massive foundation
条形基础 Strip foundation; strip footing; strap footing; continuous footing
方形基础 Square footing
杯形基础 Footing socket
板式基础 Slab foundation; mat footing
阶梯形基础 Stepped foundation; benched foundation (stepped footing)
扩展基础 Spread footing
扩底基础 Under-reamed foundation
浮筏基础 Raft foundation; buoyant foundation; floating foundation
沉箱(井)基础 Caisson foundation
构架式基础 Frame foundation
深基础 Deep foundation
基础 Footing; foundation
基槽 Foundation ditch; foundation trench
基坑 Foundation pit
基础板 Foundation slab
基础梁 Foundation beam; footing beam
基础底面 Foundation base
基础底板 Foundation mat
基础垫层 Foundation bed
基础埋置深度 Depth of embedment of foundation
地下连续墙 Underground continuous wall
打桩 Pile driving
打桩机 Pile engine; pile driver; ram machine
钢桩 Steel pile
木桩 Wood pile; timber pile
钢筋混凝土桩 Reinforced concrete pile
砂桩 Sand pile
石灰桩 Lime pile; lime column; quicklime pile
单桩 Single pile
群桩 Pile group; pile cluster
斜桩 Batter pile
预制桩 Precast pile
灌注桩 In-situ pile; cast-in-place pile; cast-in-situ pile; filling pile
板桩 Sheet pile
挤密桩 Compaction pile
挤密砂桩 Sand compaction pile
灰土挤密桩 Lime-soil compaction pile
钻孔桩 Bored pile
打入桩 Driven pile
摩擦桩 Friction pile; buoyant pile
端承桩 End bearing pile; point bearing pile; column pile (柱桩)
支承桩 Bearing pile
抗拔桩 Tension pile; uplift pile
抗滑桩 Anti-slide pile
桩承台 Pile cap
桩帽 Pile cap
桩头 Pile crown; pile head
桩端 Pile tip
桩身 Pile shaft
桩距 Pile spacing
桩钢筋笼 Pile cage
试桩 Test pile
桩荷载试验 Pile load test
桩的动荷载试验 Dynamic load test of pile
桩的侧向荷载试验 Lateral pile load test
桩的极限荷载 Ultimate pile load
桩承载能力 Pile capacity; bearing capacity of a pile; carrying capacity of a pile
土压力 Earth pressure
主动土压力 Active earth pressure
被动土压力 Passive earth pressure
静止土压力 Earth pressure at rest
容许地耐力 Allowable bearing strength
冰冻深度 Frozen depth; frost depth; depth of frost penetration
防冻深度 Frost-proof depth
粘土类土 Clayey soil
轻亚粘土 Sandy loam
亚粘土 Sandy clay; loam
砂质土 Sandy soil
砂砾石 Sandy gravel stratum
膨胀土 Expansive soil
硬质土、硬盘岩、硬土层 Hardpan
液限 Liquid limit
塑限 Plastic limit
塑性指数 Index of plasticity
松软土 Mellow soil; spongy soil
回填土 Backfill; refilling
杂填土 Miscellaneous fill
地表水 Surface water
地下水 Groundwater
地下水位 Groundwater elevation; groundwater level; groundwater table
容重 Unit weight
干容重 Dry unit weight
湿容重 Wet unit weight
饱和容重 Saturated unit weight
不均匀沉降 Unequal settlement; differential settlement
地基处理 Ground treatment
地基加固 Ground stabilization; soil improvement
土质查勘 Soil exploration
地质勘察 Geological exploration
沉陷 Settlement
倾斜 Obliquity; inclination
滑移 Sliding
夯实土 Compacted soil
夯实填土 Compacted fill
夯实回填土 Tamped backfill
夯实分层厚度 Compacted lift
持力层 Bearing stratum; supporting course
管道与基础相碰 Pipeline interferes with foundation

4.5.3 一般结构用语 Terms for general structures
建筑结构 Building structure
建筑物 Building
结构形式 Structural type
混凝土(砼)结构 Concrete structure
砌体结构 Masonry structure
砖砌结构 Brick structure
石砌结构 Stone structure
砖砼结构 Brick and concrete structure
钢结构 Steel structure
结构型钢 Shape steel
木结构 Timber structure
组合结构 Composite structure
框架结构 Frame structure
梁板结构 Beam and slab structure
构件 Member
承重构件 Load-bearing member
结构构件 Structural member
肋形屋盖 Ribbed roof
肋形楼板 Ribbed floor slab
无梁楼板 Flat plate; flat slab
桁架、屋架 Truss; roof truss
三角形屋(桁)架 Triangular truss
梯形屋(桁)架 Trapezoidal truss
拱形屋架 Arch roof truss
折线形屋架 Segmental roof truss
弓形桁架 Bowstring truss
框架 Frame
排架 Bent frame
刚架 Rigid frame
门架 Portal frame
抗风构架 Wind frame
抗震构架 Aseismic frame
梁 Beam; girder
大梁 Girder
主梁 Principal beam; primary beam
次梁 Secondary beam
加腋梁 Haunched beam
简支梁 Simply supported beam
固端梁 Fixed beam
悬臂梁 Cantilever beam
连续梁 Continuous beam
托梁 Spandrel
圈梁 Girth
过梁 Lintel
曲梁 Curved beam; bow beam
基础梁 Foundation beam
吊车梁 Crane girder
T形梁 T-beam
柱 Column
组合柱 Combination column
立柱 Post
吊车柱 Crane column
抗风柱 End panel column
墙 Wall
板 Slab; plate
承重墙 Bearing wall
柱网 Column grid
支撑系统 Brace system
柱间支撑 Portal bracing between columns
屋盖支撑 Roof bracing
垂直支撑 Vertical bracing
水平支撑 Horizontal bracing
剪刀撑 Cross bracing
临时顶撑、支撑 Shoring
桁架式支撑 Trussed bracing
斜撑 Knee brace
上弦 Top chord
下弦 Bottom chord
节间 Panel
节点 Panel point
压杆 Compression member; strut
拉杆 Tension member; tie-beam
腹杆 Web member
斜杆 Diagonal member
斜腹杆 Diagonal web member
吊杆 Hanger rod; sag rod
系杆、拉杆 Tie bar; sag rod
天窗架 Skylight frame
天窗 Monitor; skylight
托座、牛腿 Bracket
檩条 Purlin
连接 Connection
接点 Joint
铰接点 Hinged joint
固接点 Fixed joint
安装接点 Erection joint
拼接接点 Splice joint
节点板、连接板 Gusset plate; connecting plate
加劲板 Stiffener plate
支撑板 Bearing plate
填隙板 Filler plate
梁柱接头 Beam-column connection
截面 Cross section
拱 Arch
壳 Shell
伸缩缝 Expansion joint
沉降缝 Settlement joint
施工缝 Construction joint
防震缝 Aseismic joint

4.5.4 结构理论用语 Terms for theory of structures
a) 设计方法术语 Terms for design methods
结构设计 Structural design
按极限强度设计 Ultimate strength design
按许可应力设计 Working stress design
按承载能力设计 Loading capacity design
按稳定性设计 Design according to stability
按变形设计 Design according to deformation
结构分析 Structural analysis
结构计算 Structural calculation
静定结构 Statically determinate structure
超静定结构 Statically indeterminate structure
精确计算 Rigorous calculation
近似计算 Approximate calculation
安全等级 Safety class
极限状态 Limit state
摩擦系数 Coefficient of friction
质量密度 Mass density
重力密度 Weight (force) density

b) 结构的作用效应术语 Terms for action effects on structures
轴向力 Normal force
剪力 Shear force
弯矩 Bending moment
扭矩 Torque
水平推力 Horizontal thrust
水平拉力 Horizontal pull
水平(垂直)分力 Horizontal (vertical) component
合力 Resultant
正应力 Normal stress
剪应力 Shear stress
主应力 Principal stress
次应力 Secondary stress
预应力 Prestress
位移 Displacement
挠度 Deflection
变形 Deformation
弯曲 Bending; flexure
扭转 Torsion

c) 材料性能、结构抗力术语 Terms for properties of materials and resistance of structures
抗力 Resistance
抵抗力矩 Resisting moment
强度 Strength
刚度 Stiffness; rigidity
抗裂度 Crack resistance
抗压强度 Compressive strength
抗拉强度 Tensile strength
抗剪强度 Shear strength
抗弯强度 Flexural strength
抗扭强度 Torsional strength
抗裂强度 Cracking strength
屈服强度 Yield strength
疲劳强度 Fatigue strength
弹性模量 Modulus of elasticity
剪切模量 Shear modulus
变形模量 Modulus of deformation
稳定性 Stability
泊松比 Poisson ratio

d) 几何参数术语 Terms for geometric parameters
截面高度 Height of section
截面有效高度 Effective depth of section
截面宽度 Breadth of section
截面厚度 Thickness of section
截面面积 Area of section
截面面积矩 First moment of area
截面惯性矩 Second moment of area
截面抵抗矩(规范中用"模量") Section modulus
回转半径 Radius of gyration
偏心距 Eccentricity
长度 Length
跨度 Span
矢高 Rise
长细比 Slenderness ratio

4.5.5 砖石结构 Masonry construction
砖砌体 Brickwork
砌筑 Laying
砖标号 Grade of brick; strength of brick
毛石砌体 Rubble masonry
毛石砼砌体 Grouted rubble masonry
水泥砂浆 Cement mortar
石灰砂浆 Lime mortar
水泥石灰砂浆 Cement-lime mortar
砂浆找平 Mortar leveling

4.5.6 钢筋砼结构 Reinforced concrete structure
素混凝土 Plain concrete
钢筋混凝土 Reinforced concrete
预应力砼 Prestressed concrete
砼标号 Grade of concrete
钢号 Grade of steel
整体式结构 Monolithic structure
装配式结构 Assembled structure
预制构件 Precast element; fabricated element
现浇 Cast-in-situ; placed-in-situ
浇灌砼 Concreting; casting; pouring; placing
一次浇灌 At one pouring
分二次浇灌 Pours in two operations
二次灌浆 Final grouting
钢筋保护层 Reinforcement protective course
模板 Formwork; shuttering
拆模 Form stripping
预埋件 Embedded inserts
预留槽(洞) Groove (hole) to be provided
垫层 Bedding course
找平层 Leveling course; trowelling course

4.5.7 配筋 Reinforcement
配筋率 Percentage of reinforcement
主筋 Main reinforcement
分布筋 Distributing reinforcement
腹筋 Web reinforcement
开口箍筋 U-stirrups
闭口箍筋 Closed stirrups
环筋 Hoops
弯起钢筋 Bent-up bar
附加钢筋 Additional reinforcement
搭接接头 Lapped splice
焊接接头 Welded splice
钢筋间距 Spacing of reinforcement
吊筋 Suspender
双向配筋 Two-way reinforcement
螺旋筋 Spiral bar
温度筋 Temperature reinforcement
锚固长度 Anchorage length
埋入长度 Built-in length
螺孔直径 Diameter of bolt hole
螺孔中心线 Center line of bolt hole
锚栓、地脚螺栓 Anchor bolt
基础螺栓 Foundation bolt
安装螺栓 Erection bolt
普通螺栓 Common bolt
高强螺栓 High-strength bolt

4.5.8 焊接 Welding
电弧焊 Electric arc welding
气焊、氧-乙炔焊 Gas welding; oxy-acetylene welding
焊条 Welding electrode; welding rod
手工焊 Manual welding
自动焊 Automatic welding
车间焊接 Shop welding
现场焊接(工地焊接) Field welding
满焊 Full weld
搭接焊 Lap weld
贴角焊 Fillet weld
点焊 Spot weld
对接焊 Butt weld
仰焊 Overhead weld
双面贴角焊 Flat fillet weld in front and back
连续贴角焊 Continuous fillet weld
间断贴角焊 Intermittent fillet weld
安装焊缝 Erection weld
单V形对接焊 Single-V butt weld
双V形对接焊 Double-V butt weld

4.5.9 特种结构 Special structures
水塔 Water tower
高烟筒 Tall chimney
冷却水塔 Cooling tower
油罐 Oil tank
蓄水池 Water reservoir
管廊 Pipe rack
管架 Pipe support
球罐 Spherical tank
裂解炉 Cracking furnace
起重机、吊车 Crane; hoist
THEORY OF MODELING AND SIMULATION
by Bernard P. Zeigler, Herbert Praehofer, Tag Gon Kim
2nd Edition, Academic Press, 2000, ISBN: 0127784551

Given the many advances in modeling and simulation in recent decades, the need for a widely accepted framework and theoretical foundation has become increasingly pressing. Methods of modeling and simulation are fragmented across disciplines, making it difficult to reuse ideas from other disciplines and to work collaboratively in multidisciplinary teams. Model building and simulation are becoming easier and faster through implementation of advances in software and hardware. However, difficult and fundamental issues such as model credibility and interoperation have received less attention. These issues are now addressed under the impetus of the High Level Architecture (HLA) standard mandated by the U.S. DoD for all contractors and agencies.

This book concentrates on integrating the continuous and discrete paradigms for modeling and simulation. A second major theme is distributed simulation and its potential to support the co-existence of multiple formalisms in multiple model components. Prominent throughout are the fundamental concepts of modular and hierarchical model composition. These key ideas underlie a sound methodology for the construction of complex system models.

The book presents a rigorous mathematical foundation for modeling and simulation. It provides a comprehensive framework for integrating various simulation approaches employed in practice, including such popular modeling methods as cellular automata, chaotic systems, hierarchical block diagrams, and Petri nets. A unifying concept, called the DEVS Bus, enables models to be transparently mapped into the Discrete Event System Specification (DEVS). The book shows how to construct computationally efficient, object-oriented simulations of DEVS models on parallel and distributed environments.
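The DEVS formalism the book builds on specifies an atomic model by its state, time-advance function, internal and external transition functions, and output function. The following is a minimal, illustrative Python sketch of that interface (a fixed-service-time "processor" driven by a toy event loop); the class and function names are our own, not the book's API, and a real DEVS simulator handles coupled models and tie-breaking far more generally.

```python
# Minimal sketch of a DEVS atomic model plus a toy simulation loop.

INF = float("inf")

class Processor:
    """Atomic model: idle until a job arrives, then busy for a fixed time.
    (A job arriving mid-service simply restarts it in this toy model.)"""
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase = "idle"

    def ta(self):                       # time-advance function
        return self.service_time if self.phase == "busy" else INF

    def delta_ext(self, e, x):          # external transition: input x after elapsed e
        self.phase = "busy"

    def delta_int(self):                # internal transition: service completes
        self.phase = "idle"

    def out(self):                      # output function, invoked before delta_int
        return "done"

def simulate(model, arrivals, until=10.0):
    """Drive one atomic model with a time-ordered list of (time, event) arrivals."""
    t_last = 0.0
    outputs = []
    arrivals = list(arrivals)
    while True:
        t_next = t_last + model.ta()                  # next internal event
        t_arr = arrivals[0][0] if arrivals else INF   # next external event
        t = min(t_next, t_arr)
        if t == INF or t > until:
            break
        if t == t_next:                 # internal event fires first on ties
            outputs.append((t, model.out()))
            model.delta_int()
        else:                           # external event
            _, x = arrivals.pop(0)
            model.delta_ext(t - t_last, x)
        t_last = t
    return outputs

print(simulate(Processor(), [(1.0, "job"), (5.0, "job")]))
# jobs arriving at t=1 and t=5 finish at t=3 and t=7
```

The same `ta`/`delta_int`/`delta_ext`/`out` contract is what lets heterogeneous formalisms be wrapped and composed on a common "bus," which is the idea behind the DEVS Bus mentioned above.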
In designing integrative simulations, whether or not they are HLA compliant, this book provides the foundation to understand, simplify and successfully accomplish the task.

MODELING HUMAN AND ORGANIZATIONAL BEHAVIOR: APPLICATION TO MILITARY SIMULATIONS
Editors: Anne S. Mavor, Richard W. Pew
National Academy Press, 1999, ISBN: 0309060966. Hardcover, 432 pages.

This book presents a comprehensive treatment of the role of the human and the organization in military simulations. The issue of representing human behavior is treated from the perspective of the psychological and organizational sciences. After a thorough examination of the current military models, simulations and requirements, the book focuses on integrative architectures for modeling the individual combatant, followed by separate chapters on attention and multitasking, memory and learning, human decision making in the framework of utility theory, models of situation awareness and enabling technologies for their implementation, the role of planning in tactical decision making, and the issue of modeling internal and external moderators of human behavior.

The focus of the tenth chapter is on modeling of behavior at the unit level, examining prior work, organizational unit-level modeling, languages and frameworks. It is followed by a chapter on information warfare, discussing models of information diffusion, models of belief formation and the role of communications technology.
The final chapters consider the need for situation-specific modeling, prescribe a methodology and a framework for developing human behavior representations, and provide recommendations for infrastructure and information exchange. The book is a valuable reference for simulation designers and system engineers.

HANDBOOK OF SIMULATOR-BASED TRAINING
by Eric Farmer (Ed.), Johan Reimersma, Jan Moraal, Peter Jorna
Ashgate Publishing Company, 1999, ISBN: 0754611876.

The rapidly expanding area of military modeling and simulation supports decision making and planning, design of systems, weapons and infrastructure. This particular book treats a third major area of modeling and simulation: training. It starts with a thorough analysis of training needs, covering mission analysis, task analysis, and trainee and training analysis. The second section of the book treats the issue of training program design, examining current practices, principles of training and instruction, sequencing of training objectives, specification of training activities and scenarios, and the methodology of design and optimization of training programs. In the third section the authors introduce the problem of training media specification and treat technical issues such as databases and models, human-simulator interfaces, visual cueing and image systems, haptic, kinaesthetic and vestibular cueing, and, finally, the methodology for training media specification. The final section of the book is devoted to training evaluation, covering the topics of performance measurement, workload measurement, and team performance. In the concluding part the authors outline the trends in using simulators for training. The primary audience for this book is the community of managers and experts involved in training operators.
It can also serve as a useful reference for designers of training simulators.

CREATING COMPUTER SIMULATION SYSTEMS: An Introduction to the High Level Architecture
by Frederick Kuhl, Richard Weatherly, Judith Dahmann
Prentice Hall, 1999, ISBN: 0130225118. 212 pages.

Given the increasing importance of simulations in nearly all aspects of life, the authors find that combining existing systems is much more efficient than building newer, more complex replacements. Whether the interest is in business, the military, or entertainment, or is even more general, the book shows how to use the new standard for building and integrating modular simulation components and systems. The HLA, adopted by the U.S. Department of Defense, has been years in the making and recently came ahead of its competitors to grab the attention of engineers and designers worldwide. The book and the accompanying CD-ROM set contain an overview of the rationale and development of the HLA, together with a Windows-compatible implementation of the HLA Runtime Infrastructure (including test software). The book allows the reader to understand in depth the reasons for the definition of the HLA and its development, how it came to be, how the HLA has been promoted as an architecture, and why it has succeeded. It also provides an overview of the HLA as a software architecture, its large pieces, and chief functions; an extended, integrated tutorial that demonstrates its power and applicability to real-world problems; advanced topics and exercises; and well-thought-out programming examples in text and on disk. The book is well indexed and may serve as a guide for managers, technicians, programmers, and anyone else working on building simulations.

HANDBOOK OF SIMULATION: Principles, Methodology, Advances, Applications, and Practice
edited by Jerry Banks
John Wiley & Sons, 1998, ISBN: 0471134031. Hardcover, 864 pages.

Simulation modeling is one of the most powerful techniques available for studying large and complex systems.
This book is the first to bring together the top thirty international experts on simulation from both industry and academia. All aspects of simulation are covered, as well as the latest simulation techniques. Most importantly, the book walks the reader through the various industries that use simulation and explains what is used, how it is used, and why.

This book provides a reference to important topics in the simulation of discrete-event systems. Contributors come from academia, industry, and software development. Material is arranged in sections on principles, methodology, recent advances, application areas, and the practice of simulation. Topics include object-oriented simulation, software for simulation, simulation modeling, and experimental design. For readers with a good background in calculus-based statistics, this is a good reference book. Applications explored are in fields such as transportation, healthcare, and the military. The book includes guidelines for project management, as well as a list of software vendors. It is co-published by Engineering and Management Press.

ADVANCES IN MISSILE GUIDANCE THEORY
by Joseph Z. Ben-Asher, Isaac Yaesh
AIAA, 1998, ISBN 1-56347-275-9.

This book about terminal guidance of intercepting missiles is oriented toward practicing engineers and engineering students. It contains a variety of newly developed guidance methods based on linear-quadratic optimization problems. This application-oriented book applies widely used and thoroughly developed theories such as LQ and H-infinity to missile guidance. The main theme is to systematically analyze guidance problems of increasing complexity. Numerous examples help the reader to gain greater understanding of the relative merits and shortcomings of the various methods.
Both the analytical derivations and the numerical computations of the examples are carried out with MATLAB. Companion software: the authors have developed a set of MATLAB M-files that are available on a diskette bound into the book.

CONTROL OF SPACECRAFT AND AIRCRAFT
by Arthur E. Bryson, Jr.
Princeton University Press, 1994, ISBN 0-691-08782-2.

This text provides an overview and summary of flight control, focusing on the best possible control of spacecraft and aircraft, i.e., the limits of control. The minimum output-error responses of controlled vehicles to specified initial conditions, output commands, and disturbances are determined with specified limits on control authority. These are determined using the linear-quadratic regulator (LQR) method of feedback control synthesis with full-state feedback. An emphasis on modeling is also included for the design of control systems. The book includes a set of MATLAB M-files in companion software.

MATHWORKS

Initial information on MATLAB is given in this volume to allow us to present next the Simulink package and the Flight Dynamics Toolbox, which provide for rapid simulation-based design. MATLAB is the foundation for all the MathWorks products. Here we would like to discuss the MathWorks products related to simulation, especially code generation tools and dynamic system simulation.

Code Generation and Rapid Prototyping

The MathWorks code generation tools make it easy to explore real-world system behavior from the prototyping stage to implementation. Real-Time Workshop and Stateflow Coder generate highly efficient code directly from Simulink models and Stateflow diagrams. The generated code can be used to test and validate designs in a real-time environment, and to make the necessary design changes before committing designs to production. Using simple point-and-click interactions, the user can generate code that can be implemented quickly without lengthy hand-coding and debugging.
Real-Time Workshop and Stateflow Coder automate compiling, linking, and downloading executables onto the target processor, providing fast and easy access to real-time targets. By automating the process of creating real-time executables, these tools give an efficient and reliable way to test, evaluate, and iterate your designs in a real-time environment.

Real-Time Workshop, the code generator for Simulink, generates efficient, optimized C and Ada code directly from Simulink models. Supporting discrete-time, multirate, and hybrid systems, Real-Time Workshop makes it easy to evaluate system models on a wide range of computer platforms and real-time environments.

Stateflow Coder, the standalone code generator for Stateflow, automatically generates C code from Stateflow diagrams. Code generated by Stateflow Coder can be used independently or combined with code from Real-Time Workshop.

Real-Time Windows Target allows the user to use a PC as a standalone, self-hosted target for running Simulink models interactively in real time. Real-Time Windows Target supports direct I/O, providing real-time interaction with your model, making it an easy-to-use, low-cost target environment for rapid prototyping and hardware-in-the-loop simulation.

xPC Target allows the user to add I/O blocks to Simulink block diagrams, generate code with Real-Time Workshop, and download the code to a second PC that runs the xPC Target real-time kernel. xPC Target is ideal for rapid prototyping and hardware-in-the-loop testing of control and DSP systems. It enables the user to execute models in real time on standard PC hardware.

By combining the MathWorks code generation tools with hardware and software from leading real-time systems vendors, the user can quickly and easily perform rapid prototyping, hardware-in-the-loop (HIL) simulation, and real-time simulation and analysis of designs.
Real-Time Workshop code can be configured for a variety of real-time operating systems, off-the-shelf boards, and proprietary hardware. The MathWorks products for control design enable the user to make changes to a block diagram, generate code, and evaluate results on target hardware within minutes. For turnkey rapid prototyping you can take advantage of solutions available from partnerships between The MathWorks and leading control design vendors:

- dSPACE Control Development System: a total development environment for rapid control prototyping and hardware-in-the-loop simulation;
- WinCon: allows you to run Real-Time Workshop code independently on a PC;
- World Up: creating and controlling 3-D interactive worlds for real-time visualization;
- ADI Real-Time Station: complete system solution for hardware-in-the-loop simulation and prototyping;
- Pi AutoSim: real-time simulator for testing automotive electronic control units (ECUs);
- Opal-RT: a rapid prototyping solution that supports real-time parallel/distributed execution of code generated by Real-Time Workshop running under the QNX operating system on Intel-based target hardware.

Dynamic System Simulation

Simulink is a powerful graphical simulation tool for modeling nonlinear dynamic systems and developing control strategies. With support for linear, nonlinear, continuous-time, discrete-time, multirate, conditionally executed, and hybrid systems, Simulink lets you model and simulate virtually any type of real-world dynamic system. Using the powerful simulation capabilities in Simulink, the user can create models, evaluate designs, and correct design flaws before building prototypes.

Simulink provides a graphical simulation environment for modeling dynamic systems. It allows the user to quickly build block-diagram models of dynamic systems. The Simulink block library contains over 100 blocks that graphically represent a wide variety of system dynamics.
The block library includes input signals, dynamic elements, algebraic and nonlinear functions, data display blocks, and more. Simulink blocks can be triggered, enabled, or disabled, allowing the user to include conditionally executed subsystems within models.

FLIGHT DYNAMICS TOOLBOX – FDC 1.2
report by Marc Rauw

FDC is an abbreviation of Flight Dynamics and Control. The FDC toolbox for MATLAB and Simulink makes it possible to analyze aircraft dynamics and flight control systems within one software environment on one PC or workstation. The toolbox has been set up around a general nonlinear aircraft model which has been constructed in a modular way in order to provide maximal flexibility to the user. The model can be accessed by means of the graphical user interface of Simulink. Other elements of the toolbox are analytical MATLAB routines for extracting steady-state flight conditions and determining linearized models around user-specified operating points, Simulink models of external atmospheric disturbances that affect the motions of the aircraft, radio-navigation models, models of the autopilot, and several help utilities which simplify the handling of the systems. The package can be applied to a broad range of stability- and control-related problems by applying MATLAB tools from other toolboxes to the systems from FDC 1.2. The FDC toolbox is particularly useful for the design and analysis of Automatic Flight Control Systems (AFCS). By giving the designer access to all models and tools required for AFCS design and analysis within one graphical Computer Assisted Control System Design (CACSD) environment, the AFCS development cycle can be reduced considerably.
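What "determining linearized models around user-specified operating points" amounts to can be sketched with finite differences: perturb each state and input of a nonlinear model x' = f(x, u) and estimate the Jacobians A and B. This is an illustration of the general technique, not FDC's implementation, and the toy dynamics below are invented:

```python
# Sketch of numerical linearization around an operating point, the kind
# of step toolboxes like FDC perform when extracting state-space models.
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx and B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

# Invented toy dynamics: x = [speed, pitch], u = [elevator]
def f(x, u):
    return np.array([-0.02 * x[0] + 9.8 * x[1],
                     -0.5 * x[1] + 2.0 * u[0]])

A, B = linearize(f, x0=np.array([100.0, 0.0]), u0=np.array([0.0]))
```

Because the toy dynamics are linear, the finite-difference Jacobians recover the coefficients exactly; for a genuinely nonlinear model the same call yields the local linearization at the chosen trim point.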
The current version 1.2 of the FDC toolbox is an advanced proof-of-concept package which effectively demonstrates the general ideas behind the application of CACSD tools with a graphical user interface to the AFCS design process.

MODELING AND SIMULATION TERMINOLOGY
MILITARY SIMULATION TECHNIQUES & TECHNOLOGY

Introduction to Simulation

Definitions. Defines simulation, its applications, and the benefits derived from using the technology. Compares simulation to related activities in analysis and gaming.

DOD Overview. Explains the simulation perspective and categorization of the US Department of Defense.

Training, Gaming, and Analysis. Provides a general delineation between these three categories of simulation.

System Architectures

Components. Describes the fundamental components that are found in most military simulations.

Designs. Describes the basic differences between functional and object-oriented designs for a simulation system.

Infrastructures. Emphasizes the importance of providing an infrastructure to support all simulation models, tools, and functionality.

Frameworks. Describes the newest implementation of an infrastructure in the form of an object-oriented framework from which simulation capability is inherited.

Interoperability

Dedicated. Interoperability initially meant constructing a dedicated method for joining two simulations for a specific purpose.

DIS. The virtual simulation community developed this method to allow vehicle simulators to interact in a small, consistent battlefield.

ALSP. The constructive, staff training community developed this method to allow specific simulation systems to interact with each other in a single joint training exercise.

HLA. This program was developed to replace and, to a degree, unify the virtual and constructive efforts at interoperability.

JSIMS.
Though not labeled as an interoperability effort, this program is pressing for a higher degree of interoperability than has been achieved through any of the previous programs.

Event Management

Queuing. The primary method for executing simulations has been various forms of queues for ordering and releasing combat events.

Trees. Basic queues are being supplanted by techniques such as Red-Black and Splay trees, which allow the simulation to store, process, and review events more efficiently than their predecessors.

Event Ownership. Events can be owned and processed in different ways. Today's preference for object-oriented representations leads to vehicle and unit ownership of events, rather than the previous techniques of managing them from a central executive.

Time Management

Universal. Single-processor simulations made use of a single clocking mechanism to control all events in a simulation. This was extended to the idea of a "master clock" during initial distributed simulations, but is being replaced with more advanced techniques in current distributed simulation.

Synchronization. The "master clock" too often led to poor performance and required a great deal of cross-simulation data exchange. Researchers in the Parallel Distributed Simulation community provided several techniques that are being used in today's training environment.

Conservative & Optimistic. The most notable time management techniques are conservative synchronization, developed by Chandy, Misra, and Bryant, and optimistic synchronization (or Time Warp), developed by David Jefferson.

Real-time. In addition to being synchronized across a distributed computing environment, many of today's simulators must also perform as real-time systems. These operate under the additional duress of staying synchronized with the human or system clock perception of time.

Principles of Modeling

Science & Art. Simulation is currently a combination of scientific method and artistic expression.
Learning to do this activity requires both formal education and watching experienced practitioners approach a problem.

Process. When a team of people undertakes the development of a new simulation system, they must follow a defined process. This is often re-invented for each project, but can better be derived from the experience of others on previous projects.

Fundamentals. Some basic principles have been learned and relearned by members of the simulation community. These have universal application within the field and allow new developers to benefit from the mistakes and experiences of their predecessors.

Formalism. There has been some concentrated effort to define a formalism for simulation such that models and systems are provably correct. Such formalisms also allow mathematical exploration of new ideas in simulation.

Physical Modeling

Object Interaction. Military object modeling can be divided into two pieces, the physical and the behavioral. Object interactions, which are often viewed as 'physics based', characterize the physical models.

Movement. Military objects are often very mobile, and a great deal of effort can be given to the correct movement of ground, air, sea, and space vehicles across different forms of terrain or through various forms of ether.

Sensor Detection. Military objects are also very eager to interact with each other in both peaceful and violent ways. But before they can do this they must be able to perceive each other through the use of human and mechanical sensors.

Engagement. Encounters with objects of a different affiliation often require the application of combat engagement algorithms. There is a rich set of these available to the modeler, and new ones are continually being created.

Attrition. Object and unit attrition may be synonymous with engagement in the real world, but when implemented in a computer environment they must be separated to allow fair combat exchanges.
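Engagement and attrition algorithms of the kind described above are often built on classical attrition laws. As an illustration, Lanchester's square law, a standard textbook model rather than one named in this outline, can be sketched as a simple rate model where each side's losses are proportional to the opposing side's strength; the effectiveness coefficients below are invented:

```python
# Sketch of a classic attrition model (Lanchester's square law), the
# kind of calculation an engagement/attrition module might perform.

def lanchester(red, blue, r_eff, b_eff, dt=0.01):
    """Step both force levels forward until one side is destroyed."""
    while red > 0 and blue > 0:
        d_red = -b_eff * blue * dt    # red losses driven by blue strength
        d_blue = -r_eff * red * dt    # blue losses driven by red strength
        red, blue = red + d_red, blue + d_blue
    return max(red, 0.0), max(blue, 0.0)

red, blue = lanchester(1000.0, 800.0, r_eff=0.1, b_eff=0.1)
```

Under the square law with equal effectiveness, the larger force wins with roughly sqrt(1000^2 - 800^2) = 600 survivors, which the discrete stepping above approximates closely.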
Distributed simulation systems replicate real-world activities more closely than did their older functional/sequential ancestors, but the distinction between engagement and attrition is still important.

Communication. The modern battlefield is characterized as much by communication and information exchange as it is by movement and engagement. This dimension of the battlefield has been largely ignored in previous simulations, but is being addressed in the new systems under development today.

More. Activities on the battlefield are extremely rich and varied. The models described in this section represent some of the most fundamental and important, but they are only a small fraction of the detail that can be included in a model.

Behavioral Modeling

Perception. Military simulations have historically included very crude representations of human and group decision making. One of the first real needs for representing the human in the model was to create a unique perception of the battlefield for each group, unit, or individual.

Reaction. Battlefield objects or units need to be able to react realistically to various combat environments. Reactive models allow the simulation to handle many situations without the explicit intervention of a human operator.

Planning. Today we look for intelligent behavior from simulated objects. One form of intelligence is found in allowing models to plan the details of a general operational combat order, or to formulate a method for extracting themselves from a difficult situation.

Learning. Early reactive and planning models did not include the capability to learn from experience. Algorithms can be built which allow units to become more effective as they become more experienced. They also learn the best methods for operating on a specific battlefield or under specific conditions.

Artificial Intelligence. Behavioral modeling can benefit from the research and experience of the AI community.
Techniques of value include: Intelligent Agents, Finite State Machines, Petri Nets, Expert and Knowledge-based Systems, Case Based Reasoning, Genetic Algorithms, Neural Networks, Constraint Satisfaction, Fuzzy Logic, and Adaptive Behavior. An introduction is given to each of these along with potential applications in the military environment.

Environmental Modeling

Terrain. Military objects are heavily dependent upon the environment in which they operate. The representation of terrain has been of primary concern because of its importance and the difficulty of managing the amount of data required. Triangulated Irregular Networks (TINs) are one of the newer techniques for managing this problem.

Atmosphere. The atmosphere plays an important role in modeling air, space, and electronic warfare. The effects of cloud cover, precipitation, daylight, ambient noise, electronic jamming, temperature, and wind can all have significant effects on battlefield activities.

Sea. The surface of the ocean is nearly as important to naval operations as terrain is to army operations. Sub-surface and ocean-floor representations are also essential for submarine warfare and the employment of SONAR for vehicle detection and engagement.

Standards. Many representations of all of these environments have been developed. Unfortunately, not all of these have been compatible, and significant effort is being given to a common standard for supporting all simulations. The Synthetic Environment Data Representation and Interchange Specification (SEDRIS) is the most prominent of these standardization efforts.

Multi-Resolution Modeling

Aggregation. Military commanders have always dealt with the battlefield in an aggregate form. This has carried forward into simulations which operate at this same level, omitting many of the details of specific battlefield objects and events.

Disaggregation.
Recent efforts to join constructive and virtual simulations have required the implementation of techniques for crossing the boundary between these two levels of representation. Disaggregation attempts to generate an entity-level representation from the aggregate level by adding information. Conversely, aggregation attempts to create the constructive from the virtual by removing information.

Interoperability. It is commonly accepted that interoperability in these situations is best achieved through disaggregation to the lowest level of representation of the models involved. In any form, the patchwork battlefield seldom supports the same level of interoperability across model levels as is found within models at the same level of resolution.

Inevitability. Models are abstractions of the real world generated to address a specific problem. Since all problems are not defined at the same level of physical representation, the models built to address them will be at different levels. The modeling and simulation problem domain is too rich to ever expect all models to operate at the same level. Multi-resolution modeling and techniques to provide interoperability among the levels are inevitable.

Verification, Validation, and Accreditation

Verification. The conceptual model of the real world is converted into a software program, and this conversion has the potential to introduce errors or to represent the conceptual model inaccurately. Verification ensures that the software program accurately reflects the conceptual model.

Validation. Simulation systems and the models within them are conceptual representations of the real world. By their very nature these models are partially accurate and partially inaccurate. Therefore, it is essential that we be able to validate that the model constructed accurately represents the important parts of the real world we are trying to study or emulate.

Accreditation.
Since all models only partially represent the real world, they all have limited application for training and analysis. Accreditation defines the domains and
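The queue-based event management described earlier in this outline (under Event Management) can be illustrated with a priority queue that orders combat events by timestamp. This is a minimal sketch with invented event names, not code from any of the systems the outline describes:

```python
# Sketch of queue-based event management: a priority queue releases
# simulation events in timestamp order. Event names are invented.
import heapq

class EventQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0              # tie-breaker: equal times pop FIFO

    def schedule(self, time, name):
        heapq.heappush(self._heap, (time, self._seq, name))
        self._seq += 1

    def run(self):
        """Pop all events in time order, returning the execution log."""
        log = []
        while self._heap:
            time, _, name = heapq.heappop(self._heap)
            log.append((time, name))
        return log

q = EventQueue()
q.schedule(5.0, "engage")
q.schedule(1.0, "detect")
q.schedule(3.0, "move")
log = q.run()    # events come out ordered: detect, move, engage
```

The tree-based structures the outline mentions (Red-Black, Splay) serve the same role as this heap, trading implementation complexity for faster insertion, inspection, and removal patterns.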
A Brief Overview of the FDA New Drug Approval Process (Chinese/English)

The US new drug approval process is arguably the strictest and most standardized in the world: a company typically needs to spend 500 million US dollars and 12 to 15 years to take a new drug from the laboratory to the market. Of roughly 5,000 pre-clinical compounds, only about 5 enter clinical trials, and of those 5 only one is approved for the clinical treatment of patients and becomes an actual drug. Developing a newly discovered compound into a drug that treats disease proceeds through the following stages:

1. Pre-clinical testing. A newly discovered compound undergoes laboratory and animal testing to demonstrate that it has biological activity against the target disease and to evaluate its safety.

2. Investigational New Drug application. Once a compound has passed pre-clinical testing, an investigational new drug application must be submitted to the FDA before the compound can be tested in humans. If the FDA does not reject the application within 30 days of submission, the application is considered effective and human trials may begin. The application must provide the materials from the previous experiments; a description of where, by whom, and how the clinical trials will be conducted; the structure of the new compound; the method of administration; all toxicity findings from the animal studies; and the manufacturing details of the compound. All clinical protocols must be reviewed and approved by an Institutional Review Board (IRB), and the progress and results of the clinical trials must be reported to the FDA and the IRB once a year.

3. Phase I clinical trials. This phase generally enrolls 20-100 normal, healthy volunteers. The main purpose of the trials is to provide safety data on the drug, including its safe dosage range, and also to obtain data on its absorption, distribution, metabolism, and excretion, as well as the duration of its action.

4. Phase II clinical trials. This phase usually enrolls 100-500 patients with the relevant condition. Its main purpose is to obtain data on the drug's therapeutic efficacy.

5. Phase III clinical trials. This phase usually requires 1,000-5,000 clinical and hospitalized patients, mostly across multiple medical centers. Under strict physician monitoring, it provides further efficacy data and identifies side effects, as well as the drug's interactions with other medications.
English essay: the opportunities and challenges AI brings to humanity

The Opportunities and Challenges of AI for Humanity

Artificial Intelligence (AI) has been a topic of fascination and debate for decades, as it holds the potential to revolutionize various aspects of our lives. As we delve deeper into the realm of AI, we find ourselves confronted with a myriad of opportunities and challenges that will shape the future of humanity.

One of the most promising opportunities presented by AI is its ability to enhance and augment human capabilities. AI-powered systems can perform tasks with unparalleled speed, accuracy, and efficiency, freeing up human time and resources for more complex and creative endeavors. In the field of healthcare, AI-driven diagnostics and personalized treatment plans have the potential to revolutionize the way we approach medical care, leading to earlier detection of diseases and more effective interventions. Similarly, in the realm of scientific research, AI can sift through vast amounts of data, identify patterns, and generate novel hypotheses that could accelerate the pace of discovery and innovation.

Furthermore, AI-powered automation has the potential to streamline various industries, reducing the burden of tedious and repetitive tasks. This could lead to increased productivity, cost savings, and the reallocation of human labor towards more fulfilling and meaningful work. Imagine a future where AI-powered robots handle the majority of manual labor, freeing up individuals to pursue their passions, engage in lifelong learning, and contribute to the betterment of society in ways that are uniquely human.

Another significant opportunity presented by AI is its ability to enhance our decision-making processes. AI-powered systems can analyze vast amounts of data, identify trends and patterns, and provide insights that can inform our decision-making.
This could lead to more informed and data-driven decisions in areas such as public policy, finance, and resource allocation, ultimately leading to more efficient and equitable outcomes for society.

However, the rise of AI also presents a number of challenges that must be addressed. One of the primary concerns is the potential displacement of human workers due to automation. As AI-powered systems become more capable and efficient, there is a risk that certain job roles may become obsolete, leading to widespread unemployment and economic disruption. This challenge will require a comprehensive approach to workforce retraining, education, and the creation of new job opportunities that leverage the unique capabilities of both humans and AI.

Another significant challenge is the ethical and societal implications of AI. As AI systems become more advanced and integrated into our daily lives, questions arise around issues such as privacy, data security, algorithmic bias, and the potential for AI to be used for malicious purposes. Addressing these challenges will require the development of robust ethical frameworks, transparent and accountable AI governance, and the active involvement of diverse stakeholders to ensure that the benefits of AI are equitably distributed.

Additionally, the development and deployment of AI systems must be accompanied by a deep understanding of the potential risks and unintended consequences. AI systems, if not designed and implemented with great care, could lead to catastrophic outcomes, such as the amplification of existing biases, the erosion of human agency, and the potential for AI systems to spiral out of control. Mitigating these risks will require ongoing research, rigorous testing, and the development of safety protocols to ensure that AI systems remain aligned with human values and priorities.

In conclusion, the emergence of AI presents both tremendous opportunities and significant challenges for humanity.
As we navigate this rapidly evolving landscape, it is crucial that we approach the development and deployment of AI in a balanced and thoughtful way. By harnessing the power of AI to enhance human capabilities, while addressing the ethical and societal implications, we can unlock a future where AI and humans work in harmony to create a better world for all.
RDieHarder: An R interface to the Die Harder suite of Random Number Generator Tests

Dirk Eddelbuettel, Debian
Robert G. Brown, Physics, Duke University

Initial version as of May 2007; rebuilt on January 12, 2023 using RDieHarder 0.2.5

1 Introduction

Random number generators are critically important for computational statistics. Simulation methods are becoming ever more common for estimation; Markov chain Monte Carlo is but one approach. Also, simulation methods such as the bootstrap have long been used in inference and are becoming a standard part of a rigorous analysis. As random number generators are at the heart of the simulation-based methods used throughout statistical computing, 'good' random numbers are therefore a crucial aspect of a statistical, or quantitative, computing environment. However, there are very few tools that allow us to separate 'good' from 'bad' random number generators.

Based on work that started with the random package (Eddelbuettel, 2007), which provides functions that access a non-deterministic random number generator (NDRNG) based on a physical source of randomness, we wanted to compare this particular NDRNG to the RNGs implemented in GNU R (R Development Core Team, 2007) itself, as well as to several RNGs from the GNU GSL (Galassi et al., 2007), a general-purpose scientific computing library. Such a comparison is possible with the Die Harder test suite by Brown (2007), which extends the DieHard test suite by Marsaglia. From this work, we became interested in making Die Harder directly accessible from GNU R. The RDieHarder package presented here allows such access.

This paper is organized as follows. Section 2 describes the history and design of the Die Harder suite.
Section 3 describes the RDieHarder package facilities, and Section 4 shows some examples. Section 5 discusses current limitations and possible extensions before Section 6 concludes.

2 Die Harder

Die Harder is described at length in Brown (2006). Due to space limitations, this section cannot provide as much detail and will cover only a few key aspects of the Die Harder suite.

2.1 DieHard

Die Harder reimplements and extends George Marsaglia's Diehard Battery of Tests of Randomness (Marsaglia, 1996). Due to both its robust performance over a wide variety of RNGs, as well as an ability to discern numerous RNGs as weak, DieHard has become something close to a 'gold standard' for assessing RNGs.

However, there are a number of drawbacks with the existing DieHard test battery code and implementation. First, Marsaglia undertook a large amount of the original work a number of years ago when computing resources were, compared to today's standards, moderately limited. Second, neither the Fortran nor the (translated) C sources are particularly well documented, or commented. Third, the library design is not modular in a way that encourages good software engineering. Fourth, and last but not least, no licensing statement is provided with the sources or on the support website. This led one of the authors of this paper (rgb) to a multi-year effort of a) rewriting the existing tests from DieHard in standard C in a modular and extensible format, along with extensive comments, and b) relicensing the result under the common and understood GNU GPL license (that is also used for GSL, R, the Linux kernel, and numerous other projects), allowing for wider use. Moreover, new tests from NIST were added (see next subsection) and some genuinely new tests were developed (see below).

2.2 STS

The National Institute of Standards and Technology (NIST) has developed its own test suite, the 'Statistical Test Suite' (STS). These tests are focussed on bit-level tests of randomness and bit sequences. Currently, three tests based on the STS suite are provided by Die Harder
: STS Monobit, STS Runs, and STS Block.

2.3 RGB extensions

Three new tests have been developed by rgb. A fourth 'test' is a timing function: for many contexts, not only do the mathematical properties of a generator matter, but so does its computational cost, measured in the computing time required for a number of draws.

2.4 Basic methodology

Let us suppose a random number generator can provide a sequence of N uniform draws from the range [0, 1). As the number of draws increases, the mean of the sum of all these values should, under the null hypothesis of a proper generator, converge closer and closer to µ = N/2. Each of these N draws forms one experiment. If N is sufficiently large, then the sums obtained in these experiments should be normally distributed with a standard deviation of σ = sqrt(N/12).[1] Given this asymptotic result, we can, for any given experiment i in 1,...,M, transform the given sum x_i of N draws into a probability value p_i using the inverse normal distribution.[2]

The key insight is that, under the null hypothesis of a perfect generator, these p_i values should be uniformly distributed. Using our set of M probability values, we can compute one 'meta-test' of whether we can reject the null of a perfect generator by rejecting that our M probability values are uniformly distributed. One suitable test is, for example, the non-parametric Kolmogorov-Smirnov (KS)[3] statistic. Die Harder uses the Kuiper[4] variant of the KS test, which uses the combination D+ + D- of the maximum and minimum distance to the alternative distribution, instead of using just one of these as in the case of the KS test. This renders the test more sensitive across the entire test region.

2.5 GSL framework

Die Harder is primarily focussed on tests for RNGs. Re-implementing RNGs in order to supply input to the tests is therefore not an objective of the library. The GNU Scientific Library (GSL), on the other hand, provides over 1000 mathematical functions, including a large number of random number generators. Using the GSL 1.9.0 release, the following generators are defined[5]:
[1] This is known as the Irwin-Hall distribution; see /wiki/Irwin-Hall_distribution.
[2] Running print(quantile(pnorm(replicate(M, (sum(runif(N)) - N/2)/sqrt(N/12))), seq(0, 1, by = 0.1))*100, digits = 2) performs a Monte Carlo simulation of M experiments using N uniform deviates to illustrate this. Suitable values are e.g. N <- 1000; M <- 500.
[3] Cf. the Wikipedia entry /wiki/Kolmogorov-Smirnov_test.
[4] Cf. the Wikipedia entry /wiki/Kuiper%27s_test.
[5] This is based on the trailing term in each identifier defined in /usr/include/gsl/gsl_rng.h.

borosh13, coveyou, cmrg, fishman18, fishman20, fishman2x, gfsr4, knuthran, knuthran2, knuthran2002, lecuyer21, minstd, mrg, mt19937, mt19937_1999, mt19937_1998, r250, ran0, ran1, ran2, ran3, rand, rand48, random128_bsd, random128_glibc2, random128_libc5, random256_bsd, random256_glibc2, random256_libc5, random32_bsd, random32_glibc2, random32_libc5, random64_bsd, random64_glibc2, random64_libc5, random8_bsd, random8_glibc2, random8_libc5, random_bsd, random_glibc2, random_libc5, randu, ranf, ranlux, ranlux389, ranlxd1, ranlxd2, ranlxs0, ranlxs1, ranlxs2, ranmar, slatec, taus, taus2, taus113, transputer, tt800, uni, uni32, vax, waterman14, zuf

The GNU GSL, a well-known and readily available library of high quality, therefore provides a natural fit for Die Harder. All of these generators are available in Die Harder via a standardized interface in which a generator is selected, parameterized as needed, and then called via the external GSL library against which Die Harder is linked. Beyond these GSL generators, Die Harder also provides two generators based on the 'devices' /dev/random and /dev/urandom that are commonly available on Unix. They provide non-deterministic random numbers based on entropy generated by the operating system. Die Harder also offers a text and a raw file input generator. Lastly, a new algorithmic generator named 'ca' that is based on cellular automata has recently been added as well.

2.6 R random number generators

To assess the quality of the non-deterministic RNG provided in the GNU R add-on package random, benchmark comparisons with the generators provided by the R
language and environment have been a natural choice. To this end, one of the authors (edd) ported the R generator code (taken from R 2.4.0) to the GNU GSL random number generator framework used by Die Harder. This allows a direct comparison of the random generator with those it complements in R. It then follows somewhat naturally that the other generators available in Die Harder, as well as the Die Harder tests, should also be available in R. This provided the motivation for the R package presented here.

2.7 Source code and building Die Harder

Recent versions of Die Harder use the GNU autotools. On Unix systems, the steps required to build and install Die Harder should only be the familiar steps configure; make; sudo make install. For Debian, initial packages have been provided and are currently available at http://dirk.eddelbuettel.com/code/tmp. In due course, these packages should be uploaded to Debian, and thus become part of the next Debian (and Ubuntu) releases. Die Harder is also expected to be part of future Fedora Core (and other RPM-based distribution) releases. On Windows computers and other systems, manual builds should also be possible given that the source code is written in standard C.

3 RDieHarder

The RDieHarder package provides one key function: dieharder. It can be called with several arguments.
The first one is the name of the random number generator, and the second one is the name of the test to be applied. For both options, the textual arguments are matched against internal vectors to obtain a numeric argument index; alternatively, the index could be supplied directly. The remaining arguments (currently) permit setting the number of samples (i.e. the number of experiments run, and thus the sample size for the final Kolmogorov-Smirnov test), the random number generator seed, and whether or not verbose operation is desired.

The returned object is of class dieharder, inheriting from the standard class htest common to all hypothesis tests. The standard print method for htest is used; however, not all possible slots are being filled (as there is, for example, no choice of alternative hypothesis). A custom summary method is provided that also computes the Kolmogorov-Smirnov and Wilcoxon tests in R and displays a simple stem-and-leaf plot. Lastly, a custom plot method shows both a histogram and kernel density estimate, as well as the empirical cumulative distribution function.

4 Examples

The possibly simplest usage of RDieHarder is provided in the examples section of the help page. The code dh <- dieharder; summary(dh); plot(dh) simply calls the dieharder function using the default arguments, invokes a summary, and then calls plot on the object.[6]

[6] We omit the output here due to space constraints.

A more interesting example follows below. We select the 2dsphere test for the generators ran0 and mt19937 with a given seed. The results based on both the Kuiper KS test and the KS test suggest that we would reject ran0 but not mt19937, which is in accordance with common knowledge about the latter (the Mersenne Twister) being a decent RNG. It is worth noting that the Wilcoxon test centered on µ = 0.5 would not reject the null at conventional levels for ran0.

[Figure 1: Comparison of ran0 and mt19937 under test 2dsphere (Diehard Minimum Distance, 2d Circle). Each panel shows a histogram with density estimate and the ECDF for a sample of size 100 created with seed = 2. Test p-values: ran0 0.0099 (Kuiper-KS), 0.0056 (KS), 0.3506 (Wilcoxon); mt19937 0.2449 (Kuiper-KS), 0.199 (KS), 0.1696 (Wilcoxon).]

A programmatic example follows. We define a short character vector containing the names of the six R RNGs, apply the Die Harder function to each of these, and then visualize the resulting p-values in a simple qqplot. All six generators provide p-value plots that are close to the ideal theoretical outcome (shown in gray). Unsurprisingly, p-values for the Kuiper KS test also show no support for rejecting these generators.

> rngs <- c("R_wichmann_hill", "R_marsaglia_multic",
+           "R_super_duper", "R_mersenne_twister",
+           "R_knuth_taocp", "R_knuth_taocp2")
> if (!exists("rl")) rl <- lapply(rngs, function(rng) dieharder(rng, "diehard_runs", seed=12345))
> oldpar <- par(mfrow=c(2,3), mar=c(2,3,3,1))
> invisible(lapply(rl, function(res) {
+   qqplot(res$data, seq(0, 1, length.out=length(res$data)),
+          main=paste(res$generator, ":", round(res$p.value, digits=3)),
+          ylab="", type="S")
+   abline(0, 1, col='gray')
+ }))
> par(oldpar)  # reset graph defaults

5 Current Limitations and Future Research

The implementation of RDieHarder presented here leaves a number of avenues for future improvement and research. Some of these pertain to Die Harder itself: adding new, more sophisticated, more systematic tests, including those from the STS suite and tests that probe bit-level randomness in unique new ways. Others pertain more to the integration of Die Harder with R, which is the topic of this work.
: 0.8460.00.20.40.60.8 1.00.00.20.40.60.81.0R_super_duper : 0.8680.00.20.40.60.8 1.00.00.20.40.60.81.0R_mersenne_twister : 0.8690.00.20.40.60.81.00.00.20.40.60.81.0R_knuth_taocp : 0.7150.00.20.40.60.8 1.00.00.20.40.60.81.0R_knuth_taocp2 : 0.715Figure 2:Comparing six GNU R generators under the runs test5Not all of Die Harder's features are yet supported in this initial port.In the near future we expect to add code to deal with tests that support extra parameters,or that return more than one p-value per instance of a test.Ultimately,RDieHarder should support the full set of options of the the command-line version of Die Harder.There is no direct interface from the R generators to the RDieHarder module for evaluation;rather, the'ported'R generators are called from the libdieharder library.This could introduce coding/porting errors,and also prevents the direct use of user-added generators that R supports.It would be worthwhile to overcome this by directly letting RDieHarder call back into R to generate draws.On the other hand,the current setup corresponds more closely to the command-line version of Die Harder.Next,the R generators in Die Harder may need to be updated to the2.5.0code.The GSL RNGs provided by libdieharder may as well be exported to R via RDieHarder given that the GSL library is already linked in.Indeed,it would be worthwhile to integrate the two projects and both avoid needless code duplication and ensure even more eyes checking both the quality and accuracy of the code in both.It could be useful to also build RDieHarder with an`embedded'libdieharder rather than relying on an externally installed libdieharder.This may make it easier to build RDieHarder for systems without libdieharder(and on Windows).Likewise,it is possible to reorganize the Die Harder front-end code into a common library to avoid duplication of code with RDieHarder.Lastly,on the statistical side,an empirical analysis of size/power between KS,Wilcoxon and other alternatives for 
generating a final p-value from the vector of p-values returned from DieHarder tests suggests itself. Similarly, empirical comparisons between the resolving power of the various tests (some of which may not actually be terribly useful, in the sense that they yield little new information about the failure modes of any given RNG) could be studied. Lastly, there is always room for new generators, new tests, and new visualizations.

One thing that one should remember while experimenting with DieHarder is that there really is no such thing as a random number generator. It is therefore likely that all RNGs will fail any given (valid) test if one cranks the resolution up high enough by accumulating enough samples per p-value, and enough p-values per run. It is also true that a number of Marsaglia's tests have target distributions that were computed empirically by simulation (with the best RNGs and computers available at the time). Here one has to similarly remember that one can now do in a few hours of work what it would have taken him months if not years of simulation to equal back when the target statistics were evaluated. It is by no means unlikely that a problem that DieHarder eventually resolves is not the quality of the RNG but rather the accuracy of the target statistics. These are some of the things that are a matter for future research to decide. A major motivation for writing DieHarder, making it open source, and integrating it with R, is to facilitate precisely this sort of research in an easy to use, consistent testing framework. We welcome the critical eyes and constructive suggestions of the statistical community and invite their participation in examining the code and algorithms used in DieHarder.

6 Conclusion

The RDieHarder package presented here introduces several new features. First, it makes the DieHarder suite (Brown, 2007) available for interactive use from the GNU R environment. Second, it also exports DieHarder results directly to R for further analysis and visualization. Third, it adds
additional RNGs from GNU R to those from GNU GSL that were already testable in DieHarder. Fourth, it provides a re-distribution of the DieHarder 'test engine' via GNU R.

References

Robert G. Brown. Die Harder: A Gnu public licensed random number tester. Draft paper included as file manual/dieharder.tex in the dieharder distribution; last version dated 20 Feb 2006. 2006.

Robert G. Brown. dieharder: A Random Number Test Suite, 2007. URL [...]/~rgb/General/dieharder.php. C program archive dieharder, version 2.24.3.

Dirk Eddelbuettel. random: True random numbers using [...], 2007. URL http://cran.r-project.org/src/contrib/Descriptions/random.html. R package random, current version 0.1.2.

Mark Galassi, Brian Gough, Gerald Jungman, James Theiler, Jim Davies, Michael Booth, and Fabrice Rossi. The GNU Scientific Library Reference Manual, 2007. URL [...]/software/gsl. ISBN 0954161734; C program archive gsl, current version 1.9.0.

George Marsaglia. The Marsaglia random number CDROM including the diehard battery of tests of randomness, 1996. Also at [...]/pub/diehard.

R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2007. URL [...]. ISBN 3-900051-07-0.
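The aggregation step discussed above, where a vector of per-sample p-values is itself tested for uniformity with a final Kolmogorov-Smirnov test, can be sketched as follows. This is a hypothetical stdlib-Python illustration of the statistic only, not RDieHarder's actual implementation (which performs these tests in R); the function name is made up for the sketch.

```python
# Hypothetical sketch: under the null hypothesis of a good generator, a
# DieHarder test run yields p-values that should be Uniform(0, 1); the
# one-sample Kolmogorov-Smirnov statistic measures the largest deviation of
# their empirical CDF from the uniform CDF. Illustration only.

def ks_statistic_uniform(pvalues):
    """One-sample KS statistic D_n against the Uniform(0, 1) CDF."""
    xs = sorted(pvalues)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        # The empirical CDF jumps from (i-1)/n to i/n at x; the uniform CDF is x.
        d = max(d, i / n - x, x - (i - 1) / n)
    return d

# A perfectly even grid of n p-values deviates from uniformity by exactly 1/(2n):
print(ks_statistic_uniform([0.1, 0.3, 0.5, 0.7, 0.9]))  # -> 0.1
```

In practice one would convert D_n to a p-value via the Kolmogorov distribution (as R's ks.test does); the statistic alone already shows why a pile-up of p-values near 0, as for ran0 in Figure 1, drives a rejection.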
On the Shockley-Read-Hall Model: Generation-Recombination in Semiconductors

Thierry Goudon¹, Vera Miljanović², Christian Schmeiser³

January 17, 2006

Abstract

The Shockley-Read-Hall model for generation-recombination of electron-hole pairs in semiconductors, based on a quasistationary approximation for electrons in a trapped state, is generalized to distributed trapped states in the forbidden band and to kinetic transport models for electrons and holes. The quasistationary limit is rigorously justified both for the drift-diffusion and for the kinetic model.

Keywords: semiconductor, generation, recombination, drift-diffusion, kinetic model

AMS subject classification:

Acknowledgment: This work has been supported by the European IHP network "HYKE - Hyperbolic and Kinetic Equations: Asymptotics, Numerics, Analysis", contract no. HPRN-CT-2002-00282, and by the Austrian Science Fund, project no. W008 (Wissenschaftskolleg "Differential Equations"). Part of it has been carried out while the second and third authors enjoyed the hospitality of the Université des Sciences et Technologies Lille 1.

¹Team SIMPAF - INRIA Futurs & Labo. Paul Painlevé UMR 8524, CNRS - Université des Sciences et Technologies Lille 1, Cité Scientifique, F-59655 Villeneuve d'Ascq cedex, France.
²Wolfgang Pauli Institut, Universität Wien, Nordbergstraße 15C, 1090 Wien, Austria.
³Fakultät für Mathematik, Universität Wien, Nordbergstraße 15C, 1090 Wien, Austria & RICAM Linz, Österreichische Akademie der Wissenschaften, Altenbergstr. 56, 4040 Linz, Austria.

1 Introduction

The Shockley-Read-Hall (SRH) model was introduced in 1952 [13], [9] to describe the statistics of recombination and generation of holes and electrons in semiconductors occurring through the mechanism of trapping. The transfer of electrons from the valence band to the conduction band is referred to as the generation of electron-hole pairs (or pair-generation process), since not only a free electron is created in the conduction band, but also a hole in the valence band which can contribute to the charge
current. The inverse process is termed recombination of electron-hole pairs. The band gap between the upper edge of the valence band and the lower edge of the conduction band is very large in semiconductors, which means that a large amount of energy is needed for a direct band-to-band generation event. The presence of trap levels within the forbidden band caused by crystal impurities facilitates this process, since the jump can be split into two parts, each of them 'cheaper' in terms of energy. The basic mechanisms are illustrated in Figure 1: (a) hole emission (an electron jumps from the valence band to the trapped level), (b) hole capture (an electron moves from an occupied trap to the valence band, a hole disappears), (c) electron emission (an electron jumps from the trapped level to the conduction band), (d) electron capture (an electron moves from the conduction band to an unoccupied trap).

Figure 1: The four basic processes of electron-hole recombination. (The sketch plots the energy $E$ of an electron vertically, showing the valence band edge $E_v$, the trapped level, and the conduction band edge $E_c$, with the four transitions (a)-(d) listed above indicated by arrows.)

Models for this process involve equations for the densities of electrons in the conduction band, holes in the valence band, and trapped electrons. Basic for the SRH model are the drift-diffusion assumption for the transport of electrons and holes, the assumption of one trap level in the forbidden band, and the assumption that the dynamics of the trapped electrons is quasistationary, which can be motivated by the smallness of the density of trapped states compared to typical carrier densities. This last assumption leads to the elimination of the density of trapped electrons from the system and to a nonlinear effective recombination-generation rate, reminiscent of Michaelis-Menten kinetics in chemistry. This model is an important ingredient of simulation models for semiconductor devices (see, e.g., [10], [12]).

In this work, two generalizations of the classical SRH model are considered: Instead of a single trapped state, a distribution of trapped states across
the forbidden band is allowed and, in a second step, a semiclassical kinetic model including the fermion nature of the charge carriers is introduced. Although direct band-to-band recombination-generation (see, e.g., [11]) and impact ionization (e.g., [2], [3]) have been modelled on the kinetic level before, this is (to the knowledge of the authors) the first attempt to derive a 'kinetic SRH model'. (We mention also the modelling discussions and numerical simulations in ??.)

For both the drift-diffusion and the kinetic models with self-consistent electric fields, existence results and rigorous results concerning the quasistationary limit are proven. For the drift-diffusion problem, the essential estimate is derived similarly to [6], where the quasi-neutral limit has been carried out. For the kinetic model, Degond's approach [4] for the existence of solutions of the Vlasov-Poisson problem is extended. Actually, the existence theory already provides the uniform estimates necessary for passing to the quasistationary limit.

In the following section, the drift-diffusion based model is formulated and nondimensionalized, and the SRH model is formally derived. Section 3 contains the rigorous justification of the passage to the quasistationary limit. Section 4 corresponds to Section 2, dealing with the kinetic model, and in Section 5 existence of global solutions for the kinetic model is proven and the quasistationary limit is justified.

2 The drift-diffusion Shockley-Read-Hall model

We consider a semiconductor crystal with a forbidden band represented by the energy interval $(E_v, E_c)$ with the valence band edge $E_v$ and the conduction band edge $E_c$. The constant (in space) number density of trap states $N_{tr}$ is obtained by summing up contributions across the forbidden band:
$$N_{tr} = \int_{E_v}^{E_c} M_{tr}(E)\,dE.$$
Here $M_{tr}(E)$ is the energy dependent density of available trapped states. The position density of occupied traps is given by
$$n_{tr}(f_{tr})(x,t) = \int_{E_v}^{E_c} M_{tr}(E)\,f_{tr}(x,E,t)\,dE,$$
where $f_{tr}(x,E,t)$ is the fraction of occupied trapped states
at position $x \in \Omega$, energy $E \in (E_v, E_c)$, and time $t \geq 0$. Note that $0 \leq f_{tr} \leq 1$ should hold from a physical point of view.

The evolution of $f_{tr}$ is coupled to those of the density of electrons in the conduction band, denoted by $n(x,t) \geq 0$, and the density of holes in the valence band, denoted by $p(x,t) \geq 0$. Electrons and holes are oppositely charged. The coupling is expressed through the following quantities:
$$S_n = \frac{1}{\tau_n N_{tr}}\bigl[n_0 f_{tr} - n(1-f_{tr})\bigr], \qquad S_p = \frac{1}{\tau_p N_{tr}}\bigl[p_0(1-f_{tr}) - p f_{tr}\bigr], \qquad (1)$$
$$R_n = \int_{E_v}^{E_c} S_n M_{tr}\,dE, \qquad R_p = \int_{E_v}^{E_c} S_p M_{tr}\,dE. \qquad (2)$$
Indeed, the governing equations are given by
$$\partial_t f_{tr} = S_p - S_n = \frac{p_0}{\tau_p N_{tr}} + \frac{n}{\tau_n N_{tr}} - f_{tr}\left(\frac{p_0+p}{\tau_p N_{tr}} + \frac{n_0+n}{\tau_n N_{tr}}\right), \qquad (3)$$
$$\partial_t n = \nabla\cdot J_n + R_n, \qquad J_n = \mu_n(U_T \nabla n - n\nabla V), \qquad (4)$$
$$\partial_t p = -\nabla\cdot J_p + R_p, \qquad J_p = -\mu_p(U_T \nabla p + p\nabla V), \qquad (5)$$
$$\varepsilon_s \Delta V = q\,(n + n_{tr}(f_{tr}) - p - C). \qquad (6)$$
For the current densities $J_n$, $J_p$ we use the simplest possible model, the drift-diffusion ansatz, with constant mobilities $\mu_n$, $\mu_p$, and with thermal voltage $U_T$. Moreover, since the trapped states have fixed positions, no flux appears in (3). By $R_n$ and $R_p$ we denote the recombination-generation rates for $n$ and $p$, respectively. The rate constants are $\tau_n(E)$, $\tau_p(E)$, $n_0(E)$, $p_0(E)$, where $n_0(E)\,p_0(E) = n_i^2$ with the energy independent intrinsic density $n_i$.

Integration of (3) yields
$$\partial_t n_{tr} = R_p - R_n. \qquad (7)$$
By adding equations (4), (5), (7), we obtain the continuity equation
$$\partial_t(p - n - n_{tr}) + \nabla\cdot(J_n + J_p) = 0, \qquad (8)$$
with the total charge density $p - n - n_{tr}$ and the total current density $J_n + J_p$. In the Poisson equation (6), $V(x,t)$ is the electrostatic potential, $\varepsilon_s$ the permittivity of the semiconductor material, $q$ the elementary charge, and $C = C(x)$ the given doping profile.

Note that if $\tau_n$, $\tau_p$, $n_0$, $p_0$ are independent of $E$, or if there exists only one trap level $E_{tr}$ with $M_{tr}(E) = N_{tr}\,\delta(E - E_{tr})$, then
$$R_n = \frac{1}{\tau_n}\left[n_0 \frac{n_{tr}}{N_{tr}} - n\left(1 - \frac{n_{tr}}{N_{tr}}\right)\right], \qquad R_p = \frac{1}{\tau_p}\left[p_0\left(1 - \frac{n_{tr}}{N_{tr}}\right) - p\,\frac{n_{tr}}{N_{tr}}\right],$$
and the equations (4), (5) together with (7) are a closed system governing the evolution of $n$, $p$, and $n_{tr}$.

We now introduce a scaling of $n$, $p$, and $f_{tr}$ in order to render the equations (4)-(6) dimensionless:

Scaling of parameters:
i. $M_{tr} \to \frac{N_{tr}}{E_c - E_v} M_{tr}$.
ii. $\tau_{n,p} \to \bar\tau\,\tau_{n,p}$, where $\bar\tau$ is a typical value for $\tau_n$ and $\tau_p$.
iii. $\mu_{n,p} \to \bar\mu\,\mu_{n,p}$, where $\bar\mu$ is a typical value for $\mu_{n,p}$.
The rate constants areτn(E),τp(E),n0(E),p0(E),where n0(E)p0(E)=n i2with the energy independent intrinsic density n i.Integration of(3)yields∂t n tr=R p−R n.(7) By adding equations(4),(5),(7),we obtain the continuity equation∂t(p−n−n tr)+∇·(J n+J p)=0,(8) with the total charge density p−n−n tr and the total current density J n+J p.In the Poisson equation(6),V(x,t)is the electrostatic potential,εs the permittivity of the semiconductor material,q the elementary charge,and C=C(x)the given doping profile.Note that ifτn,τp,n0,p0are independent from E,or if there exists only one trap level E trwith M tr(E)=N trδ(E−E tr),then R n=1τn [n0n trN tr−n(1−n tr N tr)],R p=1τp[p0(1−n tr N tr)−p n tr N tr],and the equations(4),(5)together with(7)are a closed system governing the evolution of n,p,and n tr.We now introduce a scaling of n,p,and f tr in order to render the equations(4)-(6) dimensionless:Scaling of parameters:i.M tr→N tr E c−E v M tr.ii.τn,p→¯ττn,p,where¯τis a typical value forτn andτp.iii.µn,p→¯µµn,p,where¯µis a typical value forµn,p.4iv.(n 0,p 0,n i ,C )→¯C(n 0,p 0,n i ,C ),where ¯C is a typical value of C .Scaling of unknowns:v.(n,p )→¯C(n,p ).vi.n tr →N tr n tr .vii.V →U T V .viii.f tr →f tr .Scaling of independent variables:ix.E →E v +(E c −E v )E .x.x →√¯µU T ¯τx ,where the reference length is a typical diffusion length before recombina-tion.xi.t →¯τt ,where the reference time is a typical carrier life time.Dimensionless parameters:xii.λ= εs q ¯C ¯µ¯τ=1¯x εs U T q ¯C is the scaled Debye length.xiii.ε=N tr ¯C is the ratio of the density of traps to the typical doping density,and will be assumed to be small:ε≪1.The scaled system reads:ε∂t f tr =S p (p,f tr )−S n (n,f tr ),S p =1τpp 0(1−f tr )−pf tr ,S n =1τn n 0f tr −n (1−f tr ) ,(9)∂t n =∇·J n +R n (n,f tr ),J n =µn (∇n −n ∇V ),R n = 10S n M tr dE ,(10)∂t p =−∇·J p +R p (p,f tr ),J p =−µp (∇p +p ∇V ),R p = 10S p M tr dE ,(11)λ2∆V =n +εn tr −p −C ,n tr (f tr )= 10f tr M tr dE ,(12)with n 0(E )p 0(E )=n 2i and 10M tr 
dE =1.5By lettingε→0in(9)formally,we obtain f tr=τn p0+τp nτn(p+p0)+τp(n+n0),and the reduced systemhas the following form∂t n=∇·J n+R(n,p),(13)∂t p=−∇·J p+R(n,p),(14)R(n,p)=(n i2−np) 10M tr(E)τn(E)(p+p0(E))+τp(E)(n+n0(E))dE,(15)λ2∆V=n−p−C.(16) Note that ifτn,τp,n0,p0are independent from E or if there exists only one trap level,thenwe would have the standard Shockley-Read-Hall model,with R=n i2−npτn(p+p0)+τp(n+n0).Existenceand uniqueness of solutions of the limiting system(13)–(16)under the assumptions(21)–(25) stated below is a standard result in semiconductor modelling.A proof can be found in,e.g., [10].3Rigorous derivation of the drift-diffusion Shockley-Read-Hall modelWe consider the system(9)–(12)with the position x varying in a bounded domainΩ∈R3 (all our results are easily extended to the one-and two-dimensional situations),the energy E∈(0,1),and time t>0,subject to initial conditionsn(x,0)=n I(x),p(x,0)=p I(x),f tr(x,E,0)=f tr,I(x,E)(17) and mixed Dirichlet-Neumann boundary conditionsn(x,t)=n D(x,t),p(x,t)=p D(x,t),V(x,t)=V D(x,t)x∈∂ΩD⊂∂Ω(18) and∂n ∂ν(x,t)=∂p∂ν(x,t)=∂V∂ν(x,t)=0x∈∂ΩN:=∂Ω\∂ΩD,(19)whereνis the unit outward normal vector along∂ΩN.We permit the special cases thateither∂ΩD or∂ΩN are empty.More precisely,we assume that either∂ΩD has positive(d−1)-dimensional measure,or it is empty.In the second situation(∂ΩD empty)we haveto assume total charge neutrality,i.e.,Ω(n+εn tr−p−C)dx=0,if∂Ω=∂ΩN.(20)The potential is then only determined up to a(physically irrelevant)additive constant.The following assumptions on the data will be used:For the boundary datan D,p D∈W1,∞loc(Ω×R+t),V D∈L∞loc(R+t,W1,6(Ω)),(21)6for the initial datan I ,p I ∈H 1(Ω)∩L ∞(Ω),0≤f tr,I ≤1,(22) Ω(n I +εn tr (f tr,I )−p I −C )dx =0,if ∂Ω=∂ΩN ,(23)for the doping profile C ∈L ∞(Ω),(24)for the recombination-generation rate constants n 0,p 0,τn ,τp ∈L ∞((0,1)),τn ,τp ≥τmin >0.(25)With these assumptions,a local existence and uniqueness result for the problem (9)–(12),(17)–(19)for fixed 
positive $\varepsilon$ can be proven by a straightforward extension of the approach in [5] (see also [10]). In the following, local existence will be assumed, and we shall concentrate on obtaining bounds which guarantee global existence and which are uniform in $\varepsilon$ as $\varepsilon \to 0$. For the sake of simplicity, we consider that the data in (21), (22) and (24) do not depend on $\varepsilon$; of course, our strategy works dealing with sequences of data bounded in the mentioned spaces.

The following result is a generalization of [6, Lemma 3.1], where the case of homogeneous Neumann boundary conditions and vanishing recombination was treated. Our proof uses a similar approach.

Lemma 3.1. Let the assumptions (21)-(25) be satisfied. Then the solution of (9)-(12), (17)-(19) exists for all times and satisfies $n, p \in L^\infty_{loc}((0,\infty), L^\infty(\Omega)) \cap L^2_{loc}((0,\infty), H^1(\Omega))$ uniformly in $\varepsilon$ as $\varepsilon \to 0$, as well as $0 \leq f_{tr} \leq 1$.

Proof. Global existence will be a consequence of the following estimates. Introducing the new variables $\tilde n = n - n_D$, $\tilde p = p - p_D$, $\tilde C = C - \varepsilon n_{tr} - n_D + p_D$, the equations (10)-(12) take the following form:
$$\partial_t \tilde n = \nabla\cdot J_n + R_n - \partial_t n_D, \qquad J_n = \mu_n\bigl(\nabla\tilde n + \nabla n_D - (\tilde n + n_D)\nabla V\bigr), \qquad (26)$$
$$\partial_t \tilde p = -\nabla\cdot J_p + R_p - \partial_t p_D, \qquad J_p = -\mu_p\bigl(\nabla\tilde p + \nabla p_D + (\tilde p + p_D)\nabla V\bigr), \qquad (27)$$
$$\lambda^2 \Delta V = \tilde n - \tilde p - \tilde C. \qquad (28)$$
As a consequence of $0 \leq f_{tr} \leq 1$, $\tilde C \in L^\infty((0,\infty) \times \Omega)$ holds.

For $q \geq 2$ and even, we multiply (26) by $\tilde n^{q-1}/\mu_n$, (27) by $\tilde p^{q-1}/\mu_p$, and add:
$$\begin{aligned}
\frac{d}{dt}\int_\Omega\left(\frac{\tilde n^q}{q\mu_n} + \frac{\tilde p^q}{q\mu_p}\right)dx
={}& -(q-1)\int_\Omega \tilde n^{q-2}\nabla\tilde n\cdot\nabla n\,dx - (q-1)\int_\Omega \tilde p^{q-2}\nabla\tilde p\cdot\nabla p\,dx\\
&+(q-1)\int_\Omega\bigl(\tilde n^{q-2}n\nabla\tilde n - \tilde p^{q-2}p\nabla\tilde p\bigr)\cdot\nabla V\,dx\\
&+\int_\Omega\frac{\tilde n^{q-1}}{\mu_n}(R_n - \partial_t n_D)\,dx + \int_\Omega\frac{\tilde p^{q-1}}{\mu_p}(R_p - \partial_t p_D)\,dx\\
=:{}& I_1 + I_2 + I_3 + I_4 + I_5. \qquad (29)
\end{aligned}$$
Using the assumptions on $n_D$, $p_D$ and $|R_n| \leq C(n+1)$, $|R_p| \leq C(p+1)$, we estimate
$$I_4 \leq C\int_\Omega |\tilde n|^{q-1}(n+1)\,dx \leq C\left(\int_\Omega \tilde n^q\,dx + 1\right), \qquad I_5 \leq C\left(\int_\Omega \tilde p^q\,dx + 1\right).$$
The term $I_3$ can be rewritten as follows:
$$\begin{aligned}
I_3 ={}& \int_\Omega\bigl[\tilde n^{q-1}\nabla\tilde n - \tilde p^{q-1}\nabla\tilde p\bigr]\cdot\nabla V\,dx + \int_\Omega \tilde n^{q-2}\nabla\tilde n\,(n_D\nabla V)\,dx - \int_\Omega \tilde p^{q-2}\nabla\tilde p\,(p_D\nabla V)\,dx\\
={}& -\frac{1}{\lambda^2 q}\int_\Omega[\tilde n^q - \tilde p^q](\tilde n - \tilde p - \tilde C)\,dx
-\frac{1}{\lambda^2(q-1)}\int_\Omega \tilde n^{q-1}\bigl(\nabla n_D\cdot\nabla V + n_D(\tilde n - \tilde p - \tilde C)\bigr)dx\\
&+\frac{1}{\lambda^2(q-1)}\int_\Omega \tilde p^{q-1}\bigl(\nabla p_D\cdot\nabla V + p_D(\tilde n - \tilde p - \tilde C)\bigr)dx.
\end{aligned}$$
The second equality uses integration by parts and (28). The first term on the right
hand side is the only term of degree $q+1$. It reflects the quadratic nonlinearity of the problem. Fortunately, it can be written as the sum of a term of degree $q$ and a nonnegative term. By estimation of the terms of degree $q$, using the assumptions on $n_D$ and $p_D$ as well as
$$\|\nabla V\|_{L^q(\Omega)} \leq C\bigl(\|\tilde n\|_{L^q(\Omega)} + \|\tilde p\|_{L^q(\Omega)} + \|\tilde C\|_{L^q(\Omega)}\bigr),$$
we obtain
$$I_3 \leq -\frac{1}{\lambda^2 q}\int_\Omega[\tilde n^q - \tilde p^q](\tilde n - \tilde p)\,dx + C\left(\int_\Omega(\tilde n^q + \tilde p^q)\,dx + 1\right) \leq C\left(\int_\Omega(\tilde n^q + \tilde p^q)\,dx + 1\right).$$
The integral $I_1$ can be written as
$$I_1 = -\int_\Omega \tilde n^{q-2}|\nabla n|^2\,dx + \int_\Omega \tilde n^{q-2}\nabla n_D\cdot\nabla n\,dx. \qquad (30)$$
By rewriting the integrand in the second integral as
$$\tilde n^{q-2}\nabla n_D\cdot\nabla n = \bigl(\tilde n^{(q-2)/2}\nabla n\bigr)\cdot\bigl(\tilde n^{(q-2)/2}\nabla n_D\bigr)$$
and applying the Cauchy-Schwarz inequality, we have the following estimate for (30):
$$\begin{aligned}
I_1 &\leq -\int_\Omega \tilde n^{q-2}|\nabla n|^2\,dx + \left(\int_\Omega \tilde n^{q-2}|\nabla n|^2\,dx\right)^{1/2}\left(\int_\Omega \tilde n^{q-2}|\nabla n_D|^2\,dx\right)^{1/2}\\
&\leq -\frac{1}{2}\int_\Omega \tilde n^{q-2}|\nabla n|^2\,dx + C\|\tilde n\|^{q-2}_{L^q}
\leq -\frac{1}{2}\int_\Omega \tilde n^{q-2}|\nabla n|^2\,dx + C\left(\int_\Omega \tilde n^q\,dx + 1\right).
\end{aligned}$$
lemma(see,e.g.,Simon[14,Corollary 4,p.85])gives compactness of n and p in L2((0,T)×Ω).We already know from the Poisson equation that∇V∈L∞((0,T),H1(Ω)).By taking the time derivative of(12),one obtains∂t∆V=∇·(J n+J p),with the consequence that∂t∇V is bounded in L2((0,T)×Ω).Therefore,the Aubin lemma can again be applied as above to prove compactness of∇V in L2((0,T)×Ω).These results and the weak compactness of f tr are sufficient for passing to the limit in the nonlinear terms n∇V,p∇V,nf tr,and pf tr.Let us also remark that∂t n and∂t p are bounded in L2(0,T;H−1(Ω)),so that n,p are compact in C0([0,T];L2(Ω)−weak).With this remark the initial data for the limit equation makes sense.By the uniqueness result for the limiting problem(mentioned at the end of Section2),the convergence is not restricted to subsequences.94A kinetic Shockley-Read-Hall modelIn this section we replace the drift-diffusion model for electrons and holes by a semiclassical kinetic transport model.It is governed by the system∂t f n+v n(k)·∇x f n+q∇x V·∇k f n=Q n(f n)+Q n,r(f n,f tr),(32)∂t f p+v p(k)·∇x f p−q ∇x V·∇k f p=Q p(f p)+Q p,r(f p,f tr),(33)∂t f tr=S p(f p,f tr)−S n(f n,f tr),(34)εs∆x V=q(n+n tr−p−C),(35) where f i(x,k,t)represents the particle distribution function(with i=n for electrons and i=p for holes)at time t≥0,at the position x∈R3,and at the wave vector(or generalized momentum)k∈R3.All functions of k have the periodicity of the reciprocal lattice of the semiconductor crystal.Equivalently,we shall consider only k∈B,where B is the Brillouin zone,i.e.,the set of all k which are closer to the origin than to any other lattice point,with periodic boundary conditions on∂B.The coefficient functions v n(k)and v p(k)denote the electron and hole velocities,respec-tively,which are related to the electron and hole band diagrams byv n(k)=∇kεn(k)/ ,v p(k)=−∇kεp(k)/ ,where is the reduced Planck constant.The elementary charge is still denoted by q.The collision operators Q n and Q p describe the 
interactions between the particles and the crystal lattice.They involve several physical phenomena and can be written in the general formQ n(f n)= B Φn(k,k′)[M n f′n(1−f n)−M′n f n(1−f′n)]dk′,(36)Q p(f p)= B Φp(k,k′)[M p f′p(1−f p)−M′p f p(1−f′p)]dk′,(37) with the primes denoting evaluation at k′,with the nonnegative,symmetric scattering cross sections Φn(k,k′)and Φp(k,k′),and with the MaxwelliansM n(k)=c n exp(−εn(k)/k B T),M p(k)=c p exp(−εp(k)/k B T),where k B T is the thermal energy of the semiconductor crystal lattice and the constants c n, c p are chosen such that B M n dk= B M p dk=1.The remaining collision operators Q n,r(f n,f tr)and Q p,r(f p,f tr)model the generation and recombination processes and are given byQ n,r(f n,f tr)= E c E vˆS n(f n,f tr)M tr dE,(38)10withˆS n (f n ,f tr )=Φn (k,E )N tr[n 0M n f tr (1−f n )−f n (1−f tr )],and Q p,r (f p ,f tr )= E c E vˆS p (f p ,f tr )M tr dE ,(39)with ˆS p (f p ,f tr )=Φp (k,E )N tr[p 0M p (1−f p )(1−f tr )−f p f tr ],and where Φn,p are non negative and M tr (x,E )is the density of available trapped states as for the drift diffusion model,except that we allow for a position dependence now.This will be commented on below.The parameter N tr is now determined as N tr =sup x ∈R 3 10M tr (x,E )dE .The right hand side in the equation for the occupancy f tr (x,E,t )of the trapped states is defined byS n (f n ,f tr )= BˆS n dk =λn [n 0M n (1−f n )]f tr −λn [f n ](1−f tr ),(40)with λn [g ]= B Φn g dk ,andS p (f p ,f tr )=BˆS p dk =λp [p 0M p (1−f p )](1−f tr )−λp [f p ]f tr ,(41)with λp [g ]= B Φp g dk .The factors (1−f n )and (1−f p )take into account the Pauli exclusion principle,which therefore manifests itself in the requirement that the values of the distribution function have to respect the bounds 0≤f n ,f p ≤1.The position densities on the right hand side of the Poisson equation (35)are given byn (x,t )= B f n dk ,p (x,t )= B f p dk ,n tr (x,t )=E cE v f tr M tr dE.The following scaling,which is strongly 
related to the one used for the drift-diffusion model,will render the equations (32)-(35)dimensionless:Scaling of parameters:i.M tr →N tr E v −E cM tr ,ii.(εn ,εp )→k B T (εn ,εp ),with the thermal energy k B T ,iii.(Φn ,Φp )→τ−1rg (Φn ,Φp ),where τrg is a typical carrier life time,iv.( Φn , Φp )→τ−1coll( Φn , Φp ),v.(n 0,p 0,C )→C (n 0,p 0,C ),where C is a typical value of |C |,11vi.(M n,M p)→C−1(M n,M p).Scaling of independent variables:vii.x→k B T√τrgτcoll C−1/3 −1x,viii.t→τrg t,ix.k→C1/3k,x.E→E v+(E c−E v)E,Scaling of unknowns:xi.(f n,f p,f tr)→(f n,f p,f tr),xii.V→U T V,with the thermal voltage U T=k B T/q.Dimensionless parameters:xiii.α2=τcoll,τrgxiv.λ=q√τrgτcoll C1/6 εs k B T,,where again we shall study the situationε≪1.xv.ε=N trCFinally,the scaled system readsα2∂t f n+αv n(k)·∇x f n+α∇x V·∇k f n=Q n(f n)+α2Q n,r(f n,f tr),(42)α2∂t f p+αv p(k)·∇x f p−α∇x V·∇k f p=Q p(f p)+α2Q p,r(f p,f tr),(43)ε∂t f tr=S p(f p,f tr)−S n(f n,f tr),(44)λ2∆x V=n+εn tr−p−C=−ρ,(45) with v n=∇kεn,v p=−∇kεp,with Q n and Q p still having the form(36)and,respectively, (37),with the scaled MaxwelliansM n(k)=c n exp(−εn(k)),M p(k)=c p exp(−εp(k)),(46) and with the recombination-generation termsQ n,r(f n,f tr)= 10ˆS n M tr dE,Q p,r(f p,f tr)= 10ˆS p M tr dE,(47) withˆS=Φn[n0M n f tr(1−f n)−f n(1−f tr)],ˆS p=Φp[p0M p(1−f tr)(1−f p)−f p f tr].(48) n12The right hand side of (44)still has the form (40),(41).The position densities are given byn = B f n dk ,p = B f p dk ,n tr = 1f tr M tr dE.(49)The system (42)–(44)conserves the total charge ρ=p +C −n −εn tr .With the definitionJ n =−1α B v n f n dk ,J p =1α Bv p f p dk ,of the current densities,the following continuity equation holds formally:∂t ρ+∇x ·(J n +J p )=0.Setting formally ε=0in (44)we obtainf tr (f n ,f p )=p 0λp [M p (1−f p )]+λn [f n ]p 0λp [M p (1−f p )]+λp [f p ]+λn [f n ]+n 0λn [M n (1−f n )]Substitution f tr into (47)leads to the kinetic Shockley-Read-Hall recombination-generation operatorsQ n,r (f n ,f p )=g n [f n ,f p 
](1−f n )−r n [f n ,f p ]f n ,Q p,r (f n ,f p )=g p [f n ,f p ](1−f p )−r p [f n ,f p ]f p ,(50)withg n =10Φn M n n 0 p 0λp [M p (1−f p )]+λn [f n ] M tr p 0λp [M p (1−f p )]+λp [f p ]+λn [f n ]+n 0λn [M n (1−f n )]dE ,r n =10Φn λp [f p ]+n 0λn [M n (1−f n )] M tr p 0λp [M p (1−f p )]+λp [f p ]+λn [f n ]+n 0λn [M n (1−f n )]dE ,g p =10Φp M p p 0 n 0λn [M n (1−f n )]+λp [f p ] M tr p 0λp [M p (1−f p )]+λp [f p ]+λn [f n ]+n 0λn [M n (1−f n )]dE ,r p = 1Φp λn [f n ]+p 0λp [M p (1−f p )] M trp 0λp [M p (1−f p )]+λp [f p ]+λn [f n ]+n 0λn [M n (1−f n )]dE.Of course,the limiting model still conserves charge,which is expressed by the identityB Q n,r dk = BQ p,r dk.Pairs of electrons and holes are generated or recombine,however,in general not with the same wave vector.This absence of momentum conservation is reasonable since the process involves an interaction with the trapped states fixed within the crystal lattice.135Rigorous derivation of the kinetic Shockley-Read-Hall modelThe limitε→0will be carried out rigorously in an initial value problem for the kineticmodel with x∈R3.Concerning the behaviour for|x|→∞,we shall require the densities tobe in L1and use the Newtonian potential solution of the Poisson equation,i.e.,(45)will bereplaced byE(x,t)=−∇x V=λ−2 R3x−y|x−y|3ρ(y,t)dy.(51) We define Problem(K)as the system(42)–(44),(51)with(36),(37),(47)–(49),(40),and(41),subject to the initial conditionsf n(x,k,0)=f n,I(x,k),f p(x,k,0)=f p,I(x,k),f tr(x,E,0)=f tr,I(x,E).We start by stating our assumptions on the data.For the velocities we assumev n,v p∈W1,∞per(B),(52) where here and in the following,the subscript per denotes Sobolev spaces of functions of ksatisfying periodic boundary conditions on∂B.Further we assume that the cross sectionssatisfyΦn, Φp≥0, Φn, Φp∈W1,∞per(B×B),(53) andΦn,Φp≥0,Φn,Φn∈W1,∞per(B×(0,1)).(54) Afinite total number of trapped states is assumed:M tr≥0,M tr∈W1,∞(R3×(0,1))∩W1,1(R3×(0,1)).The L1-assumption with respect to x is needed for controlling the 
total number of generated particles. For the initial data we assume
$$0 \leq f_{n,I}, f_{p,I} \leq 1, \quad f_{n,I}, f_{p,I} \in W^{1,\infty}_{per}(\mathbb{R}^3 \times B) \cap W^{1,1}_{per}(\mathbb{R}^3 \times B), \quad 0 \leq f_{tr,I} \leq 1, \quad f_{tr,I} \in W^{1,\infty}(\mathbb{R}^3 \times (0,1)). \qquad (55)$$
We also assume
$$n_0, p_0 \in L^\infty((0,1)), \qquad C \in W^{1,\infty}(\mathbb{R}^3) \cap W^{1,1}(\mathbb{R}^3). \qquad (56)$$
Finally, we need an upper bound for the life time of trapped electrons:
$$\int_B\bigl(\Phi_n\min\{1, n_0 M_n\} + \Phi_p\min\{1, p_0 M_p\}\bigr)\,dk \geq \gamma > 0. \qquad (57)$$
The reason for the various differentiability assumptions above is that we shall construct smooth solutions by an approach along the lines of [11], which goes back to [4].

An essential tool are the following potential theory estimates [15]:
$$\|E\|_{L^\infty(\mathbb{R}^3)} \leq C\,\|\rho\|^{1/3}_{L^1(\mathbb{R}^3)}\,\|\rho\|^{2/3}_{L^\infty(\mathbb{R}^3)}, \qquad (58)$$
$$\|\nabla_x E\|_{L^\infty(\mathbb{R}^3)} \leq C\bigl(1 + \|\rho\|_{L^1(\mathbb{R}^3)} + \|\rho\|_{L^\infty(\mathbb{R}^3)}\bigr)\bigl(1 + \log(1 + \|\nabla_x\rho\|_{L^\infty(\mathbb{R}^3)})\bigr). \qquad (59)$$
We start by rewriting the collision and recombination-generation operators as
$$Q_i(f_i) = a_i[f_i](1-f_i) - b_i[f_i]\,f_i, \quad i = n, p,$$
and
$$Q_{i,r}(f_i, f_{tr}) = g_i[f_{tr}](1-f_i) - r_i[f_{tr}]\,f_i, \quad i = n, p,$$
with
$$a_i[f_i] = \int_B \tilde\Phi_i M_i f_i'\,dk', \qquad b_i[f_i] = \int_B \tilde\Phi_i M_i'(1-f_i')\,dk', \quad i = n, p,$$
$$g_n[f_{tr}] = \int_0^1 \Phi_n n_0 M_n f_{tr} M_{tr}\,dE, \qquad g_p[f_{tr}] = \int_0^1 \Phi_p p_0 M_p(1-f_{tr}) M_{tr}\,dE,$$
$$r_n[f_{tr}] = \int_0^1 \Phi_n(1-f_{tr}) M_{tr}\,dE, \qquad r_p[f_{tr}] = \int_0^1 \Phi_p f_{tr} M_{tr}\,dE.$$
In order to construct an approximating sequence $(f_n^j, f_p^j, f_{tr}^j, E^j)$, we begin with
$$f_i^0(x,k,t) = f_{i,I}(x,k), \quad i = n, p, \qquad f_{tr}^0(x,E,t) = f_{tr,I}(x,E). \qquad (60)$$
The field always satisfies
$$E^j(x,t) = \int_{\mathbb{R}^3}\frac{x-y}{|x-y|^3}\,\rho^j(y,t)\,dy. \qquad (61)$$
Let $(f_n^j, f_p^j, f_{tr}^j, E^j)$ be given. Then the $f_i^{j+1}$ are defined as the solutions of the following problem:
$$\begin{aligned}
\alpha^2\partial_t f_n^{j+1} + \alpha v_n(k)\cdot\nabla_x f_n^{j+1} - \alpha E^j\cdot\nabla_k f_n^{j+1} &= \bigl(a_n[f_n^j] + \alpha^2 g_n[f_{tr}^j]\bigr)\bigl(1 - f_n^{j+1}\bigr) - \bigl(b_n[f_n^j] + \alpha^2 r_n[f_{tr}^j]\bigr)f_n^{j+1},\\
\alpha^2\partial_t f_p^{j+1} + \alpha v_p(k)\cdot\nabla_x f_p^{j+1} + \alpha E^j\cdot\nabla_k f_p^{j+1} &= \bigl(a_p[f_p^j] + \alpha^2 g_p[f_{tr}^j]\bigr)\bigl(1 - f_p^{j+1}\bigr) - \bigl(b_p[f_p^j] + \alpha^2 r_p[f_{tr}^j]\bigr)f_p^{j+1},\\
\varepsilon\,\partial_t f_{tr}^{j+1} &= \bigl(p_0\lambda_p[M_p(1-f_p^j)] + \lambda_n[f_n^j]\bigr)\bigl(1 - f_{tr}^{j+1}\bigr) - \bigl(n_0\lambda_n[M_n(1-f_n^j)] + \lambda_p[f_p^j]\bigr)f_{tr}^{j+1},
\end{aligned} \qquad (62)$$
subject to the initial conditions
$$f_n^{j+1}(x,k,0) = f_{n,I}(x,k), \quad f_p^{j+1}(x,k,0) = f_{p,I}(x,k), \quad f_{tr}^{j+1}(x,E,0) = f_{tr,I}(x,E). \qquad (63)$$
For the iterative sequence we state the following lemma, which is very similar to Proposition 3.1 from [11]:

Lemma 5.1. Let the assumptions (52)-(56) be satisfied. Then the sequence $(f_n^j, f_p^j, f_{tr}^j, E^j)$,
defined by (60)-(63), satisfies for any time $T > 0$:
a) $0 \leq f_i^j \leq 1$, $i = n, p, tr$.
b) $f_n^j$ and $f_p^j$ are uniformly bounded with respect to $j \to \infty$ and $\varepsilon \to 0$ in $L^\infty((0,T), L^1(\mathbb{R}^3 \times B))$.
c) $E^j$ is uniformly bounded with respect to $j$ and $\varepsilon$ in $L^\infty((0,T) \times \mathbb{R}^3)$.

Proof. The first two equations in (62) are standard linear transport equations, and the third equation is a linear ODE. Existence and uniqueness for the initial value problems are therefore standard results. Note that the $a_i$, $b_i$, $g_i$, $r_i$, and $\lambda_i$ in (62) are nonnegative if we assume that a) holds for $j$. Then a) for $j+1$ is an immediate consequence of the maximum principle.

To estimate the $L^1$-norms of the distributions, we integrate the first equation in (62) and obtain
$$\|f_n^{j+1}\|_{L^1(\mathbb{R}^3 \times B)} \leq \|f_{n,I}\|_{L^1(\mathbb{R}^3 \times B)} + \int_0^t \left\|\frac{a_n[f_n^j]}{\alpha^2} + g_n[f_{tr}^j]\right\|_{L^1(\mathbb{R}^3 \times B)}(s)\,ds. \qquad (64)$$
The boundedness of $\tilde\Phi_n$, $\Phi_n$, and $f_{tr}^j$, and the integrability of $M_{tr}$ imply
$$\|a_n[f_n^j]\|_{L^1(\mathbb{R}^3 \times B)} \leq C\,\|f_n^j\|_{L^1(\mathbb{R}^3 \times B)}, \qquad \|g_n[f_{tr}^j]\|_{L^1(\mathbb{R}^3 \times B)} \leq C. \qquad (65)$$
Now this is used in (64). Then an estimate is derived for $f_n^j$ by replacing $j+1$ by $j$ and using the Gronwall inequality. Finally, it is easily seen that this estimate is passed from $j$ to $j+1$ by (64). An analogous argument for $f_p^j$ completes the proof of b).

A uniform-in-$\varepsilon$ $(L^1 \cap L^\infty)$-bound for the total charge density $\rho^j = n^j + \varepsilon n_{tr}^j - p^j - C$ follows from b) and from the integrability of $M_{tr}$. The statement c) of the lemma is now a consequence of (58).

For passing to the limit in the nonlinear terms some compactness is needed. Therefore we prove uniform smoothness of the approximating sequence.

Lemma 5.2. Let the assumptions (52)-(57) be satisfied. Then for any time $T > 0$:
a) $f_n^j$ and $f_p^j$ are uniformly bounded with respect to $j$ and $\varepsilon$ in $L^\infty((0,T), W^{1,1}_{per}(\mathbb{R}^3 \times B) \cap W^{1,\infty}_{per}(\mathbb{R}^3 \times B))$,
b) $f_{tr}^j$ is uniformly bounded with respect to $j$ and $\varepsilon$ in $L^\infty((0,T), W^{1,\infty}(\mathbb{R}^3 \times (0,1)))$,
c) $E^j$ is uniformly bounded with respect to $j$ and $\varepsilon$ in $L^\infty((0,T), W^{1,\infty}(\mathbb{R}^3))$.
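The formal quasistationary limit of Section 2 above, relaxing equation (9) until $S_p = S_n$ and then evaluating the classical SRH rate, can be checked numerically. The following is a toy Python sketch with made-up, energy-independent parameters and $\varepsilon = 1$ (not part of the paper); the function names are chosen here for illustration only.

```python
# Toy numerical check of the quasistationary limit (drift-diffusion case,
# energy-independent coefficients). All parameter values are made up.
# We relax df_tr/dt = S_p - S_n (eq. (9) with epsilon = 1) by forward Euler
# and compare with the closed-form limit and the classical SRH rate.

def f_tr_limit(n, p, n0, p0, tau_n, tau_p):
    """Quasistationary trap occupancy: the solution of S_p = S_n for f_tr."""
    return (tau_n * p0 + tau_p * n) / (tau_n * (p + p0) + tau_p * (n + n0))

def srh_rate(n, p, ni, n0, p0, tau_n, tau_p):
    """Classical Shockley-Read-Hall recombination-generation rate R(n, p)."""
    return (ni**2 - n * p) / (tau_n * (p + p0) + tau_p * (n + n0))

n, p, n0, p0, tau_n, tau_p, ni = 2.0, 0.5, 1.0, 1.0, 1.0, 1.0, 1.0

# Forward-Euler relaxation of f_tr toward its quasistationary value.
f = 0.0
dt = 0.1
for _ in range(200):
    s_p = (p0 * (1 - f) - p * f) / tau_p   # hole exchange with the trap
    s_n = (n0 * f - n * (1 - f)) / tau_n   # electron exchange with the trap
    f += dt * (s_p - s_n)

print(f, f_tr_limit(n, p, n0, p0, tau_n, tau_p))       # both approx 2/3
print(srh_rate(1.0, 1.0, 1.0, n0, p0, tau_n, tau_p))   # 0.0 at n*p = ni^2
```

The relaxed occupancy agrees with the closed-form $f_{tr}$ of (13)-(16), and the rate vanishes exactly at equilibrium $np = n_i^2$, mirroring the convergence statement of Theorem 3.2 in this simplest setting.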
Looking Ahead at Artificial Intelligence (English essay)

The Future of Artificial Intelligence: Shaping a Brighter Tomorrow

Artificial intelligence (AI) has undoubtedly become one of the most transformative technologies of our time. From virtual assistants that streamline our daily tasks to autonomous vehicles that revolutionize transportation, the impact of AI is pervasive and far-reaching. As we gaze into the future, the potential of this technology to reshape our world is both exciting and thought-provoking. In this essay, we will explore the vast landscape of AI and delve into the possibilities that lie ahead, examining the ways in which this remarkable innovation can contribute to a more prosperous and equitable future for all.

At the heart of AI's promise is its ability to process and analyze vast amounts of data with unparalleled speed and accuracy. This capacity has already yielded remarkable advancements in fields such as healthcare, where AI-powered algorithms can assist in the early detection of diseases, personalize treatment plans, and even aid in the development of life-saving drugs. Imagine a future where AI-driven medical diagnostics become the norm, empowering healthcare professionals to provide more personalized and effective care, ultimately leading to improved patient outcomes and a more efficient healthcare system.

Beyond the realm of healthcare, AI is poised to revolutionize numerous other industries. In the realm of education, AI-powered adaptive learning platforms can tailor the learning experience to the unique needs and preferences of each student, fostering a more engaging and effective learning environment. Imagine a world where every student has access to a personalized tutor, guiding them through the complexities of their studies and helping them to reach their full potential.

In the realm of transportation, the advent of autonomous vehicles promises to transform the way we move around our cities and communities.
By removing the human element from the driving equation, AI-powered cars can navigate our roads with unprecedented precision, reducing the risk of accidents and traffic congestion. Furthermore, the integration of AI with public transportation systems can optimize route planning, schedules, and resource allocation, leading to more efficient and accessible mobility options for all.The potential of AI extends far beyond these specific applications. In the realm of sustainability and environmental protection, AI-poweredsystems can analyze vast amounts of data to identify patterns and trends, informing more effective strategies for addressing pressing challenges such as climate change, resource depletion, and ecological preservation. Imagine a future where AI-driven models can predict the impact of human activities on the environment, guiding policymakers and communities towards more sustainable practices that safeguard our planet for generations to come.Undoubtedly, the rise of AI also presents a range of ethical and societal challenges that must be carefully navigated. As AI systems become more sophisticated and integrated into our daily lives, concerns around privacy, bias, and the displacement of human labor have become increasingly salient. It is crucial that as we harness the power of AI, we do so in a manner that prioritizes transparency, accountability, and the well-being of all members of society.To this end, the development of AI must be guided by a robust ethical framework that ensures the technology is deployed in a manner that is fair, inclusive, and aligned with the values and aspirations of the communities it serves. 
This may involve the establishment of regulatory bodies, the implementation of rigorous testing and evaluation protocols, and the active engagement of diverse stakeholders in the decision-making process.Furthermore, as AI becomes more ubiquitous, it is essential that weinvest in the education and reskilling of the workforce to ensure that individuals are equipped to navigate the changing landscape of employment and remain competitive in an AI-driven economy. By proactively addressing the potential disruptions caused by AI, we can minimize the risk of social upheaval and ensure that the benefits of this technology are equitably distributed.In the face of these challenges, the future of AI holds immense promise. As we continue to push the boundaries of what is possible, we must remain steadfast in our commitment to shaping a future where AI enhances and empowers rather than replaces human potential. By fostering a culture of collaboration, innovation, and ethical stewardship, we can harness the transformative power of AI to create a world that is more prosperous, sustainable, and inclusive for all.In conclusion, the future of artificial intelligence is a tapestry of boundless possibilities. From revolutionizing healthcare and education to transforming transportation and environmental protection, the impact of AI has the potential to touch every aspect of our lives. As we navigate this exciting new frontier, it is our collective responsibility to ensure that the development and deployment of AI is guided by a deep commitment to human-centric values, ethical principles, and the pursuit of a better tomorrow for all. By embracing the promise of AI with wisdom and foresight, we canshape a future that is brighter, more equitable, and truly transformative for generations to come.。
Rigorous Analysis of (Distributed) Simulation Results

O. Kremien 1, J. Kramer 2, M. Kapelevich 1

Abstract

Formal static analysis of the correctness and complexity of scalable and adaptive algorithms for distributed systems is difficult and often not appropriate. Rather, tool support is required to facilitate the 'trial and error' approach which is often adopted. Simulation supports this experimental approach well. In this paper we discuss the need for a rigorous approach to simulation results analysis and model validation. These aspects are often neglected in simulation studies, particularly in distributed simulation. Our aim is to provide the practitioner with a set of guidelines which can be used as a 'recipe' in different simulation environments, making sound techniques (simulation and statistics) accessible to users. We demonstrate the use of the suggested analysis method with two different distributed simulators, CNCSIM [8] and NEST [3], thus illustrating its generality. The same guidelines may be used with other simulation tools to ensure meaningful results, while obviating the need to acquire more detailed knowledge in the area.

Keywords: Confidence intervals, distributed algorithms, independence, initialization bias, performance, scalability, simulation, statistical analysis, steady state.

1. Introduction

Distributed systems consist of multiple computer nodes interconnected by a communication network and supporting multiple distributed applications. A distinguishing feature of distributed systems is the inability to maintain consistent global state information at distributed points of control. A distributed system can thus be viewed as a collection of distributed decision makers taking decisions to achieve common goals under uncertain local and partial views of the system state. Formal static analysis of the correctness and complexity of such systems and associated algorithms is usually difficult, and a "trial and error" approach is often adopted. This approach may be supported by either a simulation based tool or a prototype implementation. Tool support is required which will enable rapid prototyping and assessment of different ideas, and allow one to determine whether functional and performance goals are likely to be met. Simulation supports this approach. A simulation based tool may be based on a centralized simulator or on a distributed one, making use of the multiple processors which may be available.

A simulation is a computer based statistical sampling experiment. Thus, if the results of a simulation study are to have any meaning, appropriate statistical techniques must be used to design and analyze the simulation experiments. This paper studies methods for simulation results analysis and model validation which can be automated and built into the tool selected for algorithm prototyping and assessment. Our interest in automated methods is stimulated by the existence of simulation packages (e.g. NEST [3] and many others) that are used in different application areas by a wide population of practitioners who have little knowledge of, or interest in, the intricacies of simulation results analysis. The practitioner is essentially interested in his application and in obtaining meaningful results. Our aim is to provide a set of guidelines which may be used in different environments (centralized or distributed) in order to aid the process of simulation planning and output analysis. We also illustrate by example the steps which should be taken in order to validate the tool selected. The reader need not study all the technical details involved, and can avoid gaining deep knowledge of different statistical techniques.

1 Department of Mathematics and Computer Science, Bar-Ilan University, 52900 Ramat-Gan, ISRAEL. email: {orly,michael}@macs.biu.ac.il
2 Department of Computing, Imperial College of Science, Technology & Medicine, 180 Queen's Gate, London SW7 2BZ, U.K. email: jk@
However, by using the analysis method advocated, we expect to provide guidance as to how to produce and interpret the simulation output. This is intended to help bridge the gap between the background in statistics which practitioners actually have and that which is required. By using the suggested guidelines as a 'recipe', the lengthy process of selecting the right statistical techniques out of a large collection of available ones is skipped. The building blocks forming our approach are taken from different sources, generalized, and then put under the same 'umbrella', so that the reader has at hand all the statistical tests required for rigorous analysis. We believe the main contribution of this paper lies in making sound techniques accessible to users, using experience to illustrate and demonstrate the utility of the approach. In addition, we also supply the reader with a sample implementation of the already tested methods on our Web site, the address of which is given below1. Guidance on possible use, as well as on error situations more often encountered in a distributed environment, is also included.

1 http://www.cs.biu.ac.il/~dsg/statistical_tests.html

We describe the use of tools implementing the advocated analysis method on a load sharing algorithm [10] designed to be scalable and adaptive. The same tool and analysis method can serve to analyze the results of other distributed algorithms: this particular example is chosen for illustration only. The statistical analysis method advocated has already been used in several cases where users could directly contact the authors to get the required information. The positive results of this experience, together with the generality of the approach, motivated the publication of this paper.

In Section 2 the reasons for choosing to develop a simulator are explained. Section 3 discusses some concerns regarding statistical inferences of simulation studies in general, giving guidelines for centralized or distributed systems.
Two examples of very different simulation tools implementing the suggested techniques are given in Section 4. The model is first validated, and then results from the literature available from the first tool are reproduced by the second. Conclusions and directions for future study are presented in Section 5.

2. Selecting a Tool for Rapid Assessment

2.1 Required Properties

Simulation provides good support for the "trial and error" approach. It permits flexible modeling and control over model constraints [2]. Thus, simulation can facilitate the study of a particular aspect of the system (e.g. network delay) and its effect. Tool support is required to determine if functional and performance goals are likely to be met by the implementation of a suggested solution. The following tool properties are generally desirable:

• Minimize the effort required for algorithm assessment.
• Minimize the effort required for transition to the target environment.
• Independence of the simulation environment.

The following property results from specific algorithm requirements:

• Scale-up.

Unlike distributed systems, failure of any component of the simulation can generally be allowed to cause termination of the overall computation, since aspects such as performance can no longer be assessed. Thus, fault tolerance of the simulation is generally not required.

2.2 Simulation vs. Implementation

One possible approach is to prototype an implementation. This is the most realistic. Unfortunately, implementation can be very costly and time consuming. Also, with an implementation it may not be feasible to thoroughly study various parameters of the system. On the other hand, simulation can aid in the modeling and evaluation of distributed algorithms. One has control over the parameters (e.g. system size) and events (e.g. application arrival) of the system under scrutiny. Since simulation is performed in simulation-time, aspects such as network delay can be controlled.
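This control over simulation-time is what distinguishes a simulator from a prototype. As background, the core of a generic discrete event-driven simulator can be sketched as follows; this is a hypothetical minimal sketch, not the implementation of any tool discussed in this paper:

```python
import heapq

def run_simulation(initial_events, handlers, end_time):
    """Minimal discrete-event loop: simulated time advances only when an
    event is dispatched, so quantities such as network delay are fully
    controlled by the model rather than by wall-clock time."""
    # Event queue ordered by simulated timestamp: (time, event_name, payload).
    queue = list(initial_events)
    heapq.heapify(queue)
    now = 0.0
    while queue:
        now, name, payload = heapq.heappop(queue)
        if now > end_time:
            break
        # A handler may schedule further events at any future simulated time,
        # returning (delay, next_event_name, next_payload) tuples.
        for delay, next_name, next_payload in handlers[name](now, payload):
            heapq.heappush(queue, (now + delay, next_name, next_payload))
    return now
```

With handlers such as `{"send": lambda t, p: [(0.010, "recv", p)], "recv": lambda t, p: []}`, a fixed 10 ms network delay is modeled exactly, regardless of how long the simulation itself takes to execute.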
Simulations are also repeatable, and a simulator can be used to compare different approaches to the problem under scrutiny. One expects to identify sensitivities of an algorithm to aspects such as the size of the system, the amount and frequency of state information shared, and other parameters.

In short, although simulation is less realistic than implementation, it better supports experimentation. This approach should be adopted for assessment and analysis. Nevertheless, a prototype implementation can be useful as a complement to simulation, to discover certain error situations which may not have been taken into account in a simulation. A smooth transition from simulation to prototype implementation can shorten this process.

However, in performing a simulation, care must be taken in obtaining and interpreting the results. Virtually all simulation output variables are nonstationary (the distributions of the successive observations change over time) and autocorrelated (successive observations are correlated with each other). Thus, classical statistical techniques based on independent identically distributed observations are not directly applicable. Statistical techniques applicable for simulation are discussed next.

3. Statistical Techniques

Statistical techniques are used to analyze the simulation experiments: to decide when steady state has been reached, how long a simulation run should be, and also to analyze the results themselves, including determination of the accuracy of the simulation run response (e.g. the average response time). These are discussed below.

3.1 Run Length Control

One objective of simulation output analysis is to estimate some unknown characteristic or parameter of the system being studied. The experimenter often wants not only an estimate of this parameter value, but also some measure of the estimate's precision.
Confidence intervals are widely used for this purpose [11,15].

The initial state of the simulated system must be specified each time the program is run. It is often inconvenient or impossible to ensure that these initial conditions are typical. As a result, estimation of the steady-state simulation response is complicated by the possible presence of initialization bias. For example, in steady state the queues in the system are usually of a certain non-zero length, whereas initially the queues are empty (of length zero).

When simulating a single variable (the measure of performance that is of interest) in a steady-state simulation, the desired measure of performance is defined as a limit as the length of the simulation tends to infinity. The following questions arise:

• How long should we run the simulation program?
• In which state should we start the simulated system, and how can we determine that the simulation has reached steady state? In other words, how long should we run the simulation for it to "warm up"?
• Once we have stopped the simulation run, how can we determine the accuracy of the simulation run response (e.g. the accuracy of the mean response-time estimate)?
• How do we account for there being a multiplicity of resources (e.g. processors) rather than a single one?

Solutions to these problems are based on mathematical statistics. As already mentioned, when analyzing single or multiple server systems, responses are autocorrelated, and we cannot assume they are normally and identically distributed. Several approaches can be applied for steady-state performance analysis of such systems: batching, replication and the regenerative (renewal) approach. The aim of these approaches is to create (almost) independent, normally and identically distributed responses for which classical techniques apply.

With the batching approach, we make a single, extremely long run.
We discard the initial (possibly biased) part of that single run, and divide the remaining (much larger) part of the run into a number of subruns or batches of fixed length. Our estimator of the mean response time is equal to the average of the batch averages. When using the replication approach, we start with a very long run and obtain a single observation on the steady-state response (the average response time for this run). To obtain the next observation, we start all over again using a different random stream, so that the responses are independent. With the regenerative (renewal) approach, we define a system state (e.g. all queues are empty) and run the simulation until this state is reached. This process is repeated n times, and produces n independent cycles.

The advantages of the regenerative approach are that it creates truly independent responses, and that we need not discard an initial transient to remove initialization bias. The main disadvantage is that it is not at all clear how long it will take before the renewal state is reached, especially in a distributed environment. The replication approach may also require very long runs. For the batching approach, we have to detect initialization bias only once. The independence test must be applied [6], but this is not expected to put high demands on the system. We therefore tend to recommend the batching approach, which we expect to be less demanding than the other approaches.

The run length control procedure proposed by Heidelberger and Welch [5] is extended to become applicable also in a distributed (decentralized) simulation environment. This is described next.

3.2. Batching Approach - Guidelines

In the following analysis it is assumed that the pseudo-random number generator works satisfactorily (i.e. pseudo-random numbers are truly independent). As an illustrative example, we use load sharing and assume that the measure of interest is the mean response time. The batching approach to steady-state performance analysis of simulation output is used.
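The batch-means estimator just described can be sketched as follows. This is a hypothetical illustration; the deletion fraction and batch size used here are our own assumptions, whereas in practice they are chosen by the run length control procedure:

```python
def batch_means(observations, batch_size, discard_fraction=0.1):
    """Estimate a steady-state mean with the batching approach:
    drop a (possibly biased) initial portion of one long run, then
    average fixed-size batches of the remainder."""
    start = int(len(observations) * discard_fraction)   # warm-up deletion
    kept = observations[start:]
    n_batches = len(kept) // batch_size                 # ignore a ragged tail
    batches = [
        sum(kept[i * batch_size:(i + 1) * batch_size]) / batch_size
        for i in range(n_batches)
    ]
    # The point estimate is the average of the batch averages.
    return sum(batches) / len(batches), batches
```

The list of batch averages returned alongside the point estimate is exactly what the initialization-bias and independence tests of the guidelines below operate on.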
The initial (biased) part of a very long run is thrown away, applying the Schruben et al. test [16], and the remainder is divided into a number of batches. If the batches are independent (i.e. they pass the Von Neumann test for independence - Appendix B), the confidence interval for the chosen estimator can be calculated. It is recommended that at least 100 batches from the second half of a run are used [6,15,16].

In order to control the run length, this methodology is combined with a run length control procedure proposed by Heidelberger and Welch [5]. There are six basic parameters of interest:

1. An initial batch size (we initially set batch-size = 8 observations).
2. An initial end-of-run checkpoint j1. This provides an initial run length for our search for a sequence of batches (we use j1 = 200 batches).
3. A maximum run length of jmax batches. This gives an upper bound on the run for our search for a sequence of batches (we use jmax = 500 batches).
4. A multiplicative checkpoint increment parameter I. This is used to extend the current end-of-run checkpoint jk if necessary, such that jk+1 = minimum{I×jk, jmax}, where k is the checkpoint index (we use I = 1.5).
5. The required confidence level α (we use α = 0.10).
6. A relative half-width requirement ε (we use ε = α/2).

A batch response is the result obtained from a batch. In order to minimize our run, we are looking for the smallest value of n0 such that the sequence of batch responses {X(n) | n = n0+1,..., jk} is a sample from a stationary sequence (i.e. behaving as if the batch responses or means were truly independent), where X(i) is the i-th batch response in the sequence, n0 is the start and jk is the checkpoint marking the end of the run. If the number of batches in the sequence is large enough (at least 100), then valid confidence intervals (with confidence level α) result. In small samples, however, the intervals may miss the true mean with a probability higher than α. The approach is thus to find a potentially stationary sequence and check for independence.
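Appendix B is not reproduced in this excerpt, but the flavour of the Von Neumann independence test can be conveyed by a sketch. The centring constant 2 and the variance approximation used here are the common large-sample forms; they are our assumptions and not necessarily the exact form given in Appendix B or [6]:

```python
import math

def passes_von_neumann(responses, z_crit=1.645):
    """Von Neumann ratio test: for an independent sequence, the ratio of the
    mean squared successive difference to the variance is close to 2."""
    n = len(responses)
    mean = sum(responses) / n
    variance = sum((x - mean) ** 2 for x in responses) / n
    msd = sum((responses[i + 1] - responses[i]) ** 2
              for i in range(n - 1)) / (n - 1)
    ratio = msd / variance
    # Large-sample normal approximation; reject independence if |z| is large.
    z = (ratio - 2.0) / math.sqrt(4.0 * (n - 2) / (n * n - 1))
    return abs(z) <= z_crit
```

A strongly trending sequence (e.g. batch responses still drifting through the initial transient) gives a ratio far below 2 and fails the test, while an alternating, negatively autocorrelated sequence gives a ratio near 4 and also fails.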
If positive, it is expected that the confidence interval generated will meet the required accuracy.

Step 1: Initialization-Bias Detection (Transient Test)

1. First set k = 1 (the checkpoint index) and n0 = 0. We thus test the sequence {X(n) | n = 1,..., j1} by applying the Schruben et al. test. The hypothesis is that the mean of the sequence does not change throughout the run. This is tested by checking the behavior of the mean cumulative sum process Sm = Ȳn − Ȳm, where Ȳm denotes the mean of the first m batch responses. S is highly sensitive to the presence of initialization bias. A two-sided test is applied in our case, since imposing load on the ready queues of nodes may cause them to grow, whereas applying a load sharing algorithm may have the opposite effect. Implementation details are included in Appendix A, and full details may be found in [16]. If the sequence passes the test (there is no initialization bias), then we let n0 = 0, i.e. we take the entire sequence as the stationary portion. We estimate the standard deviation for the Schruben et al. test from the second half of the data in order to reduce possible bias [6,16].

2. If it fails the test, then some observations from the initial (biased) portion of the sequence must be deleted. In our implementation, we remove the initial 10% (i.e. n0 = j1/10), and apply the test again to the truncated sequence {X(n) | n = [j1/10]+1,..., j1}. If this truncated sequence fails the test, an additional 10% is removed. We repeat this process until we have a satisfactory value for n0, or until it is concluded that there is not a sufficiently long stationary portion with which to generate the required confidence interval.

3. If we could not select a satisfactory value for n0, we set k = k+1 (using the checkpoint increment parameter) and the simulation proceeds to the next checkpoint. For instance, in our case j2 = minimum{I×j1, jmax} = minimum{1.5×200, 500} = 300, so we run the tests again on the new sequence of length 300.
The process described above is repeated at the new checkpoint in exactly the same fashion, and it is independent of the results from the previous checkpoint, i.e. a new decision is made from the entire new sequence without regard to the results of the initialization-bias test at the previous checkpoint. The simulation continues from checkpoint to checkpoint until either a stationary sequence is found (and n0 selected) or jmax is reached.

4. If jmax was reached and we could not select n0, we double the batch size and start the whole process all over again. For example, if the batch size was first set to 8, we double it to 16 and restart from the beginning.

Step 2: Independence Test

If a stationary sequence was found (i.e. we have a large enough sample from steady state), we now check for its independence by applying the Von Neumann test. Implementation details are included in Appendix B. If the hypothesis of independence is rejected:

1. If jmax has not been reached, we set k = k+1 (next checkpoint) and go back to Step 1.
2. If jmax has been reached, we double the batch size, set k = 1, and start the process all over again from Step 1.

Step 3: Confidence-Interval Accuracy

A confidence interval can now be generated from the stationary, independent sequence. We compare the estimated relative half width (ERHW) of the confidence interval with the mean response time estimate (the average over the whole sequence):

ERHW = confidence_interval_width / (2m), where m is the mean estimator.

There are three cases:

1. If ERHW ≤ ε, the simulation stops.
2. If ERHW > ε and jmax has not yet been reached, we set k = k+1 and go back to Step 1.
3.
If ERHW > ε and jmax has been reached, we double the batch size, set k = 1, and start the process all over again from Step 1.

3.3 Special Concerns for the Distributed Case

For distributed (or decentralized) simulation, matters are further complicated by the fact that not all nodes detect initialization bias, or complete gathering the required number of batches, at the same time. We refine the guidelines as follows:

a. We set the number of batches and the batch size to the same values for all nodes.
b. When a node has collected the required number of batch responses, it forwards this information to the monitor or to any centralized entity gathering statistics from all nodes, but it stops running only after all nodes have collected the required number of observations. A system-wide batch is the average of the corresponding batches of all nodes.

The run length control procedure can enhance a simulator. It is implemented in both CNCSIM and NEST, which are discussed next.

4. Illustrations of the Suggested Statistical Techniques

We illustrate the use of the suggested techniques with two simulation tools differing in their characteristics but with a common aim: prototyping and evaluation of distributed load sharing algorithms.

4.1 Simulation Initialization

Before a simulation run can be started (with any tool), the user has to set some system parameters characterizing that run. Following the setting of these parameters, the user has to define the load intensity on each node of the system. Finally, all parameters relevant to the simulated load sharing algorithm have to be set, and also those relevant to some assumptions made (e.g. network delay). A simulation run can then be started. Several distribution functions implemented in the simulator facilitate the setting of some of these parameters.

4.2 Some Performance and Efficiency Measures

Each of the tools implements the statistical technique described earlier and also the measures detailed in [9] for performance and efficiency evaluation.
In our case, the most important measure is:

• average response-time. This is specified in Fig. 1.

Fig. 1: system performance (response time)

This measure, together with those detailed in [9], permits objective evaluation and comparison of load sharing algorithms. CNCSIM was designed especially for this purpose, whereas the second tool is a general NEtwork Simulation Testbed (NEST). The tools also implement the statistical inference models described earlier in this paper. The tools used are briefly described next.

4.3. CNCSIM

CNCSIM [8] is a decentralized discrete event-driven simulator implemented in CONIC. CNCSIM comprises several modules, some of which encapsulate the load sharing algorithm. There are a few predefined entry points for routines and data structures which have to be programmed by the algorithm designer. These entries provide for algorithm binding, and describe the algorithm-dependent procedures (initialization, control decision taking, and state update message reception/transmission), data structures (algorithm parameters, messages exchanged by interacting entities), as well as assumptions made by the algorithm designer (e.g. the network-delay model). Thus, the distributed algorithm can be changed simply by module replacement and reconfiguration. It provides predictions of response time and other performance measures as a function of offered load, and allows tracing of the progress of each message/event through the system, to facilitate model validation and analysis.

4.4. NEST

The Network Simulation Testbed (NEST) is a graphical environment for simulation and rapid prototyping of distributed algorithms, developed at Columbia University and available to the public. NEST is embedded within a standard UNIX® environment. Multiple threads of execution are supported within a single UNIX process. The overhead associated with context-switching is thus significantly reduced.
Therefore NEST can support large simulations (from scores to hundreds or even thousands of nodes), making it especially attractive in our case, where scalability has to be shown. In NEST there are no built-in functions for statistics gathering, but since the user can easily program the functions that run on nodes (using NEST's node-functions) and pass data over links (using NEST's channel-functions), relevant statistics can easily be gathered.

4.5. Model Validation

The simulation model and associated tool were validated against some analytic models (queuing theory models). In addition, the results of published load sharing algorithms were reproduced. These are described next.

Queuing Theory Models

As a first step towards validating our model, first with CNCSIM and later with NEST, we tried reproducing the results of very simple queuing models under some simplifying assumptions. No cooperation is tried first: this is the simplest case with which to show that the simulator is valid. A few cases were run (under CNCSIM and NEST) and compared successfully to M/M/1 results. The other extreme of full (ideal) cooperation is represented by the M/M/n model. This model implicitly (and unrealistically) assumes accurate system state information at no cost. A few cases were run and compared successfully to M/M/n results.

To further illustrate our method, we select an example from the literature [10]. Results have been reproduced by both CNCSIM and NEST. This is discussed next.

4.6. Production of Initial Results

Reproduction of earlier results was important for gaining more confidence in the package selected to run the algorithm under study. The results published for FLS in [10] were produced using CNCSIM. Later on, CONIC [12] was replaced by REX [7], which was recently replaced by Regis [13]. It had to be decided whether to convert CNCSIM to each of these distributed programming environments or to move to a more independent and general environment.
The second option was chosen; this is discussed in the next section.

In order to be able to reproduce results, we need a detailed description of all inputs and assumptions made. This enables the reproduction of the output. So that the reader has a complete example, we include a detailed description of the required information, allowing the process to be easily repeated in other environments.

Assumptions

The following assumptions are used throughout our analysis. Applications are processed in first-come-first-served order and are run to completion. A system is first analyzed subject to an even load on its nodes (Table 1). A simulated system of 5 evenly loaded nodes consists of 4 node types. Each node type is characterized by CPU demands, arrival rates and application size distribution functions. For a larger system, all we need to do is multiply the number of nodes of each type as appropriate. We also study the case of an uneven or extreme load imposed on the system (Table 2). Both cases have the same overall load.

Each node in the system has a communication co-processor, and is also equipped with a hard disk. Delay in the network is modeled as a function of information size: the size of the information to be sent, divided by the packet size (1K bits), multiplied by the average delay per packet. Application size is taken from the distribution function described in Table 3. Packet delay, i.e. preparation (packaging), transmission and reception (unpackaging), is assumed to be 10 ms, and the basic cost per invocation for these very simple algorithms is assumed to be 10 ms. These algorithms involve internal load state observation. The cost associated with such observation or preprocessing of state information is assumed to be 10 ms. An internal state observation is taken over the last second, every 200 ms. Note that when conducting such a study, the ratio between the requested service time and the various delays is more important than the absolute numbers.
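Under these assumptions the network delay model can be written down directly; a small sketch (the constant names are ours):

```python
PACKET_SIZE_BITS = 1024   # 1K-bit packets, as assumed above
PACKET_DELAY_S = 0.010    # 10 ms per packet: preparation, transmission, reception

def network_delay(info_size_bits):
    """Network delay as a function of information size: the number of packets
    (information size divided by packet size) times the average delay per packet."""
    packets = info_size_bits / PACKET_SIZE_BITS
    return packets * PACKET_DELAY_S
```

For example, a 4096-bit state-update message occupies 4 packets and is therefore delayed 40 ms under this model.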
Also, the load assumptions are drawn from exponential distributions, which are not always realistic. Nevertheless, from this study we are able to draw some useful conclusions.

              #nodes of   CPU requested        arrival rate     resulting intensity
              this type   (sec)                (per sec)        per node
node-type 1       1       exponential(0.5)     Poisson(1.53)    76.5%
node-type 2       2       exponential(0.7)     Poisson(1.25)    87.5%
node-type 3       1       exponential(0.6)     Poisson(1.43)    85.8%
node-type 4       1       exponential(0.5)     Poisson(1.43)    71.5%
overall load                                                    81.76%

Table 1: Even load (71.5-87.5%)

              #nodes of   CPU requested        arrival rate     resulting intensity
              this type   (sec)                (per sec)        per node
node-type 1       4       exponential(0.695)   Poisson(1.43)    99.3%
node-type 2       1       exponential(0.093)   Poisson(1.25)    11.6%
overall load                                                    81.76%

Table 2: Uneven load (11.6-99.3%)

percentile   size     percentile   size     percentile   size
    10        1000        70       18000        95       34000
    20       12000        80       20000        98       38000
    40       14000        85       22000        99       44000
    60       16000        90       30000      99.5       50000

Table 3: Application size distribution

4.7. Reproduction of Published Results with NEST

In NEST, the monitor process is responsible for gathering global system statistics (for initialization-bias detection, the batch independence test and confidence-interval calculation) and for simulation termination. A predefined number of observations is set to constitute a single batch. As in CNCSIM, the statistical tests may be performed only when all the nodes of the system have accumulated the required number of batches. The statistical tests could be easily embedded in the NEST simulation environment, resulting in a simple yet powerful method of automatic control over