Supporting Software Distributed Shared Memory with an Optimizing Compiler
In the realm of computer science and programming, state variables serve as fundamental building blocks for modeling systems and processes that evolve over time. They embody the essence of dynamic behavior in software applications, enabling developers to capture and manipulate various aspects of an object's or system's condition at any given moment. This essay delves into the concept of state variables from multiple perspectives, providing a detailed definition, discussing their roles and significance, examining their implementation across various programming paradigms, exploring their impact on program design, and addressing the challenges they introduce.

**Definition of State Variables**

At its core, a state variable is a named data item within a program or computational system that maintains a value that may change over the course of program execution. It represents a specific aspect of the system's state, which is the overall configuration or condition that determines its behavior and response to external stimuli. The following key characteristics define state variables:

1. **Persistence:** State variables retain their values throughout the lifetime of an object or a program's execution, unless explicitly modified. These variables hold onto information that persists beyond a single function call or statement execution.
2. **Mutability:** State variables are inherently mutable, meaning their values can be altered by program instructions. This property allows programs to model evolving conditions or track changes in a system over time.
3. **Contextual Dependency:** The value of a state variable is dependent on the context in which it is accessed, typically determined by the object or scope to which it belongs. This context sensitivity ensures encapsulation and prevents unintended interference with other parts of the program.
4. **Time-variant Nature:** State variables reflect the temporal dynamics of a system, capturing how its properties or attributes change in response to internal operations or external inputs. They allow programs to model systems with non-static behaviors and enable the simulation of real-world scenarios with varying conditions.

**Roles and Significance of State Variables**

State variables play several critical roles in software development, contributing to the expressiveness, versatility, and realism of programs:

1. **Modeling Dynamic Systems:** State variables are instrumental in simulating real-world systems with changing states, such as financial transactions, game characters, network connections, or user interfaces. By representing the relevant attributes of these systems as state variables, programmers can accurately model complex behaviors and interactions over time.
2. **Enabling Data Persistence:** In many applications, maintaining user preferences, application settings, or transaction histories is crucial. State variables facilitate this persistence by storing and updating relevant data as the program runs, ensuring that users' interactions and system events leave a lasting impact.
3. **Supporting Object-Oriented Programming:** In object-oriented languages, state variables (often referred to as instance variables) form an integral part of an object's encapsulated data. They provide the internal representation of an object's characteristics, allowing objects to maintain their unique identity and behavior while interacting with other objects or the environment.
4. **Facilitating Concurrency and Parallelism:** State variables underpin the synchronization and coordination mechanisms in concurrent and parallel systems. They help manage shared resources, enforce mutual exclusion, and ensure data consistency among concurrently executing threads or processes.

**Implementation Across Programming Paradigms**

State variables find expression in various programming paradigms, each with its own idiomatic approach to managing and manipulating them:

1. **Object-Oriented Programming (OOP):** In OOP languages like Java, C++, or Python, state variables are typically declared as instance variables within a class. They are accessed through methods (getters and setters), ensuring encapsulation and promoting a clear separation of concerns between an object's internal state and its external interface.
2. **Functional Programming (FP):** Although FP emphasizes immutability and statelessness, state management is still necessary in practical applications. FP languages like Haskell, Scala, or Clojure often employ monads (e.g., the State monad) or algebraic effects to model stateful computations in a pure, referentially transparent manner. These constructs encapsulate state changes within higher-order functions, preserving the purity of the underlying functional model.
3. **Imperative Programming:** In imperative languages like C or JavaScript, state variables are directly manipulated through assignment statements. Control structures (e.g., loops and conditionals) often rely on modifying state variables to drive program flow and decision-making.
4. **Reactive Programming:** Reactive frameworks like React or Vue.js utilize state variables (e.g., component state) to manage UI updates in response to user interactions or data changes. These frameworks provide mechanisms (e.g., setState() in React) to handle state transitions and trigger efficient UI re-rendering.

**Impact on Program Design**

The use of state variables significantly influences program design, both positively and negatively:

1. **Modularity and Encapsulation:** Well-designed state variables promote modularity by encapsulating relevant information within components, objects, or modules. This encapsulation enhances code organization, simplifies maintenance, and facilitates reuse.
2. **Complexity Management:** While state variables enable rich behavioral modeling, excessive or poorly managed state can lead to complexity spirals. Convoluted state dependencies, hidden side effects, and inconsistent state updates can make programs difficult to understand, test, and debug.
3. **Testing and Debugging:** State variables introduce a temporal dimension to program behavior, necessitating thorough testing across different states and input scenarios. Techniques like unit testing, property-based testing, and state-machine testing help validate state-related logic. Debugging tools often provide features to inspect and modify state variables at runtime, aiding in diagnosing issues.
4. **Concurrency and Scalability:** Properly managing shared state is crucial for concurrent and distributed systems. Techniques like lock-based synchronization, atomic operations, or software transactional memory help ensure data consistency and prevent race conditions (see the sketch that follows this list). Alternatively, architectures like event-driven or actor-based systems minimize shared state and promote message passing for improved scalability.
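As a minimal illustration of encapsulation and safe concurrent updates, the following generic Java sketch (not drawn from any system named in this essay; all class and method names are invented) keeps a mutable state variable private behind methods and uses an atomic update so that concurrent increments are not lost:

```java
import java.util.concurrent.atomic.AtomicLong;

// A request counter whose state variable is shared by many threads.
public class RequestCounter {
    // Encapsulated state variable: mutable, and persistent for the object's lifetime.
    private final AtomicLong count = new AtomicLong();

    // Atomic update avoids a lost-update race without an explicit lock.
    public void recordRequest() {
        count.incrementAndGet();
    }

    // Read access goes through a method, keeping the representation private.
    public long total() {
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        RequestCounter counter = new RequestCounter();
        Runnable worker = () -> {
            for (int i = 0; i < 10_000; i++) counter.recordRequest();
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.total()); // 20000, regardless of interleaving
    }
}
```

Replacing the AtomicLong with a plain long field and unsynchronized increments would reintroduce exactly the race conditions that the techniques above are meant to prevent.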
**Challenges and Considerations**

Despite their utility, state variables pose several challenges that programmers must address:

1. **State Explosion:** As programs grow in size and complexity, the number of possible state combinations can increase exponentially, leading to a phenomenon known as state explosion. Techniques like state-space reduction, model checking, or static analysis can help manage this complexity.
2. **Temporal Coupling:** State variables can introduce temporal coupling, where the correct behavior of a piece of code depends on the order or timing of state changes elsewhere in the program. Minimizing temporal coupling through decoupled designs, immutable data structures, or functional reactive programming can improve code maintainability and resilience.
3. **Caching and Performance Optimization:** Managing state efficiently is crucial for performance-critical applications. Techniques like memoization, lazy evaluation, or cache-invalidation strategies can optimize state access and updates without compromising correctness.
4. **Debugging and Reproducibility:** Stateful programs can be challenging to debug due to their non-deterministic nature. Logging, deterministic replay systems, or snapshot-based debugging techniques can help reproduce and diagnose issues related to state management.

In conclusion, state variables are an indispensable concept in software engineering, enabling programmers to model dynamic systems, maintain data persistence, and implement complex behaviors. Their proper utilization and management are vital for creating robust, scalable, and maintainable software systems. While they introduce challenges such as state explosion, temporal coupling, and debugging complexities, a deep understanding of state variables and their implications for program design can help developers harness their power effectively, ultimately driving innovation and progress in the field of computer science.
University of Wisconsin-Madison
Zhou Yulong, 1101213442, Computer Applications

Overview: The University of Wisconsin is located in Madison, the capital of the state of Wisconsin on the western shore of Lake Michigan, and has a picturesque campus. Founded in 1848, it is a university with a history of more than 150 years. The University of Wisconsin is one of the top three public universities in the United States and one of the top ten research universities in the country. In the United States it is often regarded as a "public Ivy." Like the University of California, the University of Texas, and other well-known American public universities, the University of Wisconsin is a system made up of several state universities, the University of Wisconsin System. In undergraduate education it ranks third among public universities, behind UC Berkeley and the University of Michigan; it also ranks eighth among American universities for the quality of its undergraduate education. According to the National Research Council, the University of Wisconsin has 70 subjects ranked in the top ten nationwide. In the Shanghai Jiao Tong University rankings it is placed 16th among world universities. The University of Wisconsin is one of the 60 members of the Association of American Universities.

Notable programs: The University of Wisconsin-Madison offers more than 100 undergraduate majors, over half of which also award master's and doctoral degrees. Journalism, biochemistry, botany, chemical engineering, chemistry, civil engineering, computer science, earth sciences, English, geography, physics, economics, German, history, linguistics, mathematics, business administration (MBA), microbiology, molecular biology, mechanical engineering, philosophy, Spanish, psychology, political science, statistics, sociology, zoology, and many other disciplines have considerable research and teaching strength, and most of them rank in the top ten of their fields among American universities.

Academic distinction: In terms of academic honors, faculty and alumni of the Madison campus have so far won seventeen Nobel Prizes and twenty-four Pulitzer Prizes; fifty-three faculty members are members of the National Academy of Sciences, seventeen are members of the National Academy of Engineering, and five belong to the National Academy of Education. In addition, nine faculty members have won the National Medal of Science, six are Searle Scholars, and four have received MacArthur Fellowships. Although the Madison campus is best known for agriculture and the life sciences, a striking attraction, and the biggest draw for many communication students who come to study there, is Jack McLeod, who teaches in the School of Journalism and Mass Communication and is known in the field as a master of modern American communication research.
Donating Items to Local Charitable Organizations

Charitable organizations play a vital role in supporting individuals and communities in need. These organizations rely on the generosity of donors to provide essential services and resources to those who may be struggling with poverty, homelessness, or other challenges. As members of our local community, we all have the opportunity to make a positive impact by donating items to these organizations.

One of the primary benefits of donating to local charitable organizations is the direct impact it can have on the lives of those in need. When we donate items such as clothing, household goods, or non-perishable food, we are directly contributing to the well-being of our neighbors. These donations can provide warmth, comfort, and nourishment to those who may not have access to these basic necessities.

Moreover, donating to local charitable organizations can be a highly efficient way to support our community. These organizations often have well-established distribution networks and partnerships with other local organizations, ensuring that the donations reach the individuals and families who need them most. By donating through these channels, we can be confident that our contributions are making a meaningful difference.

One of the most common ways to donate to local charitable organizations is by contributing gently used clothing and household items. Many organizations, such as Goodwill, the Salvation Army, and local shelters, accept donations of clothing, furniture, toys, and other household goods. These items can then be distributed to individuals and families in need, or sold in thrift stores to generate funds for the organization's programs and services.

In addition to clothing and household items, local charitable organizations often accept donations of non-perishable food items. Food banks and pantries play a crucial role in addressing food insecurity within our communities. By donating canned goods, dry foods, and other non-perishable items, we can help ensure that families and individuals have access to the nourishment they need.

Another way to support local charitable organizations is by donating personal care items and hygiene products. These items, such as toothpaste, soap, shampoo, and feminine products, are often in high demand but can be difficult for individuals and families to afford. By donating these essential items, we can help alleviate the burden on those who may be struggling to meet their basic needs.

When it comes to donating to local charitable organizations, it's important to consider the specific needs of the organizations and the individuals they serve. Many organizations maintain wish lists or have specific guidelines for the types of items they accept. It's crucial to research the organizations in our local area and to align our donations with their current needs.

In addition to physical donations, monetary contributions can also be a valuable way to support local charitable organizations. Many organizations rely on financial donations to fund their programs, pay staff, and cover operational expenses. By making a monetary donation, we can help ensure that these organizations have the resources they need to continue their important work.

In conclusion, donating items to local charitable organizations is a powerful way to make a positive impact on our community. Whether it's contributing clothing, household goods, non-perishable food, or personal care items, our donations can provide essential support to those in need. By aligning our donations with the specific needs of local organizations, we can ensure that our contributions are making a meaningful difference in the lives of our neighbors. By embracing the spirit of generosity and community, we can all play a role in creating a more compassionate and equitable society.
In an era where globalization has woven the world into a tightly interconnected web, the concept of sharing a common future for all humanity has become more relevant than ever. The idea of a shared future transcends geographical boundaries, cultural differences, and political ideologies, emphasizing the need for collective responsibility and mutual cooperation.

The Importance of a Shared Future

1. Economic Interdependence: The global economy is a prime example of how nations are interconnected. The rise of multinational corporations and international trade has made it clear that the prosperity of one nation can significantly impact others. A shared future in economic terms means working towards policies that promote fair trade, reduce poverty, and ensure that the benefits of economic growth are distributed equitably.
2. Environmental Sustainability: Climate change is a global challenge that requires a global response. The shared future in this context involves committing to sustainable practices, reducing carbon emissions, and investing in renewable energy sources. It is about ensuring that the planet remains habitable for future generations, regardless of their nationality.
3. Cultural Exchange: The exchange of cultural practices, ideas, and values enriches societies and fosters understanding among different peoples. A shared future in cultural terms means embracing diversity and promoting dialogue that respects and learns from different traditions and perspectives.
4. Technological Advancement: Technology has the power to transform lives and societies. A shared future in this regard is about ensuring that technological advancements are accessible to all, reducing the digital divide and using technology as a tool for education, healthcare, and social development.
5. Peace and Security: The pursuit of a peaceful and secure world is fundamental to a shared future. This involves addressing the root causes of conflicts, promoting diplomacy over violence, and ensuring that international laws and norms are respected.

Challenges to a Shared Future

1. Inequality: Economic, social, and political inequalities pose a significant challenge to the idea of a shared future. These disparities can lead to social unrest and hinder cooperation among nations.
2. Nationalism and Protectionism: The rise of nationalistic sentiments and protectionist policies can create barriers to international cooperation and hinder efforts towards a shared future.
3. Lack of Access to Education and Healthcare: In many parts of the world, access to basic services like education and healthcare is limited, which can perpetuate cycles of poverty and hinder social mobility.
4. Environmental Degradation: The overexploitation of natural resources and disregard for environmental conservation threaten the sustainability of our planet, posing a significant challenge to a shared future.

The Role of Individuals and Governments

1. Individual Responsibility: Each person has a role to play in shaping a shared future. This can be through making conscious choices about consumption, supporting social causes, and advocating for policies that promote a fair and sustainable world.
2. Governmental Initiatives: Governments must take the lead in formulating and implementing policies that address global challenges. This includes investing in education, healthcare, and infrastructure, and working with international partners to tackle issues like climate change and poverty.
3. International Cooperation: International organizations play a crucial role in facilitating dialogue and cooperation among nations. They can help to coordinate efforts and provide a platform for nations to work together towards common goals.

In conclusion, the concept of a shared future for all humanity is not merely an idealistic vision but a practical necessity in our interconnected world. It requires a commitment to collaboration, understanding, and the recognition that the wellbeing of one is intrinsically linked to the wellbeing of all. By working together, we can overcome the challenges that face us and build a future that is sustainable, equitable, and prosperous for all.
Primary School English, Book One, Unit 2 Test
Part I. Comprehensive questions (100 items, 1 point each, 100 points in total; unanswered or incorrect items receive no credit)

1. I think friendship is one of the greatest gifts. Friends support each other through thick and thin. I'm grateful for my friend __________, who always knows how to cheer me up.
2. My dog loves to fetch the ______ (ball).
3. A ______ is a large area of elevated land with a flat top.
4. I enjoy _______ (playing sports) with my friends.
5. The ______ is always smiling.
6. The chemical formula for calcium hydroxide is ______.
7. What is the name of the famous landmark in Sydney? A. Opera House B. Harbour Bridge C. Bondi Beach D. Uluru (Answer: A)
8. The chemical formula for sodium acetate is _______.
9. The chemical symbol for argon is _______.
10. I love to listen to ______ (music) while I study.
11. My mom bought me a new ________ (slide) for the backyard. I can slide down ________ (very fast).
12. Acids taste ______.
13. What is the main ingredient in a salad? A. Meat B. Vegetables C. Grains D. Fruit (Answer: B)
14. Which instrument has keys? A. Guitar B. Drums C. Piano D. Flute (Answer: C)
15. A ______ (warm climate) benefits many flowers.
16. Which shape has three sides? A. Square B. Rectangle C. Triangle D. Circle (Answer: C)
17. What animal is known as "man's best friend"? A. Cat B. Bird C. Dog D. Fish (Answer: C)
18. The process of breaking down food involves __________.
19. I have _____ (two) pets.
20. In ____, America declared its _______ from Britain.
21. What is the main ingredient in bread? A. Sugar B. Flour C. Yeast D. Water (Answer: B)
22. What is the name of the famous structure in Egypt that was built as a tomb? A. Great Wall B. Colosseum C. Pyramids D. Parthenon (Answer: C)
23. The ______ (nature) has many wonders.
24. The _______ of sound can be perceived in different ways by different people.
25. The ______ shows the relationship between animals and plants.
26. My family has a ______ pet.
The Future of Work Gig Economy andRemote WorkThe future of work is rapidly evolving with the rise of the gig economy and remote work. This trend has been accelerated by the global pandemic, which forced many companies to quickly adapt to remote work arrangements. As we look ahead,it's clear that the traditional 9-to-5 office setup is no longer the only viable option for many workers. Instead, the gig economy and remote work are shaping the way we approach employment and professional opportunities. This shift presentsboth exciting possibilities and significant challenges for workers, companies, and society as a whole. From the perspective of workers, the gig economy and remote work offer unprecedented flexibility and autonomy. Freelancers and independent contractors have the freedom to choose their own projects, set their own schedules, and work from anywhere with an internet connection. This level of control overone's work life can lead to greater job satisfaction and work-life balance. Additionally, remote work eliminates the daily commute, reducing stress andallowing workers to reclaim valuable time that would have otherwise been spent in traffic or public transportation. This newfound flexibility has the potential to reshape the traditional notion of work and provide individuals with theopportunity to craft a career that aligns with their personal values andpriorities. However, the gig economy and remote work also present challenges for workers, particularly in terms of job security and benefits. Unlike traditionalfull-time employment, gig workers often lack access to employer-sponsored healthcare, retirement plans, and other essential benefits. Furthermore, the fluctuating nature of gig work means that income can be unpredictable, making financial planning and stability more challenging. Remote work can also blur the boundaries between professional and personal life, leading to potential burnoutand isolation if not managed effectively. As the workforce becomes increasingly decentralized, it's crucial to address these issues and ensure that all workers have access to the support and resources they need to thrive in this new landscape. From the perspective of companies, the gig economy and remote work present opportunities to access a broader talent pool and reduce overhead costs. By hiringfreelancers and remote workers, companies can tap into a global network of diverse skills and expertise without the constraints of geographical location. This not only fosters innovation and creativity but also allows businesses to scale more efficiently. Additionally, remote work arrangements can lead to increased productivity and employee retention, as workers appreciate the flexibility and freedom to tailor their work environment to their individual needs. Embracing the gig economy and remote work can position companies to stay competitive in arapidly changing business environment. However, companies also face challenges in managing and supporting a distributed workforce. Communication and collaboration can become more complex in a remote setting, requiring intentional efforts to maintain a strong company culture and cohesive team dynamics. Additionally, ensuring data security and compliance with labor laws across different regions can be a daunting task for companies operating in the gig economy. 
As the nature of work continues to evolve, businesses must invest in the infrastructure and technology necessary to effectively support and empower remote workers while upholding the values and integrity of the organization. Societally, the gig economy and remote work have the potential to reshape the dynamics of urbanization and economic opportunity. Remote work allows individuals to reside in locations outside of major metropolitan areas, easing the strain on infrastructure and potentially mitigating issues related to urban overcrowding. This could lead to more balanced regional development and reduced pressure on housing markets inlarge cities. Additionally, the gig economy creates opportunities for individuals who may have faced barriers to traditional employment, such as stay-at-home parents, individuals with disabilities, or those living in underserved communities. However, it's crucial to address the potential downsides of remote work, such as exacerbating inequalities in access to technology and furthering the divide between those who can work remotely and those who cannot. In conclusion, thefuture of work in the gig economy and remote work opens up a world ofpossibilities for workers, companies, and society at large. However, it alsobrings forth a unique set of challenges that must be carefully navigated. As we embrace this new era, it's essential to prioritize the well-being and inclusivity of all members of the workforce, while also harnessing the potential forinnovation and progress. By approaching this shift with empathy and foresight, we can create a future of work that is truly fulfilling, sustainable, and equitable.。
New vocabulary

Swim-lane design: a data-flow interaction diagram.

JIRA: an issue- and project-tracking tool from Atlassian, widely used for defect tracking, customer service, requirements gathering, approval workflows, task tracking, project tracking, and agile management. JIRA is flexible to configure, full-featured, simple to deploy, and rich in extensions.

Heartbeat: a component of the Linux-HA project that implements a high-availability cluster system. The heartbeat service and cluster communication are the two key components of a high-availability cluster; in the Heartbeat project, both functions are implemented by the heartbeat module. The material below describes the heartbeat module's reliable messaging mechanism and introduces the principles behind its implementation.

A: Active-matrix; Adapter cards; Advanced application; Analytical graph; Analyze; Animations; Application software; Arithmetic operations; Audio-output device; Access time; Access; Accuracy; Ad network cookies; Administrator; Add-ons; Address; Agents; Analog signals; Applets; Asynchronous communications port; Attachment; AGP (accelerated graphics port); ALU (arithmetic-logic unit); AAT (average access time); ACL (access control lists); ACK (acknowledgement character); ACPI (Advanced Configuration and Power Interface); ADC (analog-to-digital converter); ADSL (asymmetric digital subscriber line); ADT (abstract data type); AI (artificial intelligence); AIFF (Audio Interchange File Format); AM (amplitude modulation); ANN (artificial neural network); ANSI (American National Standards Institute); API (application programming interface); APPN (Advanced Peer-to-Peer Networking); ARP (Address Resolution Protocol); ARPG (action role-playing game); ASCII (American Standard Code for Information Interchange); ASP (Active Server Page); ASP (application service provider); AST (average seek time); ATM (asynchronous transfer mode); ATR (automatic target recognition); AVI (Audio Video Interleaved); Algorithm.

B: Bar code; Bar code reader; Basic application; Beta testing: a form of acceptance testing.
DATA SHEET: M20 Internet Backbone Router

The M20 router's compact design offers tremendous performance and port density. The M20 router has a rich feature set that includes numerous advantages:

- Route lookup rates in excess of 40 Mpps for wire-rate forwarding performance
- Aggregate throughput capacity exceeding 20 Gbps
- Performance-based packet filtering, rate limiting, and sampling with the Internet Processor II ASIC
- Redundant System and Switch Board and redundant Routing Engine
- Market-leading port density and flexibility
- Production-proven routing software with Internet-scale implementations of BGP4, IS-IS, OSPF, MPLS traffic engineering, class of service, and multicasting applications

The M20 Internet backbone router is a high-performance routing platform built for a variety of Internet applications, including high-speed access, public and private peering, hosting sites, and backbone core networks. The M20 router leverages proven M-series ASIC technology to deliver wire-rate performance and rich packet processing, such as filtering, sampling, and rate limiting. It runs the same JUNOS Internet software and shares the same interfaces that are supported by the M40 Internet backbone router, providing a seamless upgrade path that protects your investment. Moreover, its compact design (14 in / 35.56 cm high) delivers market-leading performance and port density while consuming minimal rack space. The M20 router offers wire-rate performance, advanced features, internal redundancy, and scalability in a space-efficient package.

"It [JUNOS software] dramatically increases our confidence that we will have access to technology to keep scaling along with what the demands on the network are. We can keep running." (Michael O'Dell, Chief Scientist, UUNET Technologies, Inc.)

Architecture

The two key components of the M20 architecture are the Packet Forwarding Engine (PFE) and the Routing Engine, which are connected via a 100-Mbps link. Control traffic passing through the 100-Mbps link is prioritized and rate limited to help protect against denial-of-service attacks.

- The PFE is responsible for packet forwarding performance. It consists of the Flexible PIC Concentrators (FPCs), physical interface cards (PICs), System and Switch Board (SSB), and state-of-the-art ASICs.
- The Routing Engine maintains the routing tables and controls the routing protocols. It consists of an Intel-based PCI platform running JUNOS software.

The architecture ensures industry-leading service delivery by cleanly separating the forwarding performance from the routing performance. This separation ensures that stress experienced by one component does not adversely affect the performance of the other, since there is no overlap of required resources.

Leading-edge ASICs

The feature-rich M20 ASICs deliver a comprehensive hardware-based system for packet processing, including route lookups, filtering, sampling, rate limiting, load balancing, buffer management, switching, encapsulation, and de-encapsulation functions. To ensure a non-blocking forwarding path, all channels between the ASICs are oversized, dedicated paths.

Internet Processor and Internet Processor II ASICs

The Internet Processor ASIC, which was originally deployed with M20 routers, supports an aggregated lookup rate of over 40 Mpps. An enhanced version, the Internet Processor II ASIC, supports the same 40 Mpps lookup rate. With over one million gates, this ASIC delivers predictable, high-speed forwarding performance with service flexibility, including filtering and sampling. The Internet Processor II ASIC is the largest, fastest, and most advanced ASIC ever implemented on a router platform and deployed in the Internet.

Distributed Buffer Manager ASICs

The Distributed Buffer Manager ASICs allocate incoming data packets throughout shared memory on the FPCs. This single-stage buffering improves performance by requiring only one write to and one read from shared memory. There are no extraneous steps of copying packets from input buffers to output buffers. The shared memory is completely non-blocking, which in turn prevents head-of-line blocking.

I/O Manager ASICs

Each FPC is equipped with an I/O Manager ASIC that supports wire-rate packet parsing, packet prioritizing, and queuing. Each I/O Manager ASIC divides the packets, stores them in shared memory (managed by the Distributed Buffer Manager ASICs), and re-assembles the packets for transmission.

Media-specific ASICs

The media-specific ASICs perform physical-layer functions, such as framing. Each PIC is equipped with an ASIC or FPGA that performs control functions tailored to the PIC's media type.

Packet Forwarding Engine

The PFE provides Layer 2 and Layer 3 packet switching, route lookups, and packet forwarding. The Internet Processor II ASIC forwards an aggregate of up to 40 Mpps for all packet sizes. The aggregate throughput is 20.6 Gbps half-duplex. The PFE supports the same ASIC-based features supported by all other M-series routers. For example, class-of-service features include rate limiting, classification, priority queuing, Random Early Detection, and Weighted Round Robin to increase bandwidth efficiency. Filtering and sampling are also available for restricting access, increasing security, and analyzing network traffic. Finally, the PFE delivers maximum stability during exceptional conditions, while also providing a significantly lower part count. This stability reduces power consumption and increases mean time between failures.

(Figure: logical view of the M20 architecture, showing the Packet Forwarding Engine.)

Flexible PIC Concentrators

The FPCs house PICs and connect them to the rest of the PFE. There is a dedicated, full-duplex, 3.2-Gbps channel between each FPC and the core of the PFE. You can insert up to four FPCs in an M20 chassis. Each FPC slot supports one FPC or one OC-48c/STM-16 PIC. Each FPC supports up to four of the other PICs in any combination, providing unparalleled interface density and configuration flexibility. Each FPC contains shared memory for storing the data packets received; the Distributed Buffer Manager ASICs on the SSB manage this memory. In addition, the FPC houses the I/O Manager ASIC, which performs a variety of queue management and class-of-service functions.

Physical Interface Cards

PICs provide a complete range of fiber-optic and electrical transmission interfaces to the network. The M20 router offers flexibility and conserves rack space by supporting a wide variety of PICs and port densities. All PICs occupy one of four PIC spaces per FPC except for the OC-48c/STM-16 PIC, which occupies an entire FPC slot. An additional Tunnel Services PIC enables the M20 router to function as the ingress or egress point of an IP-IP unicast tunnel, a Cisco generic routing encapsulation (GRE) tunnel, or a Protocol Independent Multicast - Sparse Mode (PIM-SM) tunnel. For a list of available PICs, see the M-series Internet Backbone Routers Physical Interface Cards datasheet.

System and Switch Board

The SSB performs route lookup, filtering, and sampling, as well as provides switching to the destination FPC. Hosting both the Internet Processor II ASIC and the Distributed Buffer Manager ASICs, the SSB makes forwarding decisions, distributes data cells throughout memory, processes exception and control packets, monitors system components, and controls FPC resets. You can have one or two SSBs, ensuring automatic failover to a redundant SSB in case of failure.

Routing Engine

The Routing Engine maintains the routing tables and controls the routing protocols, as well as the JUNOS software processes that control the router's interfaces, the chassis components, system management, and user access to the router. These routing and software processes run on top of a kernel that interacts with the PFE.

- The Routing Engine processes all routing protocol updates from the network, so PFE performance is not affected.
- The Routing Engine implements each routing protocol with a complete set of Internet features and provides full flexibility for advertising, filtering, and modifying routes. Routing policies are set according to route parameters, such as prefixes, prefix lengths, and BGP attributes.

You can install a redundant Routing Engine to ensure maximum system availability and to minimize MTTR in case of failure.

JUNOS Internet Software

JUNOS software is optimized to scale to large numbers of network interfaces and routes. The software consists of a series of system processes running in protected memory on top of an independent operating system. The modular design improves reliability by protecting against system-wide failure, since the failure of one software process does not affect other processes.

(Figure: M20 router front and back views; the chassis is 14 in high.)
SOLUTION BRIEF: Implementing VMware's Virtual SAN with Micron SSDs and the Mellanox Interconnect Solution

This document discusses an implementation of VMware Virtual SAN (VSAN) that supports the storage requirements of a VMware Horizon View Virtual Desktop (VDI) environment. Although VDI was used to benchmark the performance of this Virtual SAN implementation, any application supported by ESXi 5.5 can be used. VSAN is VMware's hypervisor-converged storage software that creates a shared datastore across SSDs and HDDs using multiple x86 server hosts. To measure VDI performance, the Login VSI workload-generator software tool was used to test performance when using Horizon View. VDI performance is measured as the number of virtual desktops that can be hosted while delivering a user experience equal to or better than a physical desktop, including consistent, fast response times and a short boot time. Supporting more desktops per server reduces CAPEX and OPEX requirements.

Benefits of Virtual SAN

Data storage in VMware ESX environments has historically been supported using NAS- or SAN-connected shared storage from vendors such as EMC, NetApp, and HDS. These products often have considerable CAPEX requirements, and because they need specially trained personnel to support them, OPEX increases as well. VSAN eliminates the need for NAS- or SAN-connected shared storage by using SSDs and HDDs attached locally to the servers in the cluster. A minimum of three servers is required in order to survive a server failure. Information is protected from storage-device failure by replicating data on multiple servers. A dedicated network connection between the servers provides low-latency storage transactions.

(Figure 1. Virtual SAN)

SSDs Boost Virtual SAN Performance

Application performance is often constrained by storage. Flash-based SSDs reduce delays (latency) when reading or writing data to hard drives, thereby boosting performance.

READ caching: By caching commonly accessed data in SSDs, READ latency is significantly reduced, because it is faster to retrieve data directly from the cache than from slow, spinning HDDs. Because DRS (1) may cause VMs to move occasionally from one server to another, VSAN does not attempt to store a VM's data on an SSD connected to the server that hosts the VM. This means many READ transactions may need to traverse the network, so high bandwidth and low latency are critical.

(1) VMware's Dynamic Resource Scheduling performs application load balancing every 5 minutes.

WRITE buffering: VSAN temporarily buffers all WRITEs in SSDs to significantly reduce latency. To protect against SSD or server failure, this data is also stored on an SSD located on a different server. At regular intervals, the WRITE data in the SSDs is de-staged to HDDs. Because flash is non-volatile, data that has not been de-staged is retained during a power loss. In the event of a server failure, the copy of the buffered or de-staged data on the other server ensures that no data loss will occur.

Dedicated Network Enables Low Latency for VSAN

Most READ and all WRITE transactions must traverse a network. VSAN does not try to cache data that is local to the application, because doing so results in poor balancing of SSD utilization across the cluster. Because caching is distributed across multiple servers, a dedicated network is required to lower contention for LAN resources. For data redundancy and to enable high availability, data is written to HDDs located on separate servers. Since two traverses across the network are typically required for a READ and one for a WRITE, the latency of the LAN must be sub-millisecond. VMware recommends at least a 10GbE connection.

VSAN-Approved SSD Products

VMware has a compatibility guide specifically listing I/O controllers, SSDs, and HDDs approved for implementing VSAN. Micron's P320h and P420m PCIe HHHL SSD cards are listed (2) in the compatibility guide.

(2) The Virtual SAN compatibility guide is located at https:///resources/compatibility/search.php?deviceCategory=vsan

Tested Configuration

Three servers, each with dual Intel Xeon E5-2680 v2 processors and 384GB of memory, were used for this test. Each server included one disk group consisting of one SSD and six HDDs. Western Digital 1.2TB 10K rpm SAS hard drives were connected using an LSI 9207-8i host bus adapter set to a queue depth of 600. A 1.4TB Micron P420m PCIe card was used for the SSD. A dedicated storage network supporting VSAN used Mellanox's end-to-end 10GbE interconnect solution, including their SX1012 twelve-port 10GbE switch, ConnectX-3 10GbE NICs, and copper interconnect cables. On the software side, ESXi 5.5.0 Build 1623387 and Horizon View 5.3.2 Build 1887719 were used. Within the desktop sessions, Windows 7 64-bit was used. Each persistent desktop used 2GB of memory and one vCPU. VDI performance was measured as the number of virtual desktops that could be hosted while delivering a user experience equal to or better than a physical desktop.

(Figure 2. VSAN test configuration)

Results

Version 4.1.0.757 of the Login VSI load simulator was used for testing. This benchmark creates a workload representative of an office worker using Microsoft Office applications. The number of desktop sessions is steadily increased until a maximum is reached, in this case 450 sessions. Increasing the number of sessions raises the load on the servers and the VSAN-connected storage, which causes response times to lengthen. Based on minimum, average, and maximum response times, the benchmark software calculates VSImax, its recommendation for the maximum number of desktops that can be supported. The following figure shows that, using the three-server configuration, up to 356 desktops can be supported.

(Figure 3. VSAN results)

Other critical factors in VDI environments are the times required to boot, deploy, and recompose desktops. Boot is when an office worker arrives at work and wants to access their desktop. Deploy is the creation of a desktop session, and recompose is the update of an existing session; an update may be required after a patch release has been distributed. For this test, 450 desktops were simultaneously booted, deployed, and recomposed.

- Boot: 0.7 seconds/desktop
- Deployment: 7.5 seconds/desktop
- Recompose: 9.2 seconds/desktop

Conclusion

Software-defined storage appears to be a viable alternative to SAN or NAS storage from our experience using VSAN. Using directly attached SSDs and HDDs can provide superior performance by bypassing the need for shared storage. The VSAN implementation provides the fault tolerance and high availability necessary for enterprise environments, which has historically been the limitation of DAS. Read caching and write buffering using the Micron P420m PCIe SSD sufficiently mask the latency limitations of HDDs, allowing VMs to run at high performance. Since VMs frequently move between servers for load balancing, there is no guarantee that SSDs local to the VM will have cached data. The Mellanox interconnect provides low latencies whenever accessing data between servers is necessary. To evaluate the Micron and Mellanox hardware supporting VSAN, VMware's Horizon View virtual desktop application was implemented. Using the Login VSI workload simulator, 356 desktops were hosted across three servers. This number is comparable to what a SAN- or NAS-connected shared-storage implementation can support, but at a fraction of the cost.

About Login VSI

Login Virtual Session Indexer (Login VSI) is a software tool that simulates realistic user workloads for Horizon View and other major desktop implementations. It is an industry standard for measuring the VDI performance that a software and hardware implementation can support.
JVM for a Heterogeneous Shared Memory System

DeQing Chen, Chunqiang Tang, Sandhya Dwarkadas, and Michael L. Scott
Computer Science Department, University of Rochester

Abstract

InterWeave is a middleware system that supports the sharing of strongly typed data structures across heterogeneous languages and machine architectures. Java presents special challenges for InterWeave, including write detection, data translation, and the interface with the garbage collector. In this paper, we discuss our implementation of J-InterWeave, a JVM based on the Kaffe virtual machine and on our locally developed InterWeave client software. J-InterWeave uses bytecode instrumentation to detect writes to shared objects, and leverages Kaffe's class objects to generate type information for correct translation between the local object format and the machine-independent InterWeave wire format. Experiments indicate that our bytecode instrumentation imposes less than 2% performance cost in Kaffe interpretation mode, and less than 10% overhead in JIT mode. Moreover, J-InterWeave's translation between local and wire format is more than 8 times as fast as the implementation of object serialization in Sun JDK 1.3.1 for double arrays. To illustrate the flexibility and efficiency of J-InterWeave in practice, we discuss its use for remote visualization and steering of a stellar dynamics simulation system written in C.

1 Introduction

Many recent projects have sought to support distributed shared memory in Java [3, 16, 24, 32, 38, 41]. Many of these projects seek to enhance Java's usefulness for large-scale parallel programs, and thus to compete with more traditional languages such as C and Fortran in the area of scientific computing. All assume that application code will be written entirely in Java. Many of them, particularly those based on existing software distributed shared memory (S-DSM) systems, assume that all code will run on instances of a common JVM. That Java has yet to displace Fortran for scientific computing suggests that it is unlikely to do so soon. Even for systems written entirely in Java, it is appealing to be able to share objects across heterogeneous JVMs.
This is possible, of course, using RMI and object serialization, but the resulting performance is poor [6]. The ability to share state across different languages and heterogeneous platforms can also help build scalable distributed services in general. Previous research on various RPC (remote procedure call) systems [21, 29] indicates that caching at the client side is an efficient way to improve service scalability. However, in those systems, caching is mostly implemented in an ad hoc manner, lacking a generalized translation semantics and coherence model.

Our ongoing research project, InterWeave [9, 37], aims to facilitate state sharing among distributed programs written in multiple languages (Java among them) and running on heterogeneous machine architectures. InterWeave applications share strongly typed data structures located in InterWeave segments. Data in a segment is defined using a machine- and platform-independent interface description language (IDL), and can be mapped into the application's local memory with the appropriate InterWeave library calls. Once mapped, the data can be accessed as ordinary local objects.

In this paper, we focus on the implementation of InterWeave support in a Java virtual machine. We call our system J-InterWeave. The implementation is based on an existing implementation of InterWeave for C, and on the Kaffe virtual machine, version 1.0.6 [27]. Our decision to implement InterWeave support directly in the JVM clearly reduces the generality of our work. A more portable approach would implement InterWeave support for segment management and wire-format translation in Java libraries. This portability would come, however, at what we consider an unacceptable price in performance. Because InterWeave employs a clearly defined internal wire format and communication protocol, it is at least possible in principle for support to be incorporated into other JVMs.

We review related work in Java distributed shared state in Section 2 and provide a brief overview of the InterWeave system in Section 3. A more detailed description is available elsewhere [8, 37]. Section 4 describes the J-InterWeave implementation. Section 5 presents the results of performance experiments, and describes the use of J-InterWeave for remote visualization and steering. Section 6 summarizes our results and suggests topics for future research.

2 Related Work

Many recent projects have sought to provide distributed data sharing in Java, either by building customized JVMs [2, 3, 24, 38, 41]; by using pure Java implementations (some of them with compiler support) [10, 16, 32]; or by using Java RMI [7, 10, 15, 28]. However, in all of these projects, sharing is limited to Java applications. To communicate with applications on heterogeneous platforms, today's Java programmers can use network sockets, files, or RPC-like systems such as CORBA [39]. What they lack is a general solution for distributed shared state.
Breg and Polychronopoulos [6] have developed an alternative object serialization implementation in native code, which they show to be as much as eight times faster than the standard implementation. A direct comparison between their results and ours is difficult. Our experiments suggest that J-InterWeave is at least equally fast in the worst-case scenario, in which an entire object is modified. In cases where only part of an object is modified, InterWeave's translation cost and communication bandwidth scale down proportionally, and can be expected to produce a significant performance advantage.

Jaguar [40] modifies the JVM's JIT (just-in-time compiler) to map certain bytecode sequences directly to native machine code and shows that such bytecode rewriting can improve the performance of object serialization. However, the benefit is limited to certain types of objects and comes with an increasing price for accessing object fields. MOSS [12] facilitates the monitoring and steering of scientific applications with a CORBA-based distributed object system. InterWeave instead allows an application and its steerer to share their common state directly, and integrates that sharing with the more tightly coupled sharing available in SMP clusters.

Platform and language heterogeneity can be supported on virtual machine-based systems such as the Sun JVM [23] and .NET [25]. The Common Language Runtime [20] (CLR) under the .NET framework promises support for multi-language application development. In comparison to CLR, InterWeave's goal is relatively modest: we map strongly typed state across languages. CLR seeks to map all high-level language features to a common type system and intermediate language, which in turn implies more semantic compromises for specific languages than are required with InterWeave.

The transfer of abstract data structures was first proposed by Herlihy and Liskov [17]. Shasta [31] rewrites binary code with instrumentation for access checks for fine-grained S-DSM. Midway [4] relies on compiler support to instrument writes to shared data items, much as we do in the J-InterWeave JVM. Various software shared memory systems [4, 19, 30] have been designed to explicitly associate synchronization operations with the shared data they protect in order to reduce coherence costs. Mermaid [42] and Agora [5] support data sharing across heterogeneous platforms, but only for restricted data types.

3 InterWeave Overview

In this section, we provide a brief introduction to the design and implementation of InterWeave. A more detailed description can be found in an earlier paper [8]. For programs written in C, InterWeave is currently available on a variety of Unix platforms and on Windows NT. J-InterWeave is a compatible implementation of the InterWeave programming model, built on the Kaffe JVM.
J-InterWeave allows a Java program to share data across heterogeneous architectures, and with programs in C and Fortran.

The InterWeave programming model assumes a distributed collection of servers and clients. Servers maintain persistent copies of InterWeave segments, and coordinate sharing of those segments by clients. To avail themselves of this support, clients must be linked with a special InterWeave library, which serves to map a cached copy of needed segments into local memory. The servers are the same regardless of the programming language used by clients, but the client libraries may be different for different programming languages. In this paper we will focus on the client side. In the subsections below we describe the application programming interface for InterWeave programs written in Java.

3.1 Data Allocation and Addressing

The unit of sharing in InterWeave is a self-descriptive data segment within which programs allocate strongly typed blocks of memory. A block is a contiguous section of memory allocated in a segment. Every segment is specified by an Internet URL and managed by an InterWeave server running at the host indicated in the URL. Different segments may be managed by different servers. The blocks within a segment are numbered and optionally named. By concatenating the segment URL with a block number/name and offset (delimited by pound signs), we obtain a machine-independent pointer (MIP): "/path#block#offset".

To create and initialize a segment in Java, one can execute the following calls, each of which is elaborated on below or in the following subsections:

```java
IWSegment seg = new IWSegment(url);
seg.wl_acquire();
MyType myobj = new MyType(seg, blkname);
myobj.field = ...;
...
seg.wl_release();
```

In Java, an InterWeave segment is captured as an IWSegment object. Assuming appropriate access rights, the new operation of the IWSegment object communicates with the appropriate server to initialize an empty segment. Blocks are allocated and modified after acquiring a write lock on the segment, described in more detail in Section 3.3. The IWSegment object returned can be passed to the constructor of a particular block class to allocate a block of that particular type in the segment.
Once a segment is initialized, a process can convert between the MIP of a particular data item in the segment and its local pointer by using mip_to_ptr and ptr_to_mip where appropriate. It should be emphasized that mip_to_ptr is primarily a bootstrapping mechanism. Once a process has one pointer into a data structure (e.g., the root pointer in a lattice structure), any data reachable from that pointer can be directly accessed in the same way as local data, even if embedded pointers refer to data in other segments. InterWeave's pointer-swizzling and data-conversion mechanisms ensure that such pointers will be valid local machine addresses or references. It remains the programmer's responsibility to ensure that segments are accessed only under the protection of reader-writer locks.

3.2 Heterogeneity

To accommodate a variety of machine architectures, InterWeave requires the programmer to use a language- and machine-independent notation (specifically, Sun's XDR [36]) to describe the data types inside an InterWeave segment. The InterWeave XDR compiler then translates this notation into type declarations and descriptors appropriate to a particular programming language. When programming in C, the InterWeave XDR compiler generates two files: a .h file containing type declarations and a .c file containing type descriptors. For Java, we generate a set of Java class declaration files. The type declarations generated by the XDR compiler are used by the programmer when writing the application. The type descriptors allow the InterWeave library to understand the structure of types and to translate correctly between local and wire-format representations. The local representation is whatever the compiler normally employs. In C, it takes the form of a pre-initialized data structure; in Java, it is a class object.

3.2.1 Type Descriptors for Java

A special challenge in implementing Java for InterWeave is that the InterWeave XDR compiler needs to generate correct type descriptors and ensure a one-to-one correspondence between the generated Java classes and C structures. In many cases the mappings are straightforward: an XDR struct is mapped to a class in Java and a struct in C, primitive fields to primitive fields in both Java and C, pointer fields to object references in Java and pointers in C, and primitive arrays to primitive arrays. However, certain "semantic gaps" between Java and C force us to make some compromises. For example, a C pointer can point to any place inside a data block, while Java prohibits such liberties for any object reference. Thus, in our current design, we make the following compromises:

- An InterWeave block consisting of a single primitive data item is translated into the corresponding wrapper class for the primitive type in Java (such as Integer, Float, etc.).
- Embedded struct fields in an XDR struct definition are flattened out in Java and mapped as fields of the parent class. In C, they are translated naturally into embedded fields.
- Array types are mapped into a wrapped IW array object.

The programmer-visible interface, including the lock acquire and release operations (wl_acquire, wl_release, rl_acquire, and rl_release), is shown in Figure 2.

```java
public class IWSegment {
    public IWSegment(String URL, Boolean iscreate);
    public native static int RegisterClass(Class type);
    public native static Object mip_to_ptr(String mip);
    public native static String ptr_to_mip(IWObject Object obj);
    ......
    public native int wl_acquire();
    public native int wl_release();
    public native int rl_acquire();
    public native int rl_release();
    ......
}
```

Figure 2: IWSegment class
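To make the interface in Figure 2 concrete, the sketch below shows how a second client might attach to an existing segment and follow a MIP into it. It is illustrative only, under the assumption that the IWSegment methods behave as listed in Figure 2; the segment URL, block name, and the MyType class (borrowed from the allocation example in Section 3.1) are placeholders, and error handling is omitted.

```java
// Hypothetical reader-side client; the URL and type names are placeholders.
public class SegmentReader {
    public static void main(String[] args) {
        // Attach to an existing segment rather than creating a new one.
        IWSegment seg = new IWSegment("iw.example.org/weather", Boolean.FALSE);

        // Reads must be bracketed by a reader lock on the segment.
        seg.rl_acquire();
        try {
            // Bootstrap: convert a machine-independent pointer (MIP) into a
            // local reference, then use it like any other Java object.
            MyType root = (MyType) IWSegment.mip_to_ptr(
                    "iw.example.org/weather#root#0");
            System.out.println(root.field);
        } finally {
            seg.rl_release();
        }
    }
}
```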
4.1.1 JNI Library for IWSegment Class

The native library for the IWSegment class serves as an intermediary between Kaffe and the C InterWeave library. Programmer-visible objects that reside within the IWSegment library are managed in such a way that they look like ordinary Java objects.

As in any JNI implementation, each native method has a corresponding C function that implements its functionality. Most of these C functions simply translate their parameters into C format and call corresponding functions in the C InterWeave API. However, the creation of an InterWeave object and the method RegisterClass need special explanation.

Mapping Blocks to Java Objects. Like ordinary Java objects, InterWeave objects in Java are created by "new" operators. In Kaffe, the "new" operator is implemented directly by the bytecode execution engine. We modified this implementation to call an internal function newBlock in the JNI library; newBlock calls the InterWeave C library to allocate an InterWeave block from the segment heap instead of the Kaffe object heap. Before returning the allocated block back to the "new" operator, newBlock initializes the block so that it can be manipulated correctly by Kaffe.

In Kaffe, each Java object allocated from the Kaffe heap has an object header. This header contains a pointer to the object class and a pointer to its own monitor. Since C InterWeave already assumes that every block has a header (it makes no assumption about the contiguity of separate blocks), we put the Kaffe header at the beginning of what C InterWeave considers the body of the block. A correctly initialized J-InterWeave object is shown in Figure 3.

Figure 3: Block structure in J-InterWeave

After returning from newBlock, the Kaffe engine calls the class constructor and executes any user-customized operations.

Java Class to C Type Descriptor. Before any use of a class in a J-InterWeave segment, including the creation of an InterWeave object of that type, the class object must first be registered with RegisterClass. RegisterClass uses the reflection mechanism provided by the Java runtime system to determine the following information needed to generate the C type descriptor, and passes it to the registration function in the C library:

1. the type of the block, whether it is a structure, array, or pointer;
2. the total size of the block;
3. for structures, the number of fields, each field's offset in the structure, and a pointer to each field's type descriptor;
4. for arrays, the number of elements and a pointer to the element's type descriptor;
5. for pointers, a type descriptor for the pointed-to data.

The registered class objects and their corresponding C type descriptors are placed in a hashtable. newBlock later uses this hashtable to convert a class object into the C type descriptor. The type descriptor is required by the C library to allocate an InterWeave block so that it has the information to translate back and forth between local and wire format (see Section 3).

4.2 Kaffe

J-InterWeave requires modifications to the bytecode interpreter and the JIT compiler to implement fine-grained write detection via instrumentation. It also requires changes to the garbage collector to ensure that InterWeave blocks are not accidentally collected.

Figure 4: Extended Kaffe object header for fine-grained write detection

4.2.1 Write Detection

To support diff-based transmission of InterWeave segment updates, we must identify changes made to InterWeave objects over a given span of time. The current C version of InterWeave, like most S-DSM systems, uses virtual memory traps to identify modified pages, for which it creates pristine copies (twins) that can be compared with the working copy later in order to create a diff.
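For readers unfamiliar with the twin/diff technique used by the C library, the following minimal Java sketch shows the general idea; it is an illustration of the technique, not J-InterWeave code, and it assumes the twin and the working copy have equal length. The pristine twin made at write-lock acquire time is compared with the working copy at release time, and only the modified runs are encoded.

    import java.util.ArrayList;
    import java.util.List;

    // Minimal illustration of twin/diff write detection over a block body (a sketch, not library code).
    final class TwinDiff {
        static final class Run {                 // one contiguous run of modified bytes
            final int offset;
            final byte[] data;
            Run(int offset, byte[] data) { this.offset = offset; this.data = data; }
        }

        static List<Run> diff(byte[] twin, byte[] working) {
            List<Run> runs = new ArrayList<>();
            int i = 0;
            while (i < working.length) {
                if (twin[i] == working[i]) { i++; continue; }
                int start = i;
                while (i < working.length && twin[i] != working[i]) i++;
                byte[] changed = new byte[i - start];
                System.arraycopy(working, start, changed, 0, changed.length);
                runs.add(new Run(start, changed));   // only these runs need to be shipped to the server
            }
            return runs;
        }
    }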
J-InterWeave could use this same technique, but only on machines that implement virtual memory. To enable our code to run on handheld and embedded devices, we pursue an alternative approach, in which we instrument the interpretation of store bytecodes in the JVM and JIT.

In our implementation, only writes to InterWeave block objects need be monitored. In each Kaffe header, there is a pointer to the object method dispatch table. On most architectures, pointers are aligned on a word boundary so that the least significant bit is always zero. Thus, we use this bit as the flag for InterWeave objects. We also place two 32-bit words just before the Kaffe object header, as shown in Figure 4. The second word, the modification status, records which parts of the object have been modified. A block's body is logically divided into 32 parts, each of which corresponds to one bit in the modification status word. The first extended word is pre-computed when initializing an object. It is the shift value used by the instrumented store bytecode to quickly determine which bit in the modification status word to set (in other words, the granularity of the write detection). These two words are only needed for InterWeave blocks, and cause no extra overhead for normal Kaffe objects.

4.2.2 Garbage Collection

Like distributed file systems and databases (and unlike systems such as PerDiS [13]), InterWeave requires manual deletion of data; there is no garbage collection. Moreover, the semantics of InterWeave segments ensure that an object reference (pointer) in an InterWeave object (block) can never point to a non-InterWeave object. As a result, InterWeave objects should never prevent the collection of unreachable Java objects. To prevent Kaffe from accidentally collecting InterWeave memory, we modify the garbage collector to traverse only the Kaffe heap.

4.3 InterWeave C library

The InterWeave C library needs little in the way of changes to be used by J-InterWeave. When an existing segment is mapped into local memory and its blocks are translated from wire format to local format, the library must call functions in the IWSegment native library to initialize the Kaffe object header for each block. When generating a description of modified data in the write lock release operation, the library must inspect the modification bits in Kaffe headers, rather than creating diffs from the pristine and working copies of the segment's pages.
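The modification bits just mentioned can be pictured with the following hedged Java sketch. The real bookkeeping lives in the two words placed before the Kaffe object header (Figure 4) and is manipulated by instrumented store bytecodes, so the field names and the exact rounding of the shift value here are assumptions made only for illustration.

    // Hedged sketch of the per-object write-detection bookkeeping described in Section 4.2.1.
    final class ModificationBits {
        final int shift;        // pre-computed at allocation time from the block size (the first extended word)
        int status;             // one bit per 1/32nd of the block body (the second extended word)

        ModificationBits(int blockBodySize) {
            int s = 0;
            // choose shift so that (blockBodySize - 1) >> shift fits in bit positions 0..31
            while (((blockBodySize - 1) >>> s) >= 32) s++;
            this.shift = s;
        }

        /** What an instrumented store bytecode would do for a write at byte offset 'off'. */
        void recordWrite(int off) {
            status |= 1 << (off >>> shift);
        }

        /** Used at write-lock release: does the i-th 1/32nd of the body need to be encoded and sent? */
        boolean isDirty(int part) {
            return (status & (1 << part)) != 0;
        }
    }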
4.4 Discussion

As Java is supposed to be "Write Once, Run Anywhere", our design choice of implementing InterWeave support at the virtual machine level can raise concerns about the portability of Java InterWeave applications. Our current implementation requires direct JVM support for the following requirements:

1. mapping from InterWeave type descriptors to Java object classes;
2. managing local segments and the translation between InterWeave wire format and local Java objects;
3. supporting efficient write detection for objects in InterWeave segments.

We could use class reflection mechanisms along with pure Java libraries for InterWeave memory management and wire-format translation to meet the first two requirements and implement J-InterWeave entirely in pure Java. Write detection could be solved using bytecode rewriting techniques as reported in BIT [22], but the resulting system would most likely incur significantly higher overheads than our current implementation. We did not do this mainly because we wanted to leverage the existing C version of the code and pursue better performance.

In J-InterWeave, accesses to mapped InterWeave blocks (objects) by different Java threads on a single VM need to be correctly synchronized via Java object monitors and appropriate InterWeave locks. Since J-InterWeave is not an S-DSM system for Java virtual machines, the Java memory model (JMM) [26] poses no particular problems.

5 Performance Evaluation

In this section, we present performance results for the J-InterWeave implementation. All experiments employ a J-InterWeave client running on a 1.7 GHz Pentium 4 Linux machine with 768 MB of RAM. In experiments involving data sharing, the InterWeave segment server runs on a 400 MHz Sun Ultra-5 workstation.

Figure 5: Overhead of write-detect instrumentation in Kaffe's interpreter mode (SPEC JVM98 benchmarks _201_compress through _228_jack; time in seconds)

Figure 6: Overhead of write-detect instrumentation in Kaffe's JIT3 mode (same benchmarks; time in seconds)

5.1 Cost of write detection

We have used SPEC JVM98 [33] to quantify the performance overhead of write detection via bytecode instrumentation. Specifically, we compare the performance of benchmarks from JVM98 (medium configuration) running on top of the unmodified Kaffe system to the performance obtained when all objects are treated as if they resided in an InterWeave segment. The results appear in Figures 5 and 6.

Overall, the performance loss is small. In Kaffe's interpreter mode there is less than 2% performance degradation; in JIT3 mode, the performance loss is about 9.1%. The difference can be explained by the fact that in interpreter mode, the per-bytecode execution time is already quite high, so extra checking time has much less impact than it does in JIT3 mode.

The Kaffe JIT3 compiler does not incorporate more recent and sophisticated technologies to optimize the generated code, such as those employed in IBM Jalapeño [35] and Jackal [38] to eliminate redundant object reference and array boundary checks. By applying similar techniques in J-InterWeave to eliminate redundant instrumentation, we believe that the overhead could be further reduced.

5.2 Translation cost

As described in Section 3, a J-InterWeave application must acquire a lock on a segment before reading or writing it. The acquire operation will, if necessary, obtain a new version of the segment from the InterWeave server, and translate it from wire format into local Kaffe object format. Similarly, after modifying an InterWeave segment, a J-InterWeave application must invoke a write lock release operation, which translates modified portions of objects into wire format and sends the changes back to the server.

From a high-level point of view this translation resembles object serialization, widely used to create persistent copies of objects, and to exchange objects between Java applications on heterogeneous machines. In this subsection, we compare the performance of J-InterWeave's translation mechanism to that of object serialization in Sun's JDK v.1.3.1.
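As a concrete point of reference, the JDK-side baseline in this comparison can be approximated by a program like the following hedged sketch; it is our reconstruction of the kind of measurement used, not the authors' code, and the array size and timing methodology are arbitrary.

    import java.io.*;

    // Reconstruction of a JDK-serialization baseline for translating a double array.
    public class SerializationBaseline {
        public static void main(String[] args) throws IOException, ClassNotFoundException {
            double[] data = new double[250_000];

            long t0 = System.currentTimeMillis();
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(data);                         // serialize ("translate to wire format")
            }
            byte[] wire = bos.toByteArray();
            long t1 = System.currentTimeMillis();

            double[] back;
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(wire))) {
                back = (double[]) ois.readObject();            // deserialize ("translate back")
            }
            long t2 = System.currentTimeMillis();

            System.out.println("serialize:   " + (t1 - t0) + " ms, " + wire.length + " bytes");
            System.out.println("deserialize: " + (t2 - t1) + " ms, " + back.length + " elements");
        }
    }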
We compare against the Sun implementation because it is significantly faster than Kaffe v.1.0.6, and because Kaffe was unable to successfully serialize large arrays in our experiments.

We first compare the cost of translating a large array of primitive double variables in both systems. Under the Sun JDK we create a Java program to serialize double arrays into byte arrays and to de-serialize the byte arrays back again. We measure the time for the serialization and deserialization. Under J-InterWeave we create a program that allocates double arrays of the same size, releases (unmaps) the segment, and exits. We measure the release time and subtract the time spent on communication with the server. We then run a program that acquires (maps) the segment, and measure the time to translate the byte arrays back into doubles in Kaffe. Results are shown in Figure 7, for arrays ranging in size from 25,000 to 250,000 elements. Overall, J-InterWeave is about twenty-three times faster than JDK 1.3.1 in serialization, and eight times faster in deserialization.

Figure 7: Comparison of double array translation between Sun JDK 1.3.1 and J-InterWeave (array size in elements vs. time in msec.)

5.3 Bandwidth reduction

To evaluate the impact of InterWeave's diff-based wire format, which transmits an encoding of only those bytes that have changed since the previous communication, we modify the previous experiment to modify between 10% and 100% of a 200,000-element double array. Results appear in Figures 8 and 9. The former indicates translation time, the latter bytes transmitted.

Figure 8: Time needed to translate a partly modified double array (percentage of changes vs. time in msec.)

Figure 9: Bandwidth needed to transmit a partly modified double array (percentage of changes vs. transmission size in MB)

It is clear from the graphs that as we reduce the percentage of the array that is modified, both the translation time and the required communication bandwidth go down by linear amounts. By comparison, object serialization is oblivious to the fraction of the data that has changed.

5.4 J-InterWeave Applications

In this section, we describe the Astroflow application, developed by colleagues in the department of Physics and Astronomy, and modified by our group to take advantage of InterWeave's ability to share data across heterogeneous platforms. Other applications completed or currently in development include interactive and incremental data mining, a distributed calendar system, and a multi-player game. Due to space limitations, we do not present these here.

The Astroflow [11][14] application is a visualization tool for a hydrodynamics simulation actively used in the astrophysics domain. It is written in Java, but employs data from a series of binary files that are generated separately by a computational fluid dynamics simulation system. The simulator, in our case, is written in C, and runs on a cluster of 4 AlphaServer 4100 5/600 nodes under the Cashmere [34] S-DSM system. (Cashmere is a two-level system, exploiting hardware shared memory within SMP nodes and software shared memory among nodes. InterWeave provides a third level of sharing, based on distributed versioned segments. We elaborate on this three-level structure in previous papers [8].)

Figure 10: Simulator performance using InterWeave instead of file I/O (number of CPUs vs. time in seconds)
J-InterWeave makes it easy to connect the Astroflow visualization front end directly to the simulator, to create an interactive system for visualization and steering. The architecture of the system is illustrated in Figure 1 (page 1). Astroflow and the simulator share a segment with one header block specifying general configuration parameters and six arrays of doubles. The changes required to the two existing programs are small and limited. We wrote an XDR specification to describe the data structures we are sharing and replaced the original file operations with shared segment operations. No special care is required to support multiple visualization clients or to control the frequency of updates. While the simulation data
Introduction to the Xinjiang Petroleum Survey and Design Institute (excerpt)

Featured technologies

1. Surface engineering technology for oil and gas field development and construction in deserts
The institute has developed a set of surface supporting technologies, the "desert mode," suited to the rolling development and production of desert oilfields. The gathering and transportation processes adopted include automatic well selection by multi-port valves, split-type metering, constant-flow water distribution, skid-mounted heating, back-pressure control for mixed oil and gas transportation over highly undulating terrain, and comprehensive application of non-metallic composite pipes. Oilfield automation control, data monitoring, acquisition, and processing combine SCADA and DCS technologies, allowing oil wells and metering stations to run unattended and a million-ton oilfield to be managed by a staff of about one hundred.

2. Surface engineering design technology for the development of heavy oil, extra/super-heavy oil, and oil sands
The institute has formed a mature, practical set of supporting technologies, the "heavy-oil mode," for the surface gathering, treatment, and steam injection of heavy and extra-heavy oil. It includes centralized or short-radius distributed heat supply, multi-port valve well selection for oil gathering and steam distribution, single-pipe injection and production gathering, tank-on-tank weighing-type oil well metering, steam-blending heated dehydration treatment, heavy-oil wastewater treatment and reuse, and hydrocyclone desanding.
Supporting Brocade 5600 vRouter, VNF Platform, and DistributedServices PlatformSOFTWARE LICENSING GUIDE53-1004757-01© 2016, Brocade Communications Systems, Inc. All Rights Reserved.Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at /en/legal/ brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. T o find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit /support/oscd.Contents Preface (5)Document conventions (5)Notes, cautions, and warnings (5)T ext formatting conventions (5)Command syntax conventions (6)Brocade resources (6)Document feedback (6)Contacting Brocade T echnical Support (7)Brocade customers (7)Brocade OEM customers (7)About This Guide (9)Licensing (11)Licensing overview (11)Obtaining a license (11)License Management Tasks (13)License management overview (13)Adding an evaluation license (13)Deleting an evaluation license (14)Configuring the license expiration-warning period (14)Viewing license information (15)Evaluation License Commands (17)add trial online (17)delete trial online (19)license expiration-warning period (20)show license installed (21)Syslog Messages (23)Preface•Document conventions (5)•Brocade resources (6)•Document feedback (6)•Contacting Brocade T echnical Support (7)Document conventionsThe document conventions describe text formatting conventions, command syntax conventions, and important notice formats used in Brocade technical documentation.Notes, cautions, and warningsNotes, cautions, and warning statements may be used in this document. 
They are listed in the order of increasing severity of potential hazards.NOTEA Note provides a tip, guidance, or advice, emphasizes important information, or provides a reference to related information.ATTENTIONAn Attention statement indicates a stronger note, for example, to alert you when traffic might be interrupted or the device might reboot.CAUTIONA Caution statement alerts you to situations that can be potentially hazardous to you or cause damage to hardware,firmware, software, or data.DANGERA Danger statement indicates conditions or situations that can be potentially lethal or extremely hazardous to you. Safetylabels are also attached directly to products to warn of these conditions or situations.Text formatting conventionsT ext formatting conventions such as boldface, italic, or Courier font may be used to highlight specific words or phrases.Format Descriptionbold text Identifies command names.Identifies keywords and operands.Identifies the names of GUI elements.Identifies text to enter in the GUI.italic text Identifies emphasis.Identifies variables.Identifies document titles.Courier font Identifies CLI output.Identifies command syntax examples.Brocade resourcesCommand syntax conventionsBold and italic text identify command syntax components. Delimiters and operators define groupings of parameters and their logical relationships.Convention Descriptionbold text Identifies command names, keywords, and command options.italic text Identifies a variable.value In Fibre Channel products, a fixed value provided as input to a command option is printed in plain text, forexample, --show WWN.[ ]Syntax components displayed within square brackets are optional.Default responses to system prompts are enclosed in square brackets.{ x | y | z } A choice of required parameters is enclosed in curly brackets separated by vertical bars. You must selectone of the options.In Fibre Channel products, square brackets may be used instead for this purpose.x | y A vertical bar separates mutually exclusive elements.< >Nonprinting characters, for example, passwords, are enclosed in angle brackets....Repeat the previous element, for example, member[member...].\Indicates a “soft” line break in command examples. If a backslash separates two lines of a commandinput, enter the entire command at the prompt without the backslash.Brocade resourcesVisit the Brocade website to locate related documentation for your product and additional Brocade resources.White papers, data sheets, and the most recent versions of Brocade software and hardware manuals are available at . Product documentation for all supported releases is available to registered users at MyBrocade.Click the Support tab and select Document Library to access documentation on MyBrocade or You can locate documentation by product or by operating system.Release notes are bundled with software downloads on MyBrocade. Links to software downloads are available on the MyBrocade landing page and in the Document Library.Document feedbackQuality is our first concern at Brocade, and we have made every effort to ensure the accuracy and completeness of this document. However, if you find an error or an omission, or you think that a topic needs further development, we want to hear from you. 
You can provide feedback in two ways:•Through the online feedback form in the HTML documents posted on •By sending your feedback to *************************Provide the publication title, part number, and as much detail as possible, including the topic heading and page number if applicable, as well as your suggestions for improvement.Contacting Brocade T echnical Support Contacting Brocade Technical SupportAs a Brocade customer, you can contact Brocade T echnical Support 24x7 online, by telephone, or by e-mail. Brocade OEM customers should contact their OEM/solution provider.Brocade customersFor product support information and the latest information on contacting the T echnical Assistance Center, go to and select Support.If you have purchased Brocade product support directly from Brocade, use one of the following methods to contact the BrocadeT echnical Assistance Center 24x7.Brocade OEM customersIf you have purchased Brocade product support from a Brocade OEM/solution provider, contact your OEM/solution provider for all of your product support needs.•OEM/solution providers are trained and certified by Brocade to support Brocade® products.•Brocade provides backline support for issues that cannot be resolved by the OEM/solution provider.•Brocade Supplemental Support augments your existing OEM support contract, providing direct access to Brocade expertise.For more information, contact Brocade or your OEM.•For questions regarding service levels and response times, contact your OEM/solution provider.About This GuideThis guide describes license and entitlement management for Brocade products that run on the Brocade Vyatta Network OS (referred to as a virtual router, vRouter, or router in the guide).Licensing•Licensing overview (11)•Obtaining a license (11)Licensing overviewAn evaluation license is available for the Brocade 5600 vRouter.The evaluation license is a nonproduction license that enables the maximum performance of the Brocade vRouter for a trial period of 60 days from the day of download.The following table describes license terminology for the Brocade vRouter.TABLE 1 License terminologyObtaining a licenseT o order and obtain a 60-day evaluation license, click FREE TRIAL at /en/products-services/software-networking/network-functions-virtualization/5600-vrouter.html.NOTEFor important installation information depending on your deployment, see the Brocade Vyatta Network OS vRouterDeployment Options Configuration Guide document. For instructions on verifying the connectivity needed to install yourlicense, see the "Verifying connectivity" section in the installation guide for your environment.Obtaining a licenseT o install a license on your Brocade vRouter, complete the following steps.1.NOTEYou should receive an entitlement e-mail from Brocade within one hour after placing your order. Sometimesentitlement e-mails are routed to junk or spam e-mail folders. 
If you do not receive your entitlement e-mail within onehour, check your junk and spam e-mail folders to determine if it was routed there.Download the Brocade vRouter software and read your entitlement e-mail.After installing the Brocade vRouter software and before adding an evaluation license, the Brocade vRouter displays a license warning message on bootup.Welcome to vRouterVersion: 4.0.0.R1Description: Brocade vRouter 5600 4.0.0 R1Copyright: 2015 Brocade Communications Systems, Inc.Last login: Wed Oct 21 17:22:59 2015 from 10.72.16.14WARNING: A valid vRouter 5600 license was not detected on this device.The license may not be configured or may be expired.vRouter 5600 features have been disabled.Please install a valid license within 24 hours of vRouter creation.2.Refer to the Order Details section of your entitlement e-mail for details about the activation code for your evaluation license.3.NOTEFor the automatic generation of a license key, the Brocade vRouter must be online and have public connectivity tocommunicate with the Brocade licensing portal.NOTEYou must enter your activation code to retrieve your license key after you install the Brocade vRouter software. Theactivation code can be used at any time during your evaluation period. However, the trial software runs for only 24hours after download without the activation code being entered into the system.Install and enter your activation code. For detailed information on how to add a license to your Brocade vRouter and verify that it is installed correctly, refer to Adding an evaluation license on page 13.Your 60-day evaluation license includes free support. If you have questions or need help with your evaluation license for the Brocade vRouter, contact *******************.License Management Tasks•License management overview (13)•Adding an evaluation license (13)•Deleting an evaluation license (14)•Configuring the license expiration-warning period (14)•Viewing license information (15)License management overviewA Brocade vRouter evaluation license is managed through an internet connection to a licensing portal and by using the command line interface (CLI) on the vRouter.NOTEA license must be added within the 24-hour period after installing the Brocade vRouter. When the license is not added withinthe 24-hour period, the Brocade vRouter ceases to function. The Brocade vRouter must be re-installed and the license added within the 24-hour period after re-installing the Brocade vRouter. You can add your original evaluation license to the Brocade vRouter after re-installation; you do not need to obtain a new evaluation license.Adding an evaluation licenseFor details about your Brocade vRouter license key (activation code), refer to the entitlement certificate e-mail that you receive after your Brocade vRouter license order is confirmed.For a Brocade vRouter that is connected to the internet, it is assumed that the following tasks have been completed:•Routes for internet access are set up. For further information, refer to Brocade Vyatta Network OS Basic Routing Configuration Guide.•DNS is set up. For further information, refer to Brocade Vyatta Network OS Basic System Configuration Guide and Brocade Vyatta Network OS Services Configuration Guide•Interfaces are set up. For further information, refer to Brocade Vyatta Network OS LAN Interfaces Configuration Guide.•The system host name is configured. 
For further information, refer to Brocade Vyatta Network OS Basic System Configuration Guide.
To add an evaluation license to a Brocade vRouter that is connected to the licensing portal, perform the following steps.
1. Log on to the Brocade vRouter in operational mode.
2. Add a license to the Brocade vRouter by specifying the license key. The following example shows how to add an evaluation license with a license key of ABCD-EFGH-IJKL-1234.
vyatta@vyatta:~$ add trial online ABCD-EFGH-IJKL-1234
License request successful.
[ ok ] Restarting vyatta-routing (via systemctl): vyatta-routing.service.
3. Enter the show license installed command to confirm that the license is added to the system. The following example shows that an evaluation license with a license key of ABCD-EFGH-IJKL-1234 is installed on the system.
vyatta@vyatta:~$ show license installed
Device ID: 331cb-f91ec-b2440-7fdc0-03461-4a282-089b1-d7884
Auto-Update: 1 days
Expiration Warning: 30 days
License: 60 DAY EVALUATION LICENSE SW
Activation ID: ABCD-EFGH-IJKL-1234
Feature: Evaluation
Issuer: MANUFACTURER 001
Start Date: 20-Oct-2015
Expiration: 21-Dec-2015
Deleting an evaluation license
To delete an evaluation license from the Brocade vRouter that is connected to the licensing portal, perform the following steps.
1. Log on to the Brocade vRouter in operational mode.
2. Delete the license from the system by specifying the license key. The following example shows how to delete an evaluation license with a license key of ABCD-EFGH-IJKL-1234 from the Brocade vRouter.
vyatta@vyatta:~$ delete trial online ABCD-EFGH-IJKL-1234
3. Enter the show license installed command to confirm that the license is deleted from the system.
Configuring the license expiration-warning period
By default, license-expiration warning messages are displayed during the 30-day period before the license-expiration date on the Brocade vRouter.
To change the license expiration-warning period, perform the following steps.
1. Log on to the Brocade vRouter in configuration mode.
2. Enter the set license expiration-warning period command to configure the license expiration-warning period. The following example shows how to set the warning period to 40 days.
vyatta@vyatta# set license expiration-warning period 40
3. Commit the change.
vyatta@vyatta# commit
The license expiration-warning period is now set to 40 days on the Brocade vRouter.
4. Confirm the license expiration-warning period by issuing the show license expiration-warning period command.
vyatta@vyatta# show license expiration-warning period
license {
    expiration-warning {
        period 40
    }
}
Viewing license information
Use the show license installed command to view installed-license information on the Brocade vRouter.
1. Log on to the Brocade vRouter in operational mode.
2. Enter the show license installed command to view information about licenses that are installed on the system. The following example shows how to view that a valid evaluation license is installed on the system.
vyatta@vyatta:~$ show license installed
Device ID: 331cb-f91ec-b2440-7fdc0-03461-4a282-089b1-d7884
Auto-Update: 1 days
Expiration Warning: 30 days
License: 60 DAY EVALUATION LICENSE SW
Activation ID: A047-B811-858F-DC58
Feature: Evaluation
Issuer: MANUFACTURER 001
Start Date: 20-Oct-2015
Expiration: 21-Dec-2015
For examples of information that is displayed by the show license installed command at different stages of the license life cycle, refer to show license installed on page 21.
Evaluation License Commands
• add trial online (17)
• delete trial online (19)
• license expiration-warning period (20)
• show license installed (21)
add trial online
Adds an evaluation license to an online system.
Syntax
add trial online license-key
Parameters
license-key
License key (activation code) that is used to activate the evaluation license. The format of the license-key is xxxx-xxxx-xxxx-xxxx, where x is an alphabetic or a numeric character. The license-key must include the "-" characters and is not case-sensitive.
Modes
Operational mode
Usage Guidelines
The Brocade vRouter must have an internet connection to the licensing portal. For a connected system, it is assumed that the following prerequisite tasks have been completed:
• Routes for internet access are set up. For further information, refer to Brocade Vyatta Network OS Basic Routing Configuration Guide.
• DNS is set up. For further information, refer to Brocade Vyatta Network OS Services Configuration Guide.
• Interfaces are set up. For further information, refer to Brocade Vyatta Network OS LAN Interfaces Configuration Guide.
• The system host name is configured.
For further information, refer to Brocade Vyatta Network OS Basic System Configuration Guide.Before entering this command, it is also recommended that you:•Set the date and time zone: the license-enforcement process may interpret a change to the date or time zone that is made after a license is added to the system as license tampering and disable system functionality.•Set the host name and domain name: after the host name and domain name are set, it is easy to identify the system on the licensing portal because the fully qualified name, that is, hostname.domain is displayed next to the device ID onthe portal.add trial onlineExamplesThe following example shows how to add an evaluation license (with the license key of ABCD-EFGH-IJKL-1234) to the Brocade vRouter that is connected to the licensing portal.vyatta@vyatta:~$ add trial online ABCD-EFGH-IJKL-1234delete trial online delete trial onlineDeletes an evaluation license from an online system.Syntaxdelete trial online license-keyParameterslicense-keyLicense key (activation code) to be deleted. The format of the license-key is xxxx-xxxx-xxxx-xxxx, where x is analphabetic or a numeric character. The license-key must include the "-" characters and is not case-sensitive.ModesOperational modeUsage GuidelinesThe Brocade vRouter must have an internet connection to the licensing portal. For a connected system, it is assumed that the following prerequisite tasks have been completed:•Routes for internet access are set up. For further information, refer to Brocade Vyatta Network OS Basic Routing Configuration Guide.•DNS is set up. For further information, refer to Brocade Vyatta Network OS Services Configuration Guide•Interfaces are set up. For further information, refer to Brocade Vyatta Network OS LAN Interfaces Configuration Guide.•The system host name is configured. For further information, refer to Brocade Vyatta Network OS Basic System Configuration Guide.ExamplesThe following example shows how to delete the ABCD-EFGH-IJKL-1234 evaluation license from the Brocade vRouter that is connected to the licensing portal.vyatta@vyatta:~$ delete trial online ABCD-EFGH-IJKL-1234license expiration-warning periodlicense expiration-warning periodSpecifies the license expiration-warning period.Syntaxset license expiration-warning period daysdelete license expiration-warning period daysCommand DefaultThe display of messages is enabled.ParametersdaysNumber of days for the license expiration-warning period, that is, the period during which the Brocade vRouterdisplays upcoming license expiration-warning messages. The number of days ranges from 0 through 120. The defaultis 30. Specifying 0 disables the display of upcoming license expiration-warning messages on the system.ModesConfiguration modeConfiguration Statementlicense {expiration-warning {period days}}Usage GuidelinesBy default, the Brocade vRouter displays license expiration-warning messages for the 30-day period before the license-expiration date. Use the set form of this command to configure an alternate license expiration-warning period.Use the delete form of this command to restore the default license expiration-warning period of 30 days.show license installed show license installedDisplays information about installed licenses.Syntaxshow license installedModesOperational modeUsage GuidelinesThis command displays information about licenses that are installed on the Brocade vRouter. 
It also checks and displays, where applicable, license-related warning messages.
Command Output
This command displays the following information.
Examples
For a new system when a license is not yet installed, this command shows the following information.
vyatta@vyatta:~$ show license installed
Device ID: 331cb-f91ec-b2440-7fdc0-03461-4a282-089b1-d7884
Auto-Update: 1 days
Expiration Warning: 30 days
WARNING: A valid vRouter 5600 license was not detected on this device.
The license may not be configured or may be expired.
vRouter 5600 features have been disabled.
Please install a valid license within 24 hours of vRouter creation.
When a valid evaluation license is installed, this command shows the following information.
vyatta@vyatta:~$ show license installed
Device ID: 331cb-f91ec-b2440-7fdc0-03461-4a282-089b1-d7884
Auto-Update: 1 days
Expiration Warning: 30 days
License: 60 DAY EVALUATION LICENSE SW
Activation ID: A047-B811-858F-DC58
Feature: Evaluation
Issuer: MANUFACTURER 001
Start Date: 20-Oct-2015
Expiration: 21-Dec-2015
In the following example, the license expiration-warning period is configured as 30 days. During the 30-day period before the license-expiration date of December 21, 2015 (21-Dec-2015), this command shows the following information.
vyatta@vyatta:~$ show license installed
Device ID: 331cb-f91ec-b2440-7fdc0-03461-4a282-089b1-d7884
Auto-Update: 1 days
Expiration Warning: 30 days
License: 60 DAY EVALUATION LICENSE SW
Activation ID: A047-B811-858F-DC58
Feature: Evaluation
Issuer: MANUFACTURER 001
Start Date: 20-Oct-2015
Expiration: 21-Dec-2015
WARNING: The license for A047-B811-858F-DC58 will expire on 21-Dec-2015.
Syslog Messages
Message: A valid vRouter 5600 license was not detected on this device.
Explanation: Indicates that a valid license is not detected on the system. If a license exists, it may not be configured or it may have expired. For further information about license management, refer to the License Management Tasks section.
Message Level: Warning
Message: The license for ABCD-EFGH-IJKL-1234 will expire on 30-Oct-2015.
Explanation: Indicates that the current date on the system is within the warning period before the license expiration date.
Message Level: Warning
Message: The license for ABCD-EFGH-IJKL-1234 expired on 30-Oct-2015.
Explanation: Indicates that the license has expired but the current date on the system is within the grace period allowed for license renewal. Licensed features on the system continue to function but changes are not permitted.
Message Level: Alert
Xi'an medical device industry: Technical Lead job description (JD template)
Job title: Technical Lead
Keywords: mysql, java, sql, oracle, javascript, css, html, nosql, jenkins
Responsibilities:
• Active participation in various scrum ceremonies and contribution towards identifying technical risks, developing risk mitigation plans, and supporting various teams
• Plans, participates in, and performs the technical work
• Responsible for managing the quality of technical work
• Provides guidance on design, development, integration, etc. to team members on technical aspects relating to the project
• Leads the implementation, automated unit and integration testing, code reviews, and debugging
• Drives resolution with the technical team to overcome technical barriers, leveraging skill and working experience to accelerate solution delivery
• Ensures delivery success by providing guidance, mentorship, and technical expertise to development teams across multiple projects
• Conducts technical proofs of concept and leads the design and development of software solutions to enable progress for development teams
Requirements:
• Bachelor's in engineering or master's in computer science
• 8 to 10 years of experience in various development roles, of which a minimum of 6 years' experience in web/mobile application development using Java/J2EE related technologies
• Experience in architecture/management/development of Java/JEE based web/mobile applications based on a micro-services architecture (using Spring Boot, REST APIs)
• Experience in Java application development frameworks and technologies: Apache Tomcat, Jetty, Guice, Spring, and JSON/XML/Ajax
• Experience using AWS cloud-native services such as EC2, ELB, S3, API Gateway, SNS, SQS, Lambda, DynamoDB, and RDS
• Experience with databases (Postgres/MySQL/Oracle/NoSQL DB), persistence frameworks, and SQL
• Experience with GitHub, Docker, Kubernetes, CI/CD frameworks (Jenkins)
• Experience with containerization and container orchestration with technologies including Docker, Kubernetes, and container registries
• Experience with Agile development practices such as Scrum and related tools such as Atlassian Jira
• Experience with continuous integration and deployment (Jenkins, Gradle, SonarQube)
• Strong in programming disciplines like object-oriented principles, design patterns, data structures, unit testing (TDD using JUnit), and Domain-Driven Design (DDD)
• Extensive knowledge of handling complex data structures and well versed in developing multithreaded applications
• Ability to effectively document artifacts and processes, then explain them to others
• Excellent verbal and written communication skills
• Must have prior work experience in an Agile delivery methodology
• Strong problem-solving and troubleshooting skills
• Must be effective in working both independently and in a team setting
Other Skills
• Experience in designing and architecting large-scale and highly available distributed software is an added advantage.
• Frontend development experience with JavaScript, Ajax, Bootstrap, HTML 5, CSS, Angular JS and an understanding of browser compatibility issues is an added advantage.
for Reliable Connectivity

Executive Summary
Software-defined wide-area networking (SD-WAN) is starting to replace traditional WAN in remote operational technology (OT) sites. While SD-WAN offers connection reliability benefits that support new digital innovations, few SD-WAN solutions offer consolidated networking and security features optimized for harsh environments. Companies looking to provision SD-WAN to remote factories, substations, or oil rigs have had to cobble together separate point products. Asset operators need a simplified approach to contain costs, improve efficiency, and reduce risks. Fortinet FortiGate Rugged Secure SD-WAN delivers just this, combining next-generation firewalls (NGFWs) hardened for harsh environments with integrated solutions for management and analytics. This centralizes and simplifies SD-WAN operations.

Supporting Innovation in Distributed Production Sites
Factories, electrical substations, and oil rigs are adopting digital innovations, such as Software-as-a-Service (SaaS) applications and real-time applications such as voice and video, to increase productivity, improve communications, and foster rapid business growth. However, traditional WAN architectures at many remote locations struggle to support the traffic demands of these new technologies at reasonable costs. This has led to increasing adoption of SD-WAN architectures that utilize more affordable direct internet connections. The SD-WAN market has grown at a CAGR of 110%, from $841 million in 2018 to $1.77 billion in 2019.

But while SD-WAN improves connectivity reliability, it can also increase the organization's risk exposure. According to Gartner survey analysis, "Customers continue to strive for better WAN performance and visibility, but security now tops their priorities when it comes to the challenges with their WAN." In many organizations, the need for SD-WAN security has led network engineering and operations leaders to incorporate many different tools and point products to address individual functions, threat exposures, or compliance requirements. But this approach leads to infrastructure complexity, which increases manageability burdens while creating new defensive gaps at the network edge.

Fortinet Simplifies and Secures SD-WAN Deployments
Consolidation of the networking and security tools required for a security-driven SD-Branch
Fortinet's Secure SD-WAN Orchestrator is part of its Fabric Management Center. This allows customers to significantly simplify centralized deployment, enable automation to save time, and offer business-centric policies. Fortinet management tools can support much larger deployments than competing solutions (up to 100,000 FortiGate devices). Features such as SD-WAN and NGFW templating, enterprise-grade configuration management, and role-based access controls help network engineering and operations leaders easily mitigate human errors.

SD-WAN reporting and analytics
Enhanced analytics for WAN link availability, performance service-level agreement (SLA) and application traffic in runtime, and historical stats allow the infrastructure team to troubleshoot and quickly resolve network issues. Fabric Management Center offers advanced telemetry for application visibility and network performance to achieve faster resolution and reduce the number of IT support tickets.
On-demand SD-WAN reports provide further insight into the threat landscape, trust level, and asset access, which are mandated for compliance purposes. These features include SD-WAN bandwidth monitoring reports and datasets; SLA logging and history monitoring via datasets, charts, and reports, plus customizable SLA alerting; and application usage reports and dashboards. It also provides adaptive response handlers for SD-WAN events as well as event logging and archiving around SLAs across applications and interfaces.

infrastructure and eliminating the need for many manual processes. Fabric Management Center includes customizable regulatory templates and reports for standards such as Security Activity Report (SAR) and Center for Internet Security (CIS).

To be effective, security must become seamlessly integrated across every part of the distributed organization, every remote office location. Network engineering and operations leaders need full visibility of the entire attack surface from a single location. Then, they need automated responses to reduce the window of time from detection to remediation and to alleviate

Fortinet Realizes Security-driven SD-WAN
While there are many use cases for security-driven SD-WAN, Fortinet's approach enables this in the most effective way for all types of SD-WAN projects. Simplifying SD-WAN operations is core to making its implementation and expansion successful.
International Conference on Network, Communication, Computer Engineering (NCCE 2018)

Mail Scheme Log Processing Based on ELK

Bu Yun a)
School of Computer Science & Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
a) Corresponding author: ****************

Abstract. With the continuous development of Internet technology, how to process and analyze large amounts of data has become a hot topic. The mail system generates a large number of logs every day, and traditional technology is not efficient at handling huge log data and cannot make use of the information in the logs. We propose an information processing architecture based on ELK for mail logs to solve these problems. It extracts information from logs by regular expressions, defines the concept of mail events, models the data, and stores it in a graph database. The graph database stores the data in its original graph structure, which, when dealing with a large number of network relationships, avoids the cost of the joins required in a traditional relational database. Experiments show that the scheme can achieve real-time processing and modeled storage of large volumes of log data and meet the needs of the mail system.

Keywords: ELK; mail system; hot spot; original graph; Internet technology.

INTRODUCTION

With the advent of the information age, e-mail has become an indispensable means of communication because of its convenience, speed, and cheapness. Users send emails frequently, and the mail server generates a large number of logs. These logs contain a lot of valuable information: they record people's communication networks, communication habits, and even living habits. Mail is the medium for transmitting information.

Effective analysis and processing of the mail log is an important task for the operation and management of the mail system. The mail server generates a large amount of data every day. Most mail logs (such as smtpd and pop3 logs) are not only large in volume but also hard to read. When a mail anomaly is discovered, or the delivery status of a message must be checked, an administrator who relies on manually viewing the log records needs a minute or two for each message queried; once the demand grows even slightly, the workload becomes very heavy, and the operation is inefficient and error-prone.

The mail communication network is a complex flow network, similar to a dynamic social network graph, without a fixed main structure; everything is continuously developed and updated over time [1]. In order to search and mine these data quickly and in real time, this article provides a solution for processing mail logs, aiming to extract fragmented mail information for modeling, making it easier to use the information in the logs for data analysis and research.

RELATED WORK

The Status of Email Log Research

In recent years, the processing and analysis of email logs has been one of the hot topics for researchers. To mine user behavior patterns from email interaction data, Li Quangang et al. [1] used the public Enron data to extract the structural and functional features of the mail network and applied non-negative matrix factorization to compute the basic behavioral units of the network, representing user behavior patterns as vectors [1]. Yang Zhen et al. [2] used an improved EM algorithm to determine mail labels in the Enron mail network.
According to the interaction strength between users, they designed a collaborative filtering mechanism to filter spam [2]. Hu Tiantian et al. [3] used JavaMail to parse the data, built a mail network, computed a weighted centrality from each node's degree centrality, closeness centrality, and betweenness centrality, and used a modularity measure to mine the core community [3]. Chen Bin et al. [4] used mail transfer protocol session logs to analyze host behavior based on the failed messages in the log records and used an incremental passive-aggressive learning algorithm to effectively adjust the classification of detected spam hosts according to their recent mail behavior [4].

The massive volume of data generated by the mail server often cannot be processed by a single node using traditional technology. Distributed software processing frameworks provide a feasible way to cope with this information wave. Zhang Jianzhong et al. [5] used ElasticSearch distributed indexing technology to index and retrieve resources in a distributed manner and used the HDFS distributed file system to implement a university library resource retrieval system [5]. Bai Jun et al. [6] proposed a software integration scheme for real-time search of large log data based on ElasticSearch; their experimental results show that the search response time is not affected as the number of logs increases, indicating the feasibility of the scheme [6].

Framework Introduction

As the volume of data to be processed increases, the storage capacity, computing capacity, and processing efficiency of a single node cannot meet the requirements of application scenarios, and traditional methods based on relational database management systems cannot handle analysis problems efficiently.

ELK is a data processing tool chain consisting mainly of three open source software packages, Elasticsearch, Logstash, and Kibana. It implements distributed and scalable data storage and search, offers a zero-configuration, easy-to-use full-text search mode, and supports distributed processing and system extension.

Elasticsearch, as an open source distributed search and data processing platform, is not only a database but also an open source, distributed, RESTful information retrieval framework built on Lucene. It enables real-time search and efficient retrieval, adopts the JSON data format, provides Aggregations-based statistics through a Ruby-DSL-style query design, and is easy to deploy and set up. A cluster can easily be extended to hundreds of servers to handle structured or unstructured data at the PB level, yet it can also run on a single PC [7].

Logstash can collect, analyze, and convert related network logs, store them for later use, store them in Elasticsearch, or convert and forward them to other destinations. Logstash itself does not generate logs; it is only a pipeline that accepts a wide variety of log inputs, which are processed and forwarded to multiple different destinations [8].

Kibana helps aggregate, analyze, and search important data logs and provides a friendly visual interface.

As one of the emerging NoSQL databases, Neo4j is currently the most popular graph database. It stores data in the form of nodes, edges, attributes, and graphs. It provides transaction operations similar to those of traditional databases for highly connected data, while performing several orders of magnitude better than traditional databases on such data. For a meshed data structure, it turns out to be an ideal choice for dealing with complex data.
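As an illustration of how a parsed mail event could be written to Neo4j from Java, the following hedged sketch uses the official Neo4j Java (Bolt) driver. The connection URI, credentials, node labels, and property names are illustrative assumptions (not necessarily those used in this paper), and the exact package names vary between driver versions (older 1.x releases used org.neo4j.driver.v1).

    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Session;
    import static org.neo4j.driver.Values.parameters;

    // Hedged sketch: store one mail event as sender -> mail -> receiver in Neo4j.
    public class MailEventStore {
        public static void main(String[] args) {
            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                                                      AuthTokens.basic("neo4j", "password"));
                 Session session = driver.session()) {
                session.run(
                    "MERGE (s:User {address: $sender}) " +
                    "MERGE (r:User {address: $receiver}) " +
                    "MERGE (m:Mail {id: $id}) " +
                    "MERGE (s)-[:SENT {time: $sendTime, ip: $sendIp}]->(m) " +
                    "MERGE (m)-[:DELIVERED_TO {time: $recvTime, ip: $recvIp, status: $status}]->(r)",
                    parameters("sender", "alice@example.edu", "receiver", "bob@example.edu",
                               "id", "8504634015B", "sendTime", "Jul 28 20:58:39",
                               "recvTime", "Jul 28 20:58:41", "sendIp", "14.17.44.30",
                               "recvIp", "10.0.0.5", "status", "sent"));
            }
        }
    }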
For a meshed data structure, it is therefore an ideal choice for dealing with complex data.

APPLICATION IMPLEMENTATION

Preprocessing of Data

An e-mail system is mainly composed of three parts: the user agent, the mail server, and the mail protocols (the sending protocol SMTP and the mail reading protocols). Operations such as logging in to an account, logging out, sending mail, receiving mail, and deleting mail are all logged. Taking a campus mail system as an example, the access logs alone amount to about 760,000 lines per day, and the mail transfer logs reach several million lines.

Data in the real world are incomplete and inconsistent, and most of them are unstructured or semi-structured and cannot be used directly. In the experiment, incomplete log records were filtered out, and the daily generated logs were imported into Elasticsearch in JSON format. Visualized in Kibana, the log format used in this experiment is shown below.

FIGURE 1. Visualized logs in Kibana

This article selects the message log. A message record contains the time of the operation, the name of the mail server, the action record, the ID on the current server, the email address, and the operation status. By parsing the information in the message, we can understand the dynamic behavior of the mail on the current server.

The Definition and Structure of Mail Events

In order to organize the data in the log effectively, the event definition used for the mail log is given below.

Definition 1: A complete mail event is the process of one mail being sent and received in the network. It has the following properties: (1) the unique identifier of the mail; (2) the sending and receiving mailboxes; (3) the sending and receiving times; (4) the sending and receiving IPs; (5) the mail delivery status.

The sending relationships of mail naturally form a network graph. We define nodes to represent users and mails, and edges to represent a user's sending behavior toward a mail.

FIGURE 2. Mail event model (sender, receiver, send time, receive time, receive IP)

Algorithm Description

According to the definition, restoring mail events from a large volume of logs is logically divided into two steps: first, identify the unique ID that each message is assigned when it first reaches a server; second, restore each event using the IDs obtained in the first step.

1. Use a regular expression to extract the initial ID from logs of the following form:
Jul 28 20:58:39 mx postfix/smtpd[10206]: 8504634015B: client=[14.17.44.30] AVYxleipJ4h_jrOyBL_D
This yields the initial ID set Q = {ids1, ids2, ..., idsn}.
2. For j = 1, 2, ..., n, take idsj ∈ Q and do:
3. S := Search(idsj), where S is the set of all logs containing idsj, S = {p1, p2, ...}.
4. Traverse every log in S and use regular expressions to extract the sending mailbox, the receiving mailbox, the sending time, the receiving time, the unique mail ID, the sending IP, and the receiving IP.
5. Check the log for "status" and "queued as". (1) If "status" is present, extract the mail delivery status and restore the event. (2) If "status" is not present, extract the ID following the words "queued as" and repeat from step 3.
6. Take the next ID from Q and repeat from step 3.

Experimental Results

The recovered mail event document format and the result of importing it into the graph database Neo4j are visualized as follows:

FIGURE 3. Mail event in TXT and Neo4j

CONCLUSION

Traditional mail log processing methods cannot meet the needs of large-scale enterprises, colleges, and universities for their mail systems.
This paper proposes a data processing scheme based on the ELK software framework that can handle a large number of mail logs in real time. By introducing the concept of mail events, it extracts the log information from the mail server and establishes a suitable model to achieve efficient querying. Visualizing the dynamic mail network and user behavior has significant practical value for detecting spam and mining user behavior patterns.

REFERENCES

1. Li Jingang, Shi Jinqiao, Qin Zhiguang, Liu Hallwen. User Behavior Pattern Mining for Email Network Event Monitoring [J]. Chinese Journal of Computers, 2014, 37(5): 1135-1146.
2. Yang Zhen, Lai Yingxu, Duan Lijuan, Li Yujian, Xu Wei. Research on Collaborative Filtering Mechanism of Mail Networks [J]. Acta Automatica Sinica, 2012, 38(3): 399-411.
3. Hu Tiantian, Dai Hang, Huang Dongxu. CN-M Based Email Network Core Community Mining [J]. Computer Technology and Development, 2014, 24(11): 9-12.
4. Ian Robinson et al. Graph Databases [M]. Translated by Liu Wei et al. Beijing: People's Posts and Telecommunications Press, 2016.
5. Zhang Jianzhong, Huang Yanfei, Xiong Yongjun. Digital Library Retrieval System Based on ElasticSearch [J]. Computer and Modernization, 2015, 6: 69-73.
6. Bai Jun, Guo Hebin. Research on software integration scheme for real-time search of big logs based on ElasticSearch [J]. Jilin Normal University (Natural Science Edition), 2014, 1: 85-87.
7. Chen Bin, Dong Yizhou, Mao Mingrong. Incremental learning algorithm-based campus network spam detection model [J]. Journal of Computer Applications, 2017, 37(1): 206-216.
8. Gao Kai. Big Data Search and Log Mining and Visualization Scheme [M]. Beijing: Tsinghua University Press, 2016.
9. (U.S.) Ian Robinson et al. Graph Databases [M]. Translated by Liu Yi et al. Beijing: People's Posts and Telecommunications Press, 2016.
10. Chen Bin, Dong Yizhou, Mao Mingrong. Incremental learning algorithm-based campus network spam detection model [J]. Journal of Computer Applications, 2017, 37(1): 206-216.
Supporting Software Distributed Shared Memory with an Optimizing Compiler

Tatsushi Inagaki  Junpei Niwa  Takashi Matsumoto  Kei Hiraki
Department of Information Science, Faculty of Science, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan
{inagaki, niwa, tm, hiraki}@is.s.u-tokyo.ac.jp
(Tatsushi Inagaki is presently with Tokyo Research Laboratory, IBM Japan, Ltd.)

Abstract

To execute a shared memory program efficiently, we have to manage memory consistency with low overheads, and we have to utilize the communication bandwidth of the platform as much as possible. A software distributed shared memory (DSM) can solve these problems via proper support by an optimizing compiler. The optimizing compiler can detect shared write operations using interprocedural points-to analysis. It also coalesces shared write commitments onto contiguous regions and removes redundant write commitments, using interprocedural redundancy elimination. A page-based target software DSM system can utilize communication bandwidth, owing to the coalescing optimization. We have implemented the above optimizing compiler and a runtime software DSM on the AP1000+. We have obtained a high speed-up ratio with the SPLASH-2 benchmark suite. The result shows that using an optimizing compiler to assist a software DSM is a promising approach to obtaining good performance. It also shows that appropriate protocol selection at a write commitment is an effective optimization.

1. Introduction

Applications using software distributed shared memory (DSM) can run without the troubles of unnecessary memory copies and address translation which happen with the inspector/executor mechanism [22]. Most existing software DSM systems are designed on the assumption of using sequential compilers [23, 20, 19]. An executable object made by a sequential compiler only issues a shared memory access as an ordinary memory access (load/store). To utilize bandwidth, a runtime system has to buffer the remote memory accesses. There is another approach in which a programmer can specify the optimal granularity, protocol, and association between synchronization and shared data [3, 30]. However, with this approach, existing shared memory applications require rewriting.

Our idea is that an optimizing compiler directly analyzes shared memory source programs and optimizes communication and consistency management for software DSM execution [28]. Our target is a page-based software DSM, asymmetric distributed shared memory (ADSM) [26, 25]. ADSM uses a virtual memory mechanism for shared reads, and uses explicit user-level consistency management code sequences for shared writes. This enables static optimization of shared write operations. Static optimizing information about them can reduce the overhead of the runtime system.
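As a rough illustration of this asymmetric treatment of reads and writes, the sketch below shows the kind of code the compiler might emit around shared accesses. The entry points adsm_check_read and adsm_write_commit and the WC macro are hypothetical names invented for this sketch; the paper's actual interface and runtime behavior are described in Sections 2 and 3.

/* Minimal sketch of ADSM-style instrumentation (hypothetical API names).
 * Shared reads are validated (by the virtual memory system or, as in
 * Section 3, by an explicit valid-bit check); a group of shared writes
 * is followed by a write commitment reporting the written region. */
#include <stddef.h>
#include <stdio.h>

static void adsm_check_read(const void *addr)            /* fetch copy if invalid */
{ (void)addr; }
static void adsm_write_commit(void *addr, size_t size)   /* report written region */
{ printf("write commitment: %p, %zu bytes\n", addr, size); }

#define WC(addr, nelem) adsm_write_commit((addr), (nelem) * sizeof(*(addr)))

static void scale_row(double *a, const double *b, double alpha, int n)
{
    for (int i = 0; i < n; i++) {
        adsm_check_read(&b[i]);   /* shared read: ensure a valid copy           */
        a[i] += alpha * b[i];     /* shared write: plain store, no per-access   */
    }                             /* consistency action                         */
    WC(a, n);                     /* one coalesced commitment for a[0..n-1]     */
}

int main(void)
{
    double a[8] = {0}, b[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    scale_row(a, b, 2.0, 8);
    return 0;
}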
Shasta [29] is another software DSM system that assumes optimizing compiler support. Since the Shasta compiler analyzes objects generated by sequential compilers, it only performs limited local optimizations. Our compiler analyzes a source program directly; therefore, it performs array data flow analysis interprocedurally.

Here we have to solve the following three problems in order to show that our approach is effective. First, the compiler must perform sufficient optimization in reasonable compilation time. We have applied interprocedural points-to analysis [14, 31] and implemented interprocedural write set calculation to detect and optimize shared write operations. We have found that this powerful analysis is done in reasonable time. Second, the runtime system must also work efficiently. We had been using a history-based runtime system of lazy release consistency [28], but when the compiler cannot optimize, that system introduces a large runtime overhead and causes synchronization costs to grow. Therefore, we have implemented a new page-based runtime system with the delayed invalidate release consistency (DIRC) model [12] to overcome these problems. We have confirmed that the new system is more efficient than the history-based runtime system. Third, we have to provide an interface through which users can give information that the compiler cannot extract statically. Memory access patterns of irregular applications depend on input parameters, and it is difficult for a compiler to optimize copy management protocols statically. We have examined the effect of manual protocol selection on the bottleneck shared write operations of the programs.

We have evaluated the performance with the SPLASH-2 benchmark suite [32]. SPLASH-2 is not only the most frequently used benchmark for evaluating shared memory systems, but also a benchmark suite with detailed algorithmic information about each program. We have manually optimized shared write protocols using these descriptions.
We do not consider SPLASH-2 as a "dusty deck". Our target is to investigate what information from a user or a compiler is required for the efficient execution of shared memory programs on software DSM.

Section 2 describes the process of compilation and optimization. Section 3 describes the implementation of the runtime software DSM. Section 4 describes the performance evaluation with SPLASH-2. Section 5 describes related work on combining an optimizing compiler and software DSM. Section 6 gives a summary.

2. Compilation Process

Figure 1 describes the overall compilation process. The input is a shared memory program written in C extended with PARMACS [4]. PARMACS provides the primitives for task creation, shared memory allocation, and synchronization (barrier, lock, and pause). The consistency of shared memory follows the lazy release consistency (LRC) model [20]. Our compiler inserts consistency management code sequences for software DSM into a given shared memory program. The backend sequential compiler compiles the instrumented source program and links it with a runtime library.

Figure 1. Overall compilation process (from a parallel shared memory program in C + PARMACS, through the optimizing compiler, to a program instrumented with software DSM macros)

To inform the runtime system that a write happened onto a contiguous shared block, we use a pair formed by the initial address and the size of the block. We call this pair a (shared) write commitment. Besides the start address and the size, a write commitment also requires the written contents of the block. Therefore, we place a write commitment after the corresponding shared write operations. A single write commitment can represent many shared writes onto a large contiguous region. When there are succeeding write commitments with the same parameters, we can eliminate all but the last one.

2.1. Shared Write Detection

The goal of our optimizing compiler is to insert valid write commitments and to decrease the number of write commitments as much as possible. First we have to enumerate all shared memory accesses in a given shared memory program. Since the input program is written in C, a shared address may be contained in a pointer variable and may be passed across procedure calls.

We have applied interprocedural points-to analysis [14, 31] to shared write detection. Interprocedural points-to analysis calculates the symbolic locations that variables may point to. Variables and heap locations are represented with a location set, a tuple of a symbolic base address, an offset, and a stride. The compiler interprocedurally calculates points-to relations among location sets using a depth-first traversal of the call graph. We track the return values of the shared memory allocation primitive (G_MALLOC). We insert a write commitment after a write operation that uses shared address values.

We adopted interprocedural points-to analysis because of the following merits: succeeding optimization passes can perform code motion using pointer information, and precise shared pointer information can decrease the costs of the redundancy elimination pass.

Points-to analysis represents all variables as memory locations. This is a conservative assumption in C. When an input program contains unions or type castings, they may generate false alias information, which takes many iterations to converge. We assume that an input program is type-safe with respect to pointer values, that is, pointer values are not conveyed through non-pointer locations. In points-to analysis, we only record pointer assignments into pointer-type locations. This assumption prevents generating false alias relations in a program with complex structures.
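To make the role of interprocedural analysis concrete, the fragment below is a hypothetical example (not taken from SPLASH-2): a region allocated with G_MALLOC is passed through a pointer into a helper procedure, and the compiler must discover that the write inside the callee may target shared memory so that a write commitment is placed there. The WC macro again stands for the inserted write commitment; both macro definitions here are stand-ins for illustration only.

/* Hypothetical example: a shared pointer crosses a procedure boundary.
 * Interprocedural points-to analysis must discover that 'dst' inside
 * init_column() may point to the G_MALLOC'ed region, so the write in the
 * callee is a shared write and needs a write commitment (WC). */
#include <stdlib.h>

#define G_MALLOC(sz) malloc(sz)         /* stand-in for the PARMACS macro  */
#define WC(addr, n)                     /* stand-in: inserted write commitment */

static void init_column(double *dst, int n, double v)
{
    for (int i = 0; i < n; i++)
        dst[i] = v;                     /* shared write via pointer 'dst'   */
    WC(dst, n);                         /* commitment placed in the callee  */
}

int main(void)
{
    int n = 1024;
    double *a = G_MALLOC(n * sizeof(double));   /* return value tracked as shared   */
    init_column(a, n, 0.0);                     /* shared address passed by pointer */
    free(a);
    return 0;
}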
2.2. Redundancy Elimination

In the release consistency model, a shared write does not have to be transmitted to other nodes until the node which issued the shared write reaches a synchronization. Therefore, it is valid to place a write commitment anywhere from the corresponding shared write to the first synchronization thereafter. We use this flexibility to remove redundant write commitments. For example, let us look at the following code sequence from LU:

a[ii][jj] = ((double)lrand48())/MAXRAND;
if (i == j) a[ii][jj] *= 10;

Suppose that a[ii][jj] is shared. It is valid to insert write commitments after both assignments. However, if we delay the first write commitment until after the conditional, the write commitment within the conditional is redundant. When we denote a write commitment as WC, we obtain:

a[ii][jj] = ((double)lrand48())/MAXRAND;
if (i == j) a[ii][jj] *= 10;
WC(&a[ii][jj], 1);

Note that this holds even if the order of the assignment and the conditional is the opposite.

This optimization can be formalized as redundancy elimination [8, 27] of write commitments. Here we represent a statement in a procedure as i. We can consider i to be a node of the control flow graph (CFG) of the procedure. For simplicity, we fix a write commitment with the same address and the same size. From the result of points-to analysis, we obtain the following logical constants for each statement i:

COMP_i: the statement i issues the shared write.
TRANS_i: the statement i propagates information about the shared write.

TRANS_i is false when the statement i is a synchronization primitive or when it modifies the parameters of the write commitment. We can calculate the following logical data flow variables from these constants:

Availability: on all paths which precede the statement i, the shared write is issued.
Anticipatability: on all paths which succeed the statement i, the shared write is issued.

To minimize the number of write commitments, we place write commitments only where the shared write is available, the shared write is not available on one of the succeeding paths, and the shared write is not anticipatable. We represent availability before and after execution of the statement i as AVIN_i and AVOUT_i. Similarly, we represent anticipatability as ANTIN_i and ANTOUT_i.
INSERT_i is a variable which means that we actually place the write commitment after the statement i. These variables are calculated by the data flow equations in Figure 2. The primitives pred(i) and succ(i) represent the sets of statements preceding and succeeding the statement i.

Figure 2. Data flow equations to remove redundant write commitments:

  AVIN_i   = ∧_{p ∈ pred(i)} AVOUT_p
  AVOUT_i  = COMP_i ∨ (TRANS_i ∧ AVIN_i)
  ANTOUT_i = ∧_{s ∈ succ(i)} ANTIN_s
  ANTIN_i  = COMP_i ∨ (TRANS_i ∧ ANTOUT_i)
  INSERT_i = AVOUT_i ∧ ¬(∧_{s ∈ succ(i)} AVOUT_s) ∧ ¬ANTOUT_i

To compute interprocedurally, we reflect AVOUT at the exit of the callee procedure to COMP at the call site in the caller procedure. When the availability of the callee cannot be propagated to the caller, we insert write commitments at the exit of the callee. We call a procedure which is called recursively or through function pointers an open procedure [7]. An open procedure does not report availability to its call sites; therefore, we can consider the call graph acyclic. The compiler simply calculates interprocedural availability with a bottom-up traversal of the call graph. If we want more precise elimination, the compiler can also traverse the call graph in a depth-first manner, which is not implemented yet.
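The equations in Figure 2 are ordinary bit-vector data flow equations and can be solved by iterating to a fixed point over the CFG (availability forward, anticipatability backward). The sketch below is a minimal intraprocedural illustration for a single write commitment; the Stmt encoding and the hard-coded CFG (the LU example above) are assumptions made for the sketch, not the paper's implementation, which uses interval analysis and works interprocedurally.

/* Minimal fixed-point solver for the equations of Figure 2, for one write
 * commitment.  Forward pass: availability; backward pass: anticipatability;
 * INSERT then marks where the commitment is actually placed.
 * The CFG encodes:  s0: a[ii][jj] = ...;  s1: if (i == j)
 *                   s2: a[ii][jj] *= 10;  s3: statement after the 'if'. */
#include <stdbool.h>
#include <stdio.h>

#define NSTMT 4
#define MAXE  4

typedef struct {
    int  npred, pred[MAXE];
    int  nsucc, succ[MAXE];
    bool comp;    /* statement issues the shared write                       */
    bool trans;   /* statement propagates it (not a sync, parameters intact) */
} Stmt;

int main(void)
{
    Stmt s[NSTMT] = {
        /* s0 */ {0, {0},    1, {1},    true,  true},
        /* s1 */ {1, {0},    2, {2, 3}, false, true},
        /* s2 */ {1, {1},    1, {3},    true,  true},
        /* s3 */ {2, {1, 2}, 0, {0},    false, true}
    };
    bool avin[NSTMT] = {0}, avout[NSTMT] = {0}, antin[NSTMT] = {0}, antout[NSTMT] = {0};
    bool changed = true;

    while (changed) {                       /* iterate both passes to a fixed point */
        changed = false;
        for (int i = 0; i < NSTMT; i++) {   /* forward: availability */
            bool in = true;
            for (int k = 0; k < s[i].npred; k++) in = in && avout[s[i].pred[k]];
            if (s[i].npred == 0) in = false;          /* nothing available at entry */
            bool out = s[i].comp || (s[i].trans && in);
            if (in != avin[i] || out != avout[i]) changed = true;
            avin[i] = in; avout[i] = out;
        }
        for (int i = NSTMT - 1; i >= 0; i--) {  /* backward: anticipatability */
            bool out = true;
            for (int k = 0; k < s[i].nsucc; k++) out = out && antin[s[i].succ[k]];
            if (s[i].nsucc == 0) out = false;         /* nothing anticipated at exit */
            bool in = s[i].comp || (s[i].trans && out);
            if (in != antin[i] || out != antout[i]) changed = true;
            antin[i] = in; antout[i] = out;
        }
    }
    for (int i = 0; i < NSTMT; i++) {       /* INSERT_i as in Figure 2 */
        bool all_succ_avail = true;
        for (int k = 0; k < s[i].nsucc; k++)
            all_succ_avail = all_succ_avail && avout[s[i].succ[k]];
        if (s[i].nsucc == 0) all_succ_avail = false;
        bool insert = avout[i] && !all_succ_avail && !antout[i];
        printf("s%d: INSERT = %d\n", i, insert);   /* prints INSERT = 1 only at s3 */
    }
    return 0;
}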
2.3. Merging Multiple Write Commitments

A write commitment can handle shared write operations onto a contiguous region. For example, let us look at the following code sequence in LU:

for (i = 0; i < n; i++) a[i] += alpha*b[i];

Suppose that a is a shared pointer. Instead of inserting a write commitment into the innermost loop, we can generate:

for (i = 0; i < n; i++) a[i] += alpha*b[i];
WC(a, n);

This code generation has two merits. First, the consistency management overhead is reduced because the write commitment is hoisted out of the loop. Second, the runtime system can utilize the size information for message vectorization.

To combine multiple write commitments, it is convenient to represent a sequence of write commitments as a (shared) write set. A write set W(f, s, C) is a tuple such that f is the start address of a write commitment, s is a size, and C is a set of inequalities which generate write commitments. The inequalities in C represent the induction variables of the loops enclosing the write commitment. A data flow variable takes a set of write sets, and the logical operations in the above data flow equations are then considered as set operations. Just after points-to analysis, each write set includes only one write commitment, i.e., s = 1 and C = ∅.

We use interval analysis [9, 5] to calculate the data flow equations. In interval analysis, the CFG is represented hierarchically with interval (i.e., loop) structures. When the summary of an interval is propagated outward, inequalities which represent induction variables are added to C. We now describe the optimizing methods that combine multiple write commitments using write sets.

Coalescing: This is applicable when write commitments onto contiguous locations are issued in a loop. Suppose a write set W(f(i), s, C), where the induction variable i has an increment value c. If f(i + c) = f(i) + s, we can replace i with its initial value, multiply s by the number of iterations, and remove the inequalities about i from C. For the above example,

  W(&a[i], 1, {0 ≤ i < n})  becomes  W(a, n, ∅)

Coalescing is applicable even when the index variable is only continuous. For example, let us look at the following code sequence in Radix:

for (i = key_start; i < key_stop; i++) {
    this_key = key_from[i] & bb;
    this_key = this_key >> shiftnum;
    tmp = rank_ff_mynum[this_key];
    key_to[tmp] = key_from[i];
    rank_ff_mynum[this_key]++;
} /* i */

Suppose key_to points to shared addresses. The variables rank_ff_mynum[this_key] are incremented by one whenever key_to[tmp] is written. Therefore, we can coalesce the write commitments using the initial and final values of rank_ff_mynum[this_key].

Fusion: We can also merge write commitments originating in different statements of the program. We represent this operation with a binary operator "⊕". For example, let us look at the following code sequence in FFT:

for (i = 0; i < n1; i++) {
    x[2*i] /= N;
    x[2*i+1] /= N;
}

Suppose x points to a shared address. With W1 = W(&x[2*i], 1, ∅) and W2 = W(&x[2*i+1], 1, ∅),

  W1 ⊕ W2 = W(&x[2*i], 2, ∅)

Redundant index elimination: When the start address of a write commitment is a constant, we can delegate to the write commitment with the maximal size. If we can detect the maximum, the index variable is redundant. We can eliminate redundant indexes using Fourier-Motzkin elimination [11], which is also applicable to nonlinear but monotonic expressions. For example, for a write set in FFT whose size is a monotonic expression in an index q, we can eliminate q using the monotonicity of 2^q and the bound on q, and obtain a single write set covering the maximal region.

The names coalescing and fusion come from the similarity to loop transformations. When the dimension of the inequalities in C is decreased, the dimension of the generated loop of write commitments is decreased. When the summary of an interval is computed, we apply coalescing and redundant index elimination to write sets. Fusion is applied to the computation of set unions in the data flow equations. When a write set is propagated outward from a loop without coalescing or index elimination, we add inequalities about the loop indexes into C. This corresponds to fission (or distribution) in loop transformations. Fission does not reduce the number of issued write commitments but improves memory access locality. Along the data flow computation in interval analysis, the compiler repeatedly applies Fourier-Motzkin elimination to the expressions in innermost loops. We use the memoization [1] technique, which stores and reuses results computed before.
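As a concrete reading of the coalescing rule, the sketch below checks the condition f(i + c) = f(i) + s for an affine address expression f(i) = base + stride·i and, when it holds, collapses the per-iteration commitments into one. The WriteSet structure and its fields are invented for this sketch; the compiler's real representation also carries the symbolic inequality set C.

/* Sketch of the coalescing test for an affine address expression
 * f(i) = base + stride * i, written with element size 'size' in a loop
 * whose induction variable advances by 'step' over 'iters' iterations.
 * If f(i + step) == f(i) + size, i.e. stride * step == size, the
 * per-iteration commitments collapse into a single W(base, size*iters, {}). */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    long base;     /* symbolic start address f(i0)                */
    long stride;   /* address increment per unit of i             */
    long size;     /* size of one write commitment (bytes)        */
    long step;     /* increment c of the induction variable       */
    long iters;    /* number of iterations of the enclosing loop  */
} WriteSet;

/* Returns true and fills *out when the loop's commitments can be coalesced. */
static bool coalesce(const WriteSet *w, WriteSet *out)
{
    if (w->stride * w->step != w->size)   /* f(i+c) != f(i) + s: keep per-iteration WCs */
        return false;
    out->base   = w->base;                /* start address of the merged region  */
    out->size   = w->size * w->iters;     /* one commitment covering the region  */
    out->stride = 0;
    out->step   = 0;
    out->iters  = 1;
    return true;
}

int main(void)
{
    /* "for (i = 0; i < n; i++) a[i] += alpha*b[i];" with 8-byte doubles */
    WriteSet per_iter = { .base = 0x1000, .stride = 8, .size = 8, .step = 1, .iters = 1024 };
    WriteSet merged;
    if (coalesce(&per_iter, &merged))
        printf("WC(base=0x%lx, size=%ld)\n", merged.base, merged.size);
    return 0;
}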
3. Target Software DSM

We implemented a runtime library of ADSM on a Fujitsu AP1000+. The AP1000+ has dedicated hardware which executes remote block transfer operations (the put/get interface [18]). We assume that point-to-point message order is preserved.

Formerly, we had been using a history-based runtime system of lazy release consistency [28]. That implementation stores write commitments as a write history. When a synchronization primitive is issued, the page contents are written back to the page-home. This corresponds to a software emulation of automatic update release consistency (AURC) [19]. A diff-based implementation compares whole page contents [20]; the history-based implementation can avoid this when the compiler successfully eliminates and coalesces the write commitments. However, the following two problems exist. First, when the compiler cannot optimize, history management introduces a large runtime overhead. Second, we handle logical timestamps between each synchronization as in LRC and AURC, so frequent synchronization causes long synchronization messages and growing synchronization costs.

This time, we have implemented a new page-based runtime system. Its basic design is similar to that of SoftFLASH [15] with the delayed invalidate release consistency (DIRC) model [12]. We use the write commitments for message vectorization.

3.1. Basic Design

Shared memory is managed by pages. Each page has a page-home node, and the user can specify which node it is. Each node manages the following bit tables, each with one entry per shared page. The valid bit table indicates that the page contents are valid. The dirty bit table indicates that the node has written into the page within the current synchronization interval [20]. Each node also manages the following bit table with one entry per node: the acknowledge table indicates that the node has written into a page of the corresponding page-home node. Synchronization tags of locks and pauses are handled by specified synchronization-home (i.e., lock-home or pause-home) nodes. Each lock and pause has its own dirty bit table.

We describe the behavior of the runtime system for each primitive. When a write commitment is issued, the written memory contents are sent to the page-home node with a put operation. The size parameter of the write commitment corresponds to the length of the block transfer. The page-home node is recorded in the acknowledge table. At an acquire operation, the node receives the dirty bit table from the lock-home processor. The obtained dirty bit table is applied to the valid bit table. The size of synchronization messages is limited by the dirty bit table size, because time information is not utilized at synchronization. However, if a node acquires the same lock again, a page may be invalidated even when the page has not been written between the lock acquisitions. At a release operation, the node contacts the nodes recorded in the acknowledge table and confirms that all sent messages have arrived at their destinations. Then, the node sends the dirty bit table to the lock-home node. When a page fault occurs, the page contents are copied from the page-home by a get operation. At a barrier operation, the following steps are executed:

1. Each node confirms that all the preceding page-home updates have completed.
2. All nodes send their own dirty bit tables to the master node.
3. The master merges the sent dirty bit tables and broadcasts the merged one.
4. All nodes invalidate their copies using the sent dirty bit table.
5. Each node clears its dirty bit table and the dirty bit tables of the synchronization tags which it manages.

Communications at page faults and write commitments are handled asynchronously. Acquire and release operations are serialized by sending explicit messages to the synchronization-home nodes. Currently we use CellOS on the AP1000+. CellOS does not provide a signal mechanism to users; therefore, shared memory accesses are not handled by the virtual memory mechanism. Instead, they are executed by code sequences which check the valid bit tables. The optimizing compiler inserts this code sequence before each shared memory access. The compiler also inserts message polling [29].
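A schematic rendering of the per-primitive behavior described above is sketched below. The function names (dsm_write_commit, dsm_read_check, dsm_barrier, put, get) and the table layout are assumptions made for illustration; they are not the AP1000+ runtime's actual interface, and synchronization-home messaging and error handling are omitted.

/* Schematic sketch of the page-based runtime actions of Section 3.1
 * (hypothetical names; put()/get() stand for the AP1000+ remote block
 * transfer primitives). */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define NPAGES    1024
#define NNODES    64
#define PAGE_SIZE 4096

static bool valid[NPAGES];   /* local copy of the page is valid               */
static bool dirty[NPAGES];   /* this node wrote the page in this interval     */
static bool ack[NNODES];     /* page-homes we have updated since last release */

/* Stubs for the remote block transfer interface. */
static void put(int node, void *remote, const void *local, size_t len)
{ (void)node; (void)remote; (void)local; (void)len; }
static void get(int node, void *local, const void *remote, size_t len)
{ (void)node; (void)local; (void)remote; (void)len; }

static int   page_of(const void *addr) { return (int)(((uintptr_t)addr / PAGE_SIZE) % NPAGES); }
static int   page_home(int page)       { return page % NNODES; }
static void *home_addr(void *a)        { return a; }   /* placeholder address mapping */

/* Write commitment: push the written block to its page-home. */
void dsm_write_commit(void *addr, size_t size)
{
    int page = page_of(addr);
    put(page_home(page), home_addr(addr), addr, size);  /* message-vectorized update     */
    dirty[page] = true;
    ack[page_home(page)] = true;                         /* remember for the next release */
}

/* Read check: fetch the page from its home if the local copy is invalid. */
void dsm_read_check(void *addr)
{
    int page = page_of(addr);
    if (!valid[page]) {
        get(page_home(page), addr, home_addr(addr), PAGE_SIZE);
        valid[page] = true;
    }
}

/* Barrier: after the master has merged the dirty bit tables, invalidate the
 * written pages and clear the local dirty and acknowledge tables. */
void dsm_barrier(const bool merged_dirty[NPAGES])
{
    for (int p = 0; p < NPAGES; p++) {
        if (merged_dirty[p]) valid[p] = false;
        dirty[p] = false;
    }
    memset(ack, 0, sizeof ack);
}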
3.2. Protocol Selection at Write Commitment

The runtime system described above provides a write-invalidate protocol. By modifying the behavior at a write commitment, we can select two other protocols [26, 25] at each write commitment.

Broadcast: At a write commitment, the writing node sends the written contents to all nodes. The node does not set the dirty bit table entry.

Home only: The writer updates the page-home without making a local copy. This is achieved by omitting the valid bit table check for the corresponding shared write.

The broadcast protocol can reduce communication latency and alleviate false sharing. Broadcast is also useful for efficiently executing a program which is not properly labeled [16]. At the release operation after broadcasting, the sender node must wait for acknowledgments from all nodes. The home only protocol can reduce page fault traffic at fetch-on-write. The contents of the page and the state of the valid bit table entry are temporarily inconsistent until the succeeding synchronization, so when a home only write and ordinary page accesses occur on the same page, this may cause incorrect page contents. We therefore introduce the home only acknowledge table, which records the page-home node of home only write commitments. When a page fault occurs, the node checks this table and waits for an acknowledgment from the page-home node.

To perform the protocol optimization, we have manually specified the type of the write commitments in the bottleneck parts of the generated source programs. When we implement the home only protocol using a virtual memory mechanism, we have to explicitly check the valid bit table at conflicting writes to avoid frequent page faults.
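One way to picture the manual protocol specification is as a per-commitment flag passed to the runtime entry point, as in the hedged sketch below. The enum, the function name, and the Radix-flavored usage are invented for illustration; the paper itself only states that the type of a write commitment is specified manually in the generated source.

/* Hypothetical per-commitment protocol flag (illustration only; not the
 * paper's actual interface). */
#include <stddef.h>
#include <stdio.h>

typedef enum { WC_INVALIDATE, WC_BROADCAST, WC_HOME_ONLY } wc_proto_t;

/* Stub: a real runtime would send the block to the page-home, to all nodes,
 * or update the home without keeping a local copy, depending on 'proto'. */
static void wc_proto(void *addr, size_t size, wc_proto_t proto)
{
    printf("WC addr=%p size=%zu proto=%d\n", addr, size, (int)proto);
}

int main(void)
{
    int key_to[256] = {0};
    key_to[42] = 7;                /* e.g. a permutation write as in Radix */
    wc_proto(&key_to[42], sizeof key_to[42], WC_HOME_ONLY);
    return 0;
}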
4. Performance Study with SPLASH-2

We used three kernels (LU-Contig, Radix, FFT) and five applications (Barnes, Raytrace, Water-Nsq, Water-Sp, Ocean) from SPLASH-2.

4.1. Compilation Time

For redundancy elimination, we calculated availability with a bottom-up traversal of the call graph and calculated anticipatability intraprocedurally. We show the compilation time of each program in Table 1. The compiler was run on a Sun SPARCstation 20 (with a 50 MHz SuperSPARC) under SunOS 4.1.3. "Scalar dataflow" represents the time to detect induction variables. Without the type-safe assumption, points-to analysis takes from 1.4 to 4.2 times longer for programs with structures containing pointers (Barnes, Raytrace, and Water-Sp) and for a program with pointer casting (Ocean).

Table 1. Compilation time of SPLASH-2 (in seconds)

4.2. Runtime System

We show the problem size of each program and the sequential execution time on one node in Table 2. Each node of the AP1000+ consists of a 50 MHz SuperSPARC (20 KB I-cache and 16 KB D-cache) and 16 MB of memory. The nodes are linked by a 2D torus network whose bandwidth is 25 MB/s per link. The small problem size of Ocean is due to the limited physical memory size.

Table 2. Input problem size and sequential execution time (in seconds)

  program     problem size            sequential
  LU-Contig   1024x1024 doubles          115.67
  Radix       1M integer keys              4.32
  FFT         64K complex doubles          2.10
  Barnes      16K bodies                  54.68
  Raytrace    balls4, 128x128 pixels     349.38
  Water-Nsq   4096 molecules             800.08
  Water-Sp    4096 molecules              88.37
  Ocean       130x130 ocean                7.09

The page table checking is implemented in software. If we use a virtual memory mechanism, there is no checking overhead when the page is valid. Coalescing and redundancy elimination are also applicable to the software page table checking; we manually applied redundancy elimination to the checking code using an interprocedural algorithm similar to that for write commitments. We selected a 4 KB page size for the kernels and 1 KB for the applications. We used gcc 2.7.2 (optimization level -O2) as the backend compiler.

We modified the source codes of FFT and Raytrace. The transpose operation of the original FFT is written so that a receiver reads parts of the array, but the page-home nodes of those parts are not the receivers but the senders. This causes severe false sharing. We rewrote the procedure Transpose so that a sender writes to the page-home of the receivers. In the original Raytrace, lock acquisition for the ray ID is a bottleneck for the execution; this ID is not used for any actual computation, so we removed this lock operation. For each program, we specified a page-home and a synchronization-home according to the optimization hints of SPLASH-2. We applied protocol optimization to Radix, FFT, Barnes, and Raytrace.

In Figure 3, we show the effects of compiler optimization on 32-node execution.

Figure 4. Effects of protocol optimization on Radix (left, up to 64 nodes) and FFT (right, up to 128 nodes)

The left part of Figure 4 shows the speedup ratio of Radix; the plotted variants respectively mean executions without the broadcast protocol, without coalescing, and without the home only protocol. Though the write commitments in the innermost loop cause a large overhead, this part can be parallelized. Without the home only protocol, the performance saturates over 16 nodes because of heavy traffic. The broadcast protocol is also effective over 16 nodes. The right part shows the speedup ratio of FFT. "Orig" means the execution of the original SPLASH-2 code. In FFT, the code restructuring of Transpose and protocol selection raise the maximal speedup ratio from 1.49 to 18.1.

In Figure 5, we show the speedup ratio of the programs with compiler optimization and protocol selection.

Figure 5. Speedup ratio up to 16 nodes (left) and up to 128 nodes (right)

Because of the low overheads of our runtime system and the utilization of the communication bandwidth, Raytrace, LU-Contig, Water-Nsq, and Water-Sp show high speedup ratios and good scalability. In both Radix and FFT, appropriate protocol selection is crucial for scalability. The performance of Barnes saturates over 32 nodes. In Radix and Barnes, the principal overhead is synchronization because of the problem decomposition. Only Ocean slows down, owing to the page fault handling, which is an overhead of the runtime system; this is mainly because of the small problem size. As a whole, both compiler optimization and appropriate protocol specification are essential for scalability on the input problems.

5. Related Work

The computational power of recent machines enables the application of interprocedural analysis to practical problems (e.g., interprocedural points-to analysis [14, 31], interprocedural array data flow analysis [17], and interprocedural partial redundancy elimination [2]). So far, these advanced analyses have not been used for explicitly parallel shared memory programs.

Existing research about cooperation between optimizing compilers and software DSM can be divided into three kinds.
The first is that a parallelizing compiler targets software DSM [21, 13, 24]. For parallelizable programs, the compiler can use precise communication information. Message vectorization is applicable to regular communication. The compiler can use the code generation techniques of the inspector/executor mechanism, while software DSM does not require complex code generation for multi-level indirection. The runtime library has the benefit of message vectorization, synchronization messages, and support for sender-initiated communication. However, this policy is only applicable to automatically parallelizable programs.

The second is that a programmer declares shared data and the association between data and synchronization [3, 10, 30, 6]. The programmer can select an appropriate protocol for each datum, and the runtime system can utilize application-specific information. Since this model hides the memory model from users, the system does not suffer from false sharing. However, the message packing/unpacking mechanism must be implemented efficiently, and users also have to adjust parallel programs to the provided programming model.

The third is that a compiler directly analyzes a shared memory program. Our system and Shasta [29] are classified in this kind. The Shasta compiler uses two optimizing techniques to reduce software overheads. One is a special flag value which indicates that the content is invalid: if the loaded value is not equal to the flag value, we know that the content is valid without using the page table checking. The other is batching, which combines multiple checks for the same entry of the directory. These optimizations are intraprocedural. Since they do not perform loop-level optimization, their system requires both high network bandwidth and low latency.

6. Summary

We have shown that compiler support enables an efficient software DSM which can utilize communication bandwidth as much as possible. We designed an interface between a shared memory program and a runtime library, and formulated the coalescing and redundancy elimination problem for write commitments. Our framework enables applying interprocedural optimizations to a shared memory program. We have described the interprocedural optimization scheme and an efficient implementation of the runtime system. We have shown that appropriate write protocol selection is one important piece of application-specific information for an efficient software DSM.

The redundancy elimination scheme in this paper decreases the number of write commitments as much as possible and makes the size of each write commitment as large as possible. Therefore, it issues write commitments as late as possible. This policy is suitable for the runtime system on the AP1000+, since the AP1000+ has a fast communication network. However, it is not always optimal, especially on machines with slower communication facilities. Our future work is to reflect this tradeoff of the platform in the data flow equations.

Acknowledgments

We would like to thank the referees for their valuable comments and advice. This work is partly supported by the Advanced Information Technology Program (AITP) of the Information-technology Promotion Agency (IPA), Japan.

References

[1] H. Abelson, G. J. Sussman, and J. Sussman. Structure and Interpretation of Computer Programs. The MIT Press, Cambridge, MA, 1985.
[2] G. Agrawal, J. Saltz, and R. Das. Interprocedural Partial Redundancy Elimination and its Application to Distributed