Optimal orientation of striped states in the quantum Hall system against external modulation
optimal fingerprint method

The Optimal Fingerprint Method (OFM) is a technique used in image processing and computer vision for fingerprint recognition. The goal of OFM is to extract and represent fingerprint images in a way that allows efficient and accurate recognition and matching.

The basic idea behind OFM is to divide the fingerprint image into small regions or blocks and then extract features from each region. These features can include the orientation and frequency of the fingerprint ridges, as well as the presence or absence of features such as ridges and valleys. Once the features have been extracted from each region, they are combined into a vector or matrix that represents the entire fingerprint image. This vector or matrix is then used as input to a recognition algorithm, which compares it to other fingerprints in a database to determine whether there is a match.

One advantage of OFM is that it is relatively simple and efficient and can be implemented with standard image processing techniques. It can also handle a wide variety of fingerprint images, including ones that are distorted or noisy.

However, OFM also has limitations. It may not accurately represent fingerprints with very complex or irregular patterns, and it may be vulnerable to attacks such as spoofing or presentation attacks.

Overall, the Optimal Fingerprint Method is a useful technique for fingerprint recognition, but its limitations and potential vulnerabilities should be considered when using it in practical applications.
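To make the block-wise extraction step concrete, the sketch below estimates a ridge-orientation map with the common gradient-based estimator. It is a minimal illustration written for this note rather than code from any particular OFM implementation; the block size and the synthetic test image are arbitrary choices.

```python
import numpy as np

def block_orientations(image, block=16):
    """Estimate a ridge-orientation map on a block grid.

    image: 2-D float array (grayscale fingerprint patch).
    Returns one angle in radians per block.
    """
    gy, gx = np.gradient(image.astype(float))        # pixel gradients
    h, w = image.shape
    rows, cols = h // block, w // block
    theta = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            gxx = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            gxy = np.sum(2.0 * gx[sl] * gy[sl])
            # Ridge direction is perpendicular to the mean gradient direction.
            theta[i, j] = 0.5 * np.arctan2(gxy, gxx) + np.pi / 2.0
    return theta

# Toy usage with a synthetic striped patch instead of a real fingerprint:
img = np.sin(np.linspace(0, 8 * np.pi, 128))[None, :] * np.ones((128, 1))
print(block_orientations(img, block=32))
```

Flattening the resulting orientation map (optionally together with per-block ridge-frequency estimates) gives the kind of feature vector that the description above feeds to a matching algorithm.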
AMD RAID Installation Guide

1. AMD BIOS RAID Installation Guide
1.1 Introduction to RAID
1.2 RAID Configurations Precautions
1.3 UEFI RAID Configuration
2. AMD Windows RAID Installation Guide
2.1 Create a RAID volume under Windows
2.2 Delete a RAID array under Windows

1. AMD BIOS RAID Installation Guide
The BIOS screenshots in this guide are for reference only and may differ from the exact settings for your motherboard. The actual setup options you see depend on the motherboard you purchase. Please refer to the product specification page of the model you are using for information on RAID support. Because the motherboard specifications and the BIOS software might be updated, the content of this documentation is subject to change without notice.
The AMD BIOS RAID Installation Guide explains how to configure RAID functions using the onboard FastBuild BIOS utility in the BIOS environment. After you make a SATA driver diskette, press [F2] or [Del] to enter BIOS setup and set the option to RAID mode by following the detailed instructions of the “User Manual” on our support CD; then you can start to use the onboard RAID Option ROM Utility to configure RAID.

1.1 Introduction to RAID
The term “RAID” stands for “Redundant Array of Independent Disks”, a method of combining two or more hard disk drives into one logical unit. For optimal performance, please install identical drives of the same model and capacity when creating a RAID set.

RAID 0 (Data Striping)
RAID 0, called data striping, optimizes two identical hard disk drives to read and write data in parallel, interleaved stacks. It improves data access and storage because it doubles the data transfer rate of a single disk while the two hard disks perform the same work as a single drive at a sustained data transfer rate.
WARNING!! Although RAID 0 can improve access performance, it does not provide any fault tolerance. Hot-plugging any HDD of a RAID 0 set will cause data damage or data loss.

RAID 1 (Data Mirroring)
RAID 1, called data mirroring, copies and maintains an identical image of the data from one drive on a second drive. It provides data protection and increases fault tolerance for the entire system: if one drive fails, the disk array management software directs all applications to the surviving drive, which contains a complete copy of the data.

RAID 5 (Block Striping with Distributed Parity)
RAID 5 stripes data and distributes parity information across the physical drives along with the data blocks. This organization increases performance by accessing multiple physical drives simultaneously for each operation, and increases fault tolerance by providing parity data. In the event of a physical drive failure, data can be recalculated by the RAID system from the remaining data and the parity information. RAID 5 makes efficient use of hard drives and is the most versatile RAID level. It works well for file, database, application and web servers.

RAID 10 (Stripe Mirroring)
RAID 0 drives can be mirrored using RAID 1 techniques, resulting in a RAID 10 solution for improved performance plus resiliency. The controller combines the performance of data striping (RAID 0) and the fault tolerance of disk mirroring (RAID 1). Data is striped across multiple drives and duplicated on another set of drives.

1.2 RAID Configurations Precautions
1. Please use two new drives if you are creating a RAID 0 (striping) array for performance. It is recommended to use two SATA drives of the same size. If you use two drives of different sizes, the smaller capacity hard disk will be the base storage size for each drive. For example, if one hard disk has an 80GB storage capacity and the other hard disk has 60GB, the maximum storage capacity for the 80GB drive becomes 60GB, and the total storage capacity for this RAID 0 set is 120GB.
2. You may use two new drives, or an existing drive and a new drive, to create a RAID 1 (mirroring) array for data protection (the new drive must be the same size as or larger than the existing drive). If you use two drives of different sizes, the smaller capacity hard disk will be the base storage size. For example, if one hard disk has an 80GB storage capacity and the other hard disk has 60GB, the maximum storage capacity for the RAID 1 set is 60GB.
3. Please verify the status of your hard disks before you set up your new RAID array.
WARNING!! Please back up your data before you create RAID functions. During RAID creation, the system will ask whether you want to “Clear Disk Data”. It is recommended to select “Yes” so that your future data building will operate in a clean environment.

1.3 UEFI RAID Configuration
Setting up a RAID array using the UEFI Setup Utility and installing Windows

STEP 1: Set up UEFI and create a RAID array
1. While the system is booting, press the [F2] or [Del] key to enter the UEFI setup utility.
2. Go to Advanced\Storage Configuration.
3. Set “SATA Controller(s)” to <RAID>.
4. Go to Advanced\AMD PBS and set “NVMe RAID mode” to <Enabled>.
5. Go to Boot\CSM and set “CSM” to <Disabled>.
6. Press [F10] to save your changes and exit, and then enter the UEFI Setup again.
7. After saving the previously changed settings via [F10] and rebooting the system, the “RAIDXpert2 Configuration Utility” submenu becomes available.
8. Go to Advanced\RAIDXpert2 Configuration Utility\Array Management, and then delete the existing disk arrays before creating a new array. Even if you have not configured any RAID array yet, you might have to use “Delete Array” first.
9. Go to Advanced\RAIDXpert2 Configuration Utility\Array Management\Create Array.
9A. Select “RAID Level”.
9B. Select “Select Physical Disks”.
9C. Change “Select Media Type” to “SSD” or leave it at “BOTH”.
9D. Select “Check All” or enable the specific drives that you want to use in the array. Then select “Apply Changes”.
9E. Select “Create Array”.
10. Press [F10] to save and exit.
*Please note that the UEFI screenshots shown in this installation guide are for reference only. Please refer to ASRock’s website for details about each model. https:///index.asp

STEP 2: Download the driver from ASRock’s website
A. Please download the “SATA Floppy Image” driver from ASRock’s website (https:///index.asp) and unzip the file to your USB flash drive. Normally you can also use the RAID driver offered on the AMD website.

STEP 3: Windows installation
Insert the USB drive with the Windows 10 installation files, then restart the system. While the system is booting, press [F11] to open the boot menu. It should list the USB drive as a UEFI device; select it to boot from. If the system restarts at this point, open the [F11] boot menu again.
1. When the disk selection page shows up during the Windows installation process, click <Load Driver>. Do not try to delete or create any partition at this point.
2. Click <Browse> to find the driver on your USB flash drive. Three drivers must be loaded; this is the first. The folder names might look different depending on the driver package that you are using.
3. Select “AMD-RAID Bottom Device” and then click <Next>.
4. Load the second driver.
5. Select “AMD-RAID Controller” and then click <Next>.
6. After the second driver is loaded, the RAID disk will show up. Please do not forget to load the third driver.
7. Select “AMD-RAID Config Device” and then click <Next>.
8. Select unallocated space and then click <Next>.
9. Please follow the Windows installation instructions to finish the process.
10. After the Windows installation is finished, please install the drivers from ASRock’s website. https:///index.asp
11. Go to the Boot menu and set “Boot Option #1” to <Windows Boot Manager (AMD-RAID)>.

2. AMD Windows RAID Installation Guide
Caution: This chapter describes how to configure a RAID volume under Windows. It applies to the following scenarios:
1. Windows is installed on a 2.5” or 3.5” SATA SSD or HDD, and you want to configure a RAID volume with NVMe M.2 SSDs.
2. Windows is installed on an NVMe M.2 SSD, and you want to configure a RAID volume with 2.5” or 3.5” SATA SSDs or HDDs.

2.1 Create a RAID volume under Windows
1. Enter the UEFI Setup Utility by pressing <F2> or <Del> right after you power on the computer.
2. Set the “SATA Controller(s)” option to <RAID>. (If you are using NVMe SSDs for the RAID configuration, please skip this step.)
3. Go to Advanced\AMD PBS and set “NVMe RAID mode” to <Enabled>. (If you are using 2.5” or 3.5” SATA drives for the RAID configuration, please skip this step.)
4. Press [F10] to save the setting and reboot to Windows.
5. Install the “AMD RAID Installer” from the AMD website: https:///en/support. Select “Chipsets”, select your socket and chipset, and click “Submit”. Please find “AMD RAID Installer”.
6. After installing the “AMD RAID Installer”, please launch “RAIDXpert2” as administrator.
7. Find “Array” in the menu and click “Create”.
8. Select the RAID type, the disks you would like to use for the RAID, and the volume capacity, and then create the RAID array.
9. In Windows, open “Disk Management”. You will be prompted to initialize the disk. Please select “GPT” and click “OK”.
10. Right-click the “Unallocated” section of the disk and create a new simple volume.
11. Follow the “New Simple Volume Wizard” to create a new volume.
12. Wait a bit for the system to create the volume.
13. After creating the volume, the RAID is available to use.

2.2 Delete a RAID array under Windows
1. Select the array you would like to delete.
2. Find “Array” in the menu and click “Delete”.
3. Click “Yes” to confirm.
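The capacity examples in section 1.2 follow the usual rule that every member of an array contributes at most the capacity of its smallest disk. The sketch below is a small illustration written for this guide, not an AMD or ASRock utility; the RAID 5 and RAID 10 formulas are the standard definitions rather than figures taken from the text, and metadata overhead is ignored.

```python
def usable_capacity(level: int, disk_sizes_gb: list) -> float:
    """Approximate usable capacity of a RAID set in GB."""
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    if level == 0:           # striping: every member contributes one share
        return n * smallest
    if level == 1:           # mirroring: one share total
        return smallest
    if level == 5:           # one share lost to distributed parity
        return (n - 1) * smallest
    if level == 10:          # striped mirrors: half the shares
        return (n // 2) * smallest
    raise ValueError("unsupported RAID level")

# The 80GB + 60GB example from section 1.2:
print(usable_capacity(0, [80, 60]))   # 120 GB for RAID 0
print(usable_capacity(1, [80, 60]))   # 60 GB for RAID 1
```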
Research on Striped Rolling Defects on Heavily Doped <100> Polished Silicon Wafers
WANG Yun-biao, ZHANG Wei-cai, WU Yong-chao, CHEN Ya-nan
(No. 46 Research Institute of CETC, Tianjin 300220, China)
Abstract: After polishing, heavily doped <100> silicon single-crystal wafers were examined with a differential interference contrast microscope, and striped rolling defects were observed in the edge regions of the polished wafers. The formation mechanism of the striped rolling defects is explained by analyzing their relationship to the impurity distribution in the heavily doped silicon crystal and to the etching characteristics of the <100> crystal plane itself. Through process experiments, the surface microtopography of wafers polished under different process conditions was compared, and the influence of each polishing process condition on the surface striped rolling defects was analyzed. Using a three-step polishing process, polished wafers with a flat, uniform surface and no striped rolling defects at the wafer edge were obtained.
Keywords: striped rolling defects; microtopography; polished wafers
CLC number: TN305; Document code: A; Article ID: 1001-3474(2012)05-0312-04

With the continuous progress of semiconductor process technology, micromachines and microelectronic circuits place ever higher demands on silicon single-crystal substrate wafers [1,2].
‘The Boy in the Striped Pyjamas’: Orientation
Read pages 1 – 32 and answer the following as you read:

General Questions
1. When and where is the novel set?
1942 in Germany
2. Describe the type of language used.
Narrative
3. Is the story told in first, second or third person? Who is the narrator?
No, it isn’t.

Comprehension Questions
1. Explain why Bruno and his family have to move.
Because his father has a very special job that needs doing there.
2. What does Bruno know about his father’s job?
He wasn’t entirely sure what job his father did; he just knows his father is a very important soldier.
3. Thinking about the context, and the narrator, who do you think ‘The Fury’ might be?
The Jew
4. How do you think Bruno’s mother feels about moving? What evidence is there to support your answer?
She just simply thinks
5. a) Describe, in your own words, the atmosphere of the new house.
b) How does it compare to where the family lived in Berlin?
6. What is Bruno’s attitude on the family’s arrival at the new house? Why does he feel this way? Find evidence to support your answer.
7. What are we told, on both pages 1 and 17, about Bruno’s father’s attitude towards the maid? What might it suggest about his character?
8. How does Bruno feel about the soldier that he sees in his house? What makes him feel this way?
9. Thinking again about the context, what could ‘Out-With’ be?
10. Who do you think the children might be that Bruno saw outside of his window? What evidence is there to support your answer?

Vocab: Find the definition of the following terms:
a) Spluttering
b) Desolate
c) Restriction
d) Consideration
农业灾害研究 2023, 13(8)
Analysis of the biological activity and control efficacy of broflanilide against the striped flea beetle
袁家祥, 张惠娇, 詹仰进
深圳市绿之源有害生物防治有限公司, Shenzhen, Guangdong 518112
Abstract: A field efficacy trial was carried out using spray application, mainly to measure and evaluate the biological activity of broflanilide against the striped flea beetle and to analyze its efficacy and its effect on the plants. The results showed that broflanilide has a high degree of safety and a clear insecticidal effect; during application it had no adverse effect on the growth of the rape crop, and the optimal dosage was 124 g/kg. The results also showed that broflanilide has high biological activity and can effectively control the striped flea beetle; with a scientifically proportioned dosage, the control efficacy against the striped flea beetle can be further improved.
Keywords: broflanilide; striped flea beetle; biological activity; control efficacy
CLC number: S436.35; Document code: B; Article ID: 2095-3305(2023)08-0089-03

The striped flea beetle is widely distributed in China. It mainly damages vegetables such as brassicas and beans and causes considerable economic losses. The striped flea beetle is an oligophagous insect that prefers rape, Chinese cabbage, radish and similar crops. It has a strong capacity for damage, feeding mainly on the leaves of these vegetables, which hinders healthy growth, triggers pest outbreaks and ultimately kills the plants. The beetle also reproduces rapidly. Its larvae concentrate at the roots and stems of plants, damaging or even severing the root system, which accelerates the spread of soft rot, lowers vegetable yield and quality, and makes it difficult to meet consumer demand for green, pollution-free vegetables.

At present, chemical control is the main method used against the striped flea beetle. It works reasonably well, but its efficacy needs further evaluation. Studies have found that broflanilide has a certain control effect on the striped flea beetle. Field trials can further clarify the appropriate dosage, maximize the control efficacy, reduce vegetable losses and improve growers' economic returns [1]. On this basis, the biological activity of broflanilide against the striped flea beetle is analyzed here, with emphasis on the safety of broflanilide application, so as to raise the level of integrated management of the striped flea beetle while ensuring vegetable food safety.

1 Materials and methods
1.1 Test agent
Broflanilide insecticide was selected as the test agent, and a conventional insecticide was used as a control for the safety comparison.
1.2 Insect samples
The study was conducted during the peak occurrence period of the striped flea beetle in the trial field, and the study samples were striped flea beetles.
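The control efficacy evaluated in a field trial of this kind is commonly computed from pest counts taken before and after spraying in both treated and untreated plots; one standard formula is Henderson-Tilton. The excerpt does not state which formula the authors used, so the sketch below, with invented counts, is only an illustration of that conventional calculation.

```python
def henderson_tilton(t_before, t_after, c_before, c_after):
    """Corrected control efficacy (%) from pest counts.

    t_*: counts in the treated plot before/after spraying
    c_*: counts in the untreated control plot before/after spraying
    """
    survival_ratio = (t_after / t_before) * (c_before / c_after)
    return (1.0 - survival_ratio) * 100.0

# Hypothetical counts, for illustration only (not data from the study):
print(round(henderson_tilton(t_before=50, t_after=5, c_before=48, c_after=60), 1))
```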
optimal vs. optimum: an English usage analysis

Optimal vs. Optimum: A Comparative Analysis

In the English language, there are often multiple words that have similar meanings but subtle differences in usage. Two such words are “optimal” and “optimum”. While they both refer to the best or most favorable option, there are some key distinctions between the two.

The word “optimal” is an adjective that means the most desirable or best possible. It implies that there is a range of options, and the one being described as optimal is the one that offers the greatest benefit or advantage. For example, “The optimal solution to this problem would be to implement a new system.” Here, “optimal” suggests that there are other potential solutions, but the one being proposed is the best among them.

On the other hand, “optimum” is also an adjective, but it is often used in a more specific or technical context. It refers to the point or condition at which something is at its best or most efficient. For instance, “The optimum temperature for this chemical reaction is 50 degrees Celsius.” In this case, “optimum” indicates a specific value or range that is considered the most favorable for a particular process or outcome.

One way to think about the difference between “optimal” and “optimum” is that “optimal” is more subjective and depends on the specific circumstances or goals, while “optimum” is more objective and based on specific criteria or measurements. Another difference is that “optimal” can be used to describe a wide range of situations, while “optimum” is often used in more specialized fields such as science, engineering, or economics.

In some cases, the two words can be used interchangeably, but it is important to be aware of the subtle differences in meaning and usage. Using the wrong word can lead to confusion or a less precise expression of ideas. For example, saying “The optimum solution to this problem is to do nothing” might sound odd, as “optimum” typically implies a specific action or condition that is considered the best. In this case, “optimal” would be a more appropriate choice.

To further illustrate the differences between “optimal” and “optimum”, let’s consider a few examples:

- In a business context, finding the optimal marketing strategy would involve considering various factors such as target audience, budget, and competition. The goal is to identify the approach that is most likely to lead to success. On the other hand, determining the optimum inventory level would involve analyzing data such as sales trends, lead times, and carrying costs to find the level that minimizes costs while meeting customer demand.
- In a medical setting, choosing the optimal treatment plan for a patient would depend on their specific condition, medical history, and personal preferences. The doctor would aim to select the treatment that offers the best chance of recovery with the least side effects. However, when it comes to setting the optimum dosage of a medication, it would involve precise calculations based on the patient’s weight, age, and other factors to ensure the most effective and safe treatment.
- In a sports context, an athlete might strive for optimal performance by training hard, eating well, and getting enough rest. This would involve finding the right balance between different aspects of their training and lifestyle. On the other hand, a coach might look for the optimum lineup or strategy for a particular game based on the strengths and weaknesses of the team and the opponent.

While “optimal” and “optimum” are similar in meaning, they have distinct nuances that can affect their usage. Understanding these differences can help us communicate more precisely and effectively in various contexts. Whether we are discussing business, science, or any other field, choosing the right word can make a significant difference in how our ideas are understood. So, the next time you are faced with a choice between “optimal” and “optimum”, take a moment to consider the specific context and intended meaning to ensure you are using the most appropriate word.
optimal selection: English abbreviation

Optimal Selection is often abbreviated as “OS”. It refers to the process of choosing the best option or decision among various alternatives. This concept is widely used in fields such as decision theory, operations research, and management science. Optimal Selection involves evaluating different choices based on specific criteria, such as cost-effectiveness, efficiency, risk, and performance. It aims to identify the option that maximizes benefits or achieves the desired outcome while minimizing drawbacks or costs. This process typically entails thorough analysis, mathematical modeling, and sometimes simulation to assess the potential impact of each alternative. By applying rigorous methodologies, Optimal Selection helps organizations make informed decisions and allocate resources efficiently, leading to improved performance and competitive advantage.
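As a small illustration of the criterion-based evaluation described above, the sketch below ranks alternatives by a weighted score. The criteria, weights and alternatives are invented for the example; real optimal-selection studies would typically use formal methods such as mathematical programming or simulation rather than a plain weighted sum.

```python
def optimal_selection(options, weights):
    """Pick the option with the highest weighted score.

    options: name -> {criterion: normalized score, higher is better}
    weights: criterion -> relative importance (negative weight for a
             criterion that should be minimized, e.g. risk or cost)
    """
    def score(attrs):
        return sum(weights[c] * attrs.get(c, 0.0) for c in weights)
    return max(options, key=lambda name: score(options[name]))

# Hypothetical alternatives scored on cost-effectiveness, risk and performance:
alts = {
    "A": {"cost_effectiveness": 0.8, "risk": 0.3, "performance": 0.7},
    "B": {"cost_effectiveness": 0.6, "risk": 0.1, "performance": 0.9},
}
print(optimal_selection(alts, {"cost_effectiveness": 0.4, "risk": -0.2, "performance": 0.4}))
```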
描写动物园的路线图的英语作文Zoological Amble.Embark on a fascinating expedition through the rich tapestry of wildlife at our beloved zoological park, a sanctuary where nature's wonders converge. Our comprehensive guide will lead you on an unforgettable journey, traversing diverse habitats and encountering the captivating creatures that call this extraordinary realm home.Entrance and Orientation.Commence your adventure at the grand entrance, where towering trees stand as majestic sentinels. Proceed towards the orientation center, where interactive exhibits offer a glimpse into the zoo's mission of conservation, research, and education.African Savanna.Venturing deeper into the sprawling park, you will find yourself immersed in the vibrant African savanna. Observe the majestic lions, their golden manes shimmering in the sunlight, as they survey their vast domain. Gaze upon the graceful zebras, their striped coats contrasting sharply with the tawny grasslands. Watch as playful giraffes reach high into the acacia trees, their elegant necks defying gravity.Asian Wetlands.As you continue your exploration, you will encounter the tranquil Asian wetlands. Here, the gentle ripples of water create a serene symphony, luring wading birds to its shores. Watch as elegant egrets balance delicately on lily pads, their long legs extended as they search for sustenance. Marvel at the vibrant plumage of kingfishers, their iridescent feathers flashing as they dive for prey.Amazon Rainforest.Transport yourself to the heart of the Amazon rainforest, a verdant paradise teeming with life. The dense canopy overhead casts intricate patterns of light and shadow on the lush undergrowth below. Listen attentivelyfor the chorus of exotic bird calls, each one a distinct melody echoing through the humid air. Spot elusive monkeys swinging through the trees, their agile movements a testament to their arboreal prowess.Australian Outback.Journey to the rugged Australian outback, where the harsh beauty of the landscape mirrors the resilience of its inhabitants. Watch as kangaroos bound effortlessly across the plains, their powerful tails propelling them forward. Observe the solitary wombat, its heavyset frame waddling along in search of food. Marvel at the unique platypus, a curious fusion of mammalian and avian characteristics.Arctic Tundra.Encounter the frigid expanse of the Arctic tundra,where white-furred Arctic foxes camouflage seamlessly against the snow-laden landscape. Watch as playful polar bears frolic in the water, their thick coats providing insulation against the icy temperatures. Marvel at the majestic antlers of reindeer, as they graze peacefully in their icy habitat.Conclusion.As your zoological expedition draws to a close, take a moment to reflect on the incredible diversity of life that you have witnessed. From the majestic lions of the savanna to the elusive platypus of the outback, each animal embodies the intricate interconnectedness of our natural world.Let the memories of this extraordinary journey inspire you to become an ambassador for wildlife conservation, ensuring that future generations can continue to marvel at the wonders of the animal kingdom.。
暑假去餐厅打工的英语作文全文共3篇示例,供读者参考篇1My Summer Job at the Local DinerWhen summer break rolled around last year, I knew I wanted to find a job to keep myself busy and make a little spending money. After scouring the classifieds and online job boards, I landed a gig as a server at the neighborhood diner just a few blocks from my house. Little did I know that those few months waiting tables would turn into one of the most valuable experiences of my life so far.On my first day, I was a bundle of nerves as I stepped through the doors of Tommy's Diner for orientation and training. The loud chatter of the busy breakfast rush filled my ears as the smells of greasy bacon, fresh coffee, and warm pancake batter wafted through the air. Mrs. Thompson, the gruff butkind-hearted manager, handed me my uniform - a classic blue and white striped button-down shirt, black pants, and an apron covered in drink stains from many a spilled milkshake."Listen up, newbie," she barked in her raspy voice. "This here's the diet plate..." She pointed to a sad-looking slice of ground turkey covered in congealed gravy between blanched broccoli and a stale dinner roll. I gulped hard, hoping I wouldn't have to eat that myself anytime soon.The next few hours were a whirlwind as Mrs. Thompson showed me the ropes - how to properly bus tables, operate the antiquated cash register, and memorize the dizzying array of dishes on the laminated menus. By the end of my first shift, my feet were killing me and I had gotten an appetizer plate dropped squarely in my lap by a disgruntled customer. So much for beginner's luck.But like any new challenge, the more I learned and grew accustomed to the controlled chaos of the diner, the more I began to enjoy the work. Each shift brought new and interesting customers whose life stories I became privy to in snippets of overheard conversation. There was Old Mike, a cigar-smoking neighborhood fixture who came in daily at 5 AM on the dot for his short stack of pancakes and black coffee. Edna and her gaggle of blue-haired friends from the local bridge club met for a petite lunch every Thursday without fail. And then there were the working stiffs - beat cops, plumbers, and cable guys - whostopped in mid-day for a no-frills sandwich and a bottomless cup of joe.I took pride in anticipating their orders and having their regulars ready for them even before they asked. It made me feel like I had truly become part of the unshakable routine that binded Tommy's to the very fabric of our humble community.Of course, not every customer interaction was pleasant. There were the chronic complainers who seemed to delight in scrutinizing every crumb on their plates for something to gripe about. Caffeine-deprived morning grouches who'd snap their fingers impatiently at me while I rushed around with a trayful of steaming mugs. And demanding families with screaming kids whose parents let them run rampant and destroy any shred of order in my meticulously cleaned sections. I quickly learned to grow a thick skin against the rude behavior that came with the territory.But for every ill-tempered customer, there were far more kindhearted souls who renewed my faith in human decency. So many times, a quiet older man would slip me a humble tip along with heartfelt gratitude and wisdom from his years on this earth. 
Working moms with restless kids would offer an apologetic smile and patience as I attempted to juggle getting their orders correct.And young couples on dates often stayed after settling their checks just to chat with me for a while about school, movies, or where life would hopefully lead us. Those were the moments that made every headache worthwhile.In between serving tables, I mopped up spills, sorted and restocked condiments, cleaned tables, sliced pies and cakes, brewed never-ending pots of coffee, and even jumped on the grill a few times when the cook got overly swamped. Every shift brought a fresh array of challenges and responsibilities that tested both my hustle and my ability to somehow keep my cool. There were nights when I was so dead on my feet that I'd collapse on my bed still reeking of diner smells and hamburger grease. But I always woke up the next morning oddly exhilarated to do it all over again.When summer's end drew near, I was actually sad to bid farewell to my job at Tommy's. That grimy, outdated little diner had been put through its paces over the decades by an endless parade of servers just like me. Its worn floors, nicked lunch counter, and heavy oak doors held the echoes of a thousand conversations, stories, and slices of indelible neighborhood history.On my last day, Mrs. Thompson pulled me aside and handed me a grease-stained envelope with my final paycheck. "You did good kid," she said gruffly, almost cracking a smile. "Most newbies don't make it past the first month here. Stick with that strong work ethic of yours and you'll go places." As I left through those heavy oak doors, paycheck in hand, her words stuck with me like a priceless parting gift.My stint at Tommy's Diner imparted more invaluable lessons than I could possibly list. But here are some of the key takeaways I'll forever carry with me:Accountability and responsibility. When I showed up for my shift, it was on me to give 100% effort and own any mistakes I made. No more blaming others or making excuses.Customer service and people skills. Dealing with the public in a high-stress environment forces you to develop patience, listening ability, and conflict resolution like nothing else.Multitasking and prioritizing. With a million things coming at me at once, I learned how to quickly sort urgent needs from things that could wait while staying focused on executing each task thoroughly.Appreciation for hard work. Realizing what goes into an honest day's labor - being on your feet for hours, the physical toll, and the dedication required - gave me a deep respect for the working class that I'll never take for granted.So while my friends went to the beach, traveled with family, and slept away their summers, I spent mine getting an invaluable jumpstart on developing the core skills and mentality for success in the real world. Sure, I worked too hard and made too little money to actually call it a "vacation" in the traditional sense. But if Tommy's Diner taught me anything, it's that the biggest rewards in life rarely come from just sitting idle and coasting through it. No, the real prizes are reserved for those willing to roll up their sleeves, put in the sweat equity, and earn them through hard work and perseverance. And you'd better believe I wore those diner-stained shirts like badges of honor.篇2My Summer Job at the Local DinerAs the final school bell rang on that warm June afternoon, a wave of relief washed over me. 
No more classes, homework, or tests for a couple of glorious months – the summer was finally here! I had huge plans to spend these sunny days lounging bythe pool, hanging out with friends, and watching an unhealthy amount of Netflix. However, my dream of the perfect summer quickly faded when my parents dropped a bombshell – I needed to get a summer job."It will teach you responsibility and the value of a dollar," my dad lectured as I rolled my eyes. As if getting an after-school job during the year didn't already teach me those lessons. But there was no use arguing with my parents, their minds were made up. I spent the next few days scouring job listings and dropping off applications around town.To my surprise, I managed to land a job as a server at our local diner just a few blocks from my house. I had heard horror stories of rude customers and long hours on your feet, but I was desperate for cash to fund my summer adventures. My first day on the job, I woke up full of nervous energy and butterflies in my stomach. What had I gotten myself into?I arrived at the diner promptly at 7am in my freshly-pressed uniform of an obnoxiously bright yellow polo and black pants. The grizzled manager, Joe, gave me a once-over and grumbled, "Just don't drop anything, kid." He then quickly showed me around - the kitchen, the register, and the sections I'd beresponsible for. Within a couple of hours, the breakfast rush was in full swing and I was in the weeds.Customer after customer piled in, menus scattered everywhere, plates teetering precariously as I tried to balance them all on my arms. Sweat dripped down my forehead as I hurriedly took orders, delivered food, and tried to keep up with the constant stream of demands. An older couple waved me over, the man snapping, "We've been waiting 20 minutes for our check!" I scrambled to get them their bill, mentally crossing my fingers that they wouldn't stiff me on the tip.By the end of that first hellish 8-hour shift, my feet were barking and I had never been so relieved to sit down in my life. I dragged myself home, collapsed on my bed, and wondered what possessed me to take this soul-crushing job. However, a proud grin crept across my face when I opened my paycheck at the end of that first week – I had managed to earn over 200 in just a few days' work.As the summer pressed on, some days at the diner were better than others. I'll never forget the utter chaos of the Sunday breakfast crowd, folks lined up out the door for a coveted booth and plates of pancakes, eggs, and bacon. I learned to weave between tables like a pro, balancing heavy trays of food withoutspilling a drop. There were also the dreadfully slow weekday lunch shifts where I spent hours scrubbing ketchup and syrup off every surface while reruns of Judge Judy played on the TV.My personal favorite times were the weekend nights when families and friends would pack the diner for a casual dinner out.I loved observing the jovial banter and conversation that flowed over platefuls of burgers, fries, and milkshakes. An elderly couple, clearly regulars, always requested a booth by the window so the wife could read her paperback while the husband devoured his patty melt. A group of rowdy construction workers, covered in dirt and sweat, would pile in after their shift ended and trade stories over bottles of Bud.I marveled at how this simple diner seemed to be a community hub, a place where people from all walks of life could come together and simply enjoy good food and good company. 
In those moments, I felt proud to play a small role in that experience as I topped off coffees and cleared away plates. My initial fears of dealing with rude customers quickly faded as I learned that most people, when treated with kindness and respect, responded in kind.Of course, there were exceptions to that rule. I'll never forget the absolute meltdown one frazzled mother had when we wereout of the chocolate milk her little princess wanted to drink. She threatened to get me fired as her two kids screamed at the top of their lungs. "Thanks for the wonderful life lesson," I muttered under my breath as I cleaned tiny globs of hot fudge off the floor. Moments like that made me never want to work in customer service again.However, those cringeworthy instances were overshadowed by the invaluable lessons I learned over those few short months. Never again would I take servers for granted or treat them as just faceless robots – the immense physical and mental stamina it took to be on your feet for hours at a time while juggling multiple things was astounding. I finished the summer with a newfound sense of empathy, confidence, and a healthy respect for honest hard work.As I looked back on those long days of tottering between packed booths, memorizing orders, and feeling sweat trickle down my back, I felt an immense sense of pride. For minimum wage, I had stepped out of my comfortable world and into the hustle and bustle of the restaurant industry. I interacted with people from all different backgrounds, handled difficult situations with grace, and walked away with hilarious stories and lasting memories.Those couple of hundred dollars I earned each week were undoubtedly the hardest dollars I've made in my young life. But that hard-earned cash, smudged with syrup stains and burger grease, allowed me to fund plenty of adventures with friends, fuel spontaneous road trips, and finally buy that videogame system I had been saving for. Most importantly though, my summer job allowed me to gain a true appreciation for a dollar and the importance of hard work, just like my parents said it would. I definitely didn't love every second of it, but looking back, I wouldn't have had it any other way.篇3Working at the Local Diner: A Summer Job ExperienceWhen summer rolled around after my junior year of high school, I knew I needed to find a job to earn some spending money. After scouring the job listings, I landed a gig working as a server at the neighborhood diner near my house. Little did I know, this seemingly simple summer job would end up being one of the most valuable experiences of my life so far.On my first day of training, I was a bundle of nerves. Having no prior experience in the restaurant industry, I was worried I would be in over my head. However, my trainer Sarah wasincredibly patient and walked me through all the basics - how to properly greet customers, present menus, take orders, carry heavy trays, operate the register, and more. She made it clear that being a server was equal parts customer service and multitasking.After the first few days of shadowing Sarah, it was time for me to take on my own section of tables. I was terrified of messing up orders or dropping plates of hot food. My hands were shaking as I approached my first table of customers - a family of four. But I took a deep breath, put on a smile, and did my best to make them feel welcomed. To my relief, I got through taking their order without any major blunders. 
From there, it was a constant hustle - rushing back and forth between tables and the kitchen, refilling drinks, delivering food, and collecting payments.There was definitely a steep learning curve those first couple of weeks. I made plenty of mistakes - mixing up orders, spilling water on customers, dropping the occasional plate on the floor. But my coworkers and managers were all very supportive and understanding. They reassured me that messing up was all part of the process of gaining experience. I was picking up new skillsrapidly just from being thrown into the deep end of the diner rush.As the summer went on, I grew more and more comfortable in my role. I became a pro at juggling multiple tasks, thinking on my feet, and developing a rapport with customers through friendly conversation. I got to know the regular customers and would have their usual orders memorized. I took pride in giving excellent service and making diners feel welcome.Of course, the job wasn't without its challenges. I had to deal with hangry customers, messy kids, leaving late at night after closing, and being on my feet for hours at a time. There were times when it was overwhelming and exhausting. But I powered through and gained incredible stamina, time management abilities, and problem-solving skills.One skill I hadn't anticipated developing was my math abilities. Having to quickly calculate bills, taxes, and change in my head really gave my brain a workout. By the end of the summer, I was a whiz at mental math. I also became very conscious of customer psychology - how to diffuse tense situations, read people's moods and needs, and provide the best possible service to ensure big tips.More than just picking up hard skills though, this job taught me so much about life. I gained a newfound appreciation for all the hardworking people in the service industry. Jobs like serving tables are often undervalued, but they require true skill and effort. I saw firsthand how servers have to remain positive and energetic even when dealing with rude or demanding customers. It was humbling and made me more empathetic.I also learned the importance of teamwork. During the dinner rush when we were completely slammed, everyone had to work together like a well-oiled machine - servers, cooks, bussers, hosts, and managers all supporting each other. We had to communicate clearly, have each other's backs, and motivate one another to power through. Being part of that kind of team atmosphere showed me what it's like to work toward a common goal alongside coworkers who become like a second family.Financially, the job allowed me to earn a nice chunk of money for things like gas, car insurance, and spending cash for hanging out with friends. But the experience itself was the true reward. Working at the diner instilled incredible work ethic and customer service values in me that will undoubtedly come in handy wherever my future career path leads.When I think back on that summer, I have fond memories of getting to know the colorful characters who were regulars at the diner counter. I reminisce about the hilariously awkward moments and mishaps that came with being a new server. Most of all, I'm grateful for the invaluable lessons about responsibility, perseverance, and human interaction that stuck with me.So while being a server at a local diner may have seemed like a simple throwaway summer job, it ended up shaping me in profound ways. I gained confidence, skills, and a deeper understanding of the real world beyond academics. 
Those three months of non-stop hustle gave me a taste of what it's like to truly work hard and be a part of the workforce. It was the first step in preparing me for life after school. For that, I'll always cherish my time at the neighborhood diner.。
Best in Class Outlet DensityOptimal Form FactorsSuperior ReliabilityNORTH AMERICA and INTERNATIONALTMTMTMWelcome to the Panduit Basic PDUsInput Apparent OutletInput Apparent Outlet North AmericaInternationalPanduit Basic PDUs provide high power density, hightemperature rating, exceptional reliability and cost-effective power distribution and are available in single or three phase (Delta or WYE) inputs and come with a variety of IEC C13, IEC C19, or NEMA compliant outlet configurations.Our PDU’s have the Best in Class outlet densities and optimized form factors currently available in the marketplace. They feature up to 48 outlets and mount vertically (0U) or horizontally (1U or 2U) in a data center/enterprise cabinet or rack. Using standard mounting buttons or included hardware, the PDUs have been validated to fit in 100% of Panduit’s cabinets.2Features3World Class Quality and ReliabilityUtilizing high temperature, premium components Extensive product testing Design and Manufacturing best practicesMultiple Form FactorsVertical 0U Horizontal 1U or 2UAvailable in various compact sizes to maximize cabinet spaceHigh Operating Temperature60° C at full loadCabinet CompatibilityDesigned to fit into industry standard cabinets and 100% of Panduit CabinetsHigh Outlet DensityMaximizes spacial constraints of densely packed andincreasingly power demanding IT equipment.Locking OutletsCable tie accepting outlets for retention of standard power cords.W-lock and V-lock compatible for secure connections on both ends of the power cords.Form FactorState-of-the-art form factor leveragesPremium Circuit BreakersDesigned to withstand hot aisle Color-Coded CircuitsEasily identify circuits to aid in fault isolation and load balancing.Basic 1-Phase PDU4Input CurrentApparent Power Outlet1-Phase5Height Width Depth (inches)at Outlet Panduit SKUBasic 3-Phase PDUInput Current Apparent Power Outlet6Height Width Depth (inches)at Outlet Panduit SKU3-Phase7Complete Warranty & Compliance information can be found at CustomerService:*******************.777.3300TechnicalSupport:****************************.405.6654。
Typical ApplicationsSugars, Glucose, Starches, Chocolate, Chemicals,Surfactants, Food oil, Milk and CreamR o t a r y T a n k e r P u m pHighfield Industrial Estate, Edison Road,Eastbourne, East Sussex, BN23 6PT, ENGLAND Phone:+44 1323 509211 G Fax:+44 1323 507306E-Mail:******************G Web:Kalrez ®and Viton ®are the registered trademarks of DuPont Performance Elastomers ™.Isolast ®is the registered trademark of Busak+Shamban. Johnson Pump (UK) reserve the right to alter specification without notice.RTP 20and RTP 30Rotary Tanker Pump specificationsBBECDT S AQM P RNLH BH TH SXXVW4HOLES 'U' DIACONNECTION SIZE246810121020304050607080Flow m³/hrP r e s s u r e B a rRTP30RTP20Available in a 1.0 or 1.28 litres/revdisplacement, which means, using the RTP 30,a 30,000-litre tanker can be unloaded in just 23 minutes ensuring that minimum time isspent with the vehicle idle.RTP 20and RTP 30Flow rate comparisonTechnical specificationsRTP 20RTP 30A 50 8080 100B1117131 135B2139 149163 178B3131145B4131 139153B5139 144158C142157D 275305E 2229HB 8897HS 88100HT 196217L 299311M4262N 174161P90124Q 1414R117152S 222243T195214U1113V 303329W330358X5460Weight RTP 20: 49kgRTP 30: 67kgRTP 20 and RTP 30measurements in millimetres (mm)An Original Quality Product fromThe all new RTP 20and RTP 30Rotary Tanker PumpsDesigned specifically for the road tanker industryAPPLICATIONS:Sugars Glucose Starches Chocolate Chemicals Surfactants Food oil Milk CreamSuperb operator and maintenace benefits:SEALED FOR SAFETYIn designing the RTP 20and the RTP 30we have concentrated on reliability and low maintenance.The gearbox is sealed and pre-filled with semi liquid grease, eliminating the need for routine inspection and filling. All seal options are accessed from the front of the pump allowing them to be inspected and changed without removing the pump from the tanker or pipework, the front loading ‘O’ring seal making it ideal for strip cleaning and ease of maintenance.SO VERSATILEEqually at home in hygienic applications, the pump is also suitable for industrial and chemical use, the Din 24960 seal housing (RTP 30)and different rotor settings giving the pump flexibility in application and assignment. Different seal types may be fitted without the need for modification, adding to the flexibility of the RTP .STAYS CLEANER – LONGER!The strategic placement of the product seals in the rear of the rotors, mean that they can be easily removed for strip cleaning and refitted by the operator in minutes. The flush front cover, rotor retention devices and the fully swept pump chamber combine to ensure that once cleaned, the pump will remain cleaner, longer than conventional lobe pumps.RTP 20constructionJohnson Pump (UK) Ltd., have refined proven rotary lobe pump principles to produce new rugged, robust and reliable pumps specifically to satisfy the needs of the road tanker industry.316L stainless steel contact partsSelf draining rotorcase constructionSealed rotor retainers hexagon nut styleSingle or double ‘O’ring seal on hard coated sleeveUniversl mounting bolt on feetHelical timing gearsSAE ‘A’type flange 2or 4 bolt fixingInternal drive splineCompact modular aluminium gearcaseDouble taper roller bearingsLow weight – High displacement –Compact sizeA tanker operator wants to carry payload,not pump! 
RTP pumps have excellentdisplacement / weight ratios, meaning more in the tank and less in the cabinet, RTP’s being about 30% shorter than many competitor pumps.Low Life Cycle CostsMaintenance is virtually zero on these pumps giving excellent low whole life cost benefits.Low NPSH requirementStandard full bore 75 mm (3.0”) inlet /outlet ports on the RTP 20&RTP 30pumps with 100 mm (4.0”) full bore inlet / outlet ports available on the RTP 30.This means that the flow of fluid to the pump rotors is not reduced as it is with many of ourcompetitor’s pumps. The bore is the same all the way to the rotors, not tapered to a 75mm (3.0”) hole like others. This allows high viscosity fluids to be pumped at twice the flowrate thus allowing faster discharges.Nominal pressure to 12 barThe rugged design allows the pumps to have high differential pressures, a must when handling viscous liquids at high flows,again allowing faster discharges.316L stainless steel wetted parts so whether it is hygiene you need or chemical compatibility,the RTP has the answer.Low maintenanceAll RTP pumps are grease filled for zero maintenance at the gearbox end. No more trying to fill with oil in a cramped cabinet on a cold winters day.Meets 3A and FDA standard.A wide range of seal combinationsmake the RTP suitable f0r all applications,industrial or hygienic, food or chemical with modular front-loading and unloading seals including: Silicon carbide faced mechanical seals, lip-seals or dynamic ‘O’ring seals.The RTP 30has standard short DIN24960seal envelope.Ideally suited for CIP /SIP or Strip cleaning The RTP pump can be cleaned in place,sterilised in place or indeed striped down for hand cleaning. A method to suit every application.Close coupled mounting for hydraulic drive 2or 4 bolt SAE ‘A’flange with keyway,6or 14 tooth spline or an external shaft for electric motor drive.Universal mountingWith its bolt on feet, the RTP can bemounted with the drive shaft and the ports in any orientation ensuring total flexibility of installation.Integral relief valve availableA safety relief valve may be fitted to protect the pump against over pressurising due to product solidifying in the discharge line or a valve being shut.Front cover and rotor case heating /cooling availableFor handling heat sensitive products such as chocolate, heating can be applied to the pump head to ensure that the liquid does not solidify in the pump.。
Foreign-literature translation: power flow calculation by a combination of the Newton-Raphson method and Newton's method in optimization (English source text)

Power Flow Calculation by Combination of Newton-Raphson Method and Newton's Method in Optimization
Andrey Pazderin, Sergey Yuferev
URAL STATE TECHNICAL UNIVERSITY - UPI
E-mail: pav@//0>., usv@//.

Abstract--In this paper, the application of Newton's method in optimization to power flow calculation is considered. Convergence conditions of the suggested method are investigated using the example of a three-machine system. It is shown that the method allows non-existent state points to be calculated and automatically pulls them onto the boundary of the power flow existence domain. A combined method composed of the Newton-Raphson method and Newton's method in optimization is also presented.

Index Terms--Newton method, Hessian matrix, convergence of numerical methods, steady-state stability

I. INTRODUCTION
The solution of the power flow problem is the basis on which other problems of managing the operation and development of electrical power systems (EPS) are solved. The complexity of the power flow problem is attributed to the nonlinearity of the steady-state equation system and its high dimensionality, which calls for iterative methods. The basic problem of the power flow calculation is that of solution feasibility and iterative process convergence [1].

The desire to find a solution that lies on the boundary of the existence domain when the given nodal capacities are outside the existence domain of the solution, so that the state point must be pulled back onto the feasibility boundary, motivates the development of power flow methods and algorithms that provide reliable convergence to a solution.

The algorithm for power flow calculation based on Newton's method in optimization makes it possible to find a solution when the initial data are outside the existence domain and to pull the operating point onto the feasibility boundary by an optimal path. It also makes it possible to estimate the static stability margin.

Because the algorithm based on Newton's method in optimization has considerable computational cost and power control cannot be realized at all nodes, an algorithm based on the combination of the Newton-Raphson method and Newton's method in optimization is proposed to improve calculating speed and enhance the power flow calculation.

II. THEORETICAL BACKGROUND
A. Steady-state equations
The system of steady-state equations can, in general, be expressed as

    W(X, Y) = 0,    (1)

where Y is the vector of parameters given for the power flow calculation. In the power flow calculation, real and reactive powers are set at each bus except the slack bus; at generation buses, the voltage magnitude can be fixed. W(X, Y) is the nonlinear vector function of the steady-state equations. The variables Y define the quasi-constant parameters associated with the equivalent circuit of the electrical network. X is the required state vector; it defines the steady state of the EPS. The dimension of the state vector coincides with the number of nonlinear equations of system (1). There are various known forms of notation for the steady-state equations. Normally, they are nodal-voltage equations in the form of power balance or current balance. Complex quantities in these equations can be presented in polar or rectangular coordinates, which leads to a fairly large variety of forms of the steady-state equations. There are various methods for solving the nonlinear system of steady-state equations.
These methods all search for the incremental vector of independent variables ΔX and assess the convergence condition at each iteration.

B. Newton's method in optimization
Another way of solving the power flow problem is to seek the zero minimum of an objective function formed as the sum of squares of the discrepancies of the steady-state equations:

    F(X) = Σ w_i^2(X) → min.    (2)

The minimum of function (2) is reached at the point where the derivatives with respect to all required variables are equal to zero:

    ∂F/∂X = 2 J^T(X) W(X) = 0,    (3)

where J(X) = ∂W/∂X is the Jacobian matrix of the steady-state equations. It is necessary to solve the nonlinear set of equations (3) to find the solution of the problem. Calculating the power flow by solving, at each iteration, a system of linear equations with the Hessian matrix is referred to as Newton's method in optimization [4]:

    H(X) ΔX = -∂F/∂X.    (4)

The Hessian matrix contains two terms:

    H(X) = 2 [ J^T(X) J(X) + Σ w_i(X) ∂^2 w_i/∂X^2 ].    (5)

During the power flow calculation, the determinant of the Hessian matrix remains positive where the Jacobian determinant is around zero or negative. This makes it possible to find a state point during the power flow calculation even when the initial point is outside the existence domain. The convergence domain of Newton's method in optimization is limited by a positive value of the Hessian matrix determinant. Even for a solvable operating point, the iterative process can converge to an incorrect solution if the initial approximation lies outside the convergence domain. The method also allows the static stability margin of a state to be estimated and the most perilous path of its weighting to be found.

III. INVESTIGATIONS ON THE TEST SCHEME
The convergence of Newton's method in optimization with a full Hessian matrix has been investigated. Calculations were made in MathCAD for a network comprising three buses, the parameters of which are presented in Figure 1. The dependent variables were the angles of the bus voltage vectors at buses 1 and 2, the independent variables were the capacities at nodes 1 and 2, and the absolute values of the voltages of nodes 1, 2 and 3 were fixed.

Fig. 1 - The test scheme

In Figure 2, the boundary of the existence domain for a steady-state solution is presented in the angular coordinates δ1-δ2. This boundary encloses the region of positive Jacobian determinant and is defined by

    det J(X) = 0.    (6)

As a result of the power flow calculation based on Newton's method in optimization, the angle values corresponding to the given capacities were obtained (in Fig. 2, generation is positive and loading is negative). For state points inside the existence domain, objective function (2) was reduced to zero. For state points on the boundary of the existence domain, objective function (2) was not reduced to zero and the calculated capacities differed from the given capacities.

Fig. 2 - Domain of existence for a solution
Fig. 3 - Boundary of the existence domain

In Fig. 3, the boundary of the existence domain is presented in the coordinates of the capacities P1-P2. The state points occurring on the boundary of the existence domain (6) were set by capacities that were outside the existence domain. As a result of the power flow calculation by minimization of (2) based on Newton's method in optimization, the iterative process converges to the nearest boundary point.
This is due to the fact that, for the three-machine system, the level surfaces of objective function (2) in the coordinates of nodal capacities are proper circles centred on the point defined by the given values of nodal capacities. The graphic interpretation of the level surfaces of the objective function for an operating point with 13000 MW of load at bus 1 and 15000 MW of generation at bus 2 is presented in Fig. 3.

The Hessian matrix is remarkable in that it is not singular on the boundary of the existence domain. The determinant of the Hessian matrix (5) is positive where the Jacobian matrix determinant is around zero or negative. This fact allows the power flow to be calculated even for unstable points that are outside the existence domain. The iterative process based on solving the system of linear equations (4) converged to the critical stability point within 3-5 iterations. Naturally, the iterative process based on the Newton-Raphson method diverges for such unsolvable operating points.

The convergence domain of the method under consideration has also been investigated: not every unsolvable operating point is pulled onto the boundary of the existence domain. Once a certain threshold is exceeded, the iterative process begins to converge to an imaginary solution with angles exceeding 360°. It should be noted that, to obtain a critical stability operating point when the initial nodal capacities are set outside the boundary of the existence domain, no additional terms are necessary, since the iterative process converges naturally to the nearest boundary point.

Pulling the operating point onto the feasibility boundary is not always possible by the shortest, optimal path. There are a number of constraints, such as the impossibility of increasing load consumption at buses and constraints on generation shedding/gaining at stations. The load-following capability of generator units varies; consequently, for faster pulling of the operating point onto the feasibility boundary, it may be necessary to carry out this pulling by a longer but faster path.

The algorithm provides the possibility of correcting the pulling path. This is done by means of weighting coefficients, which define the degree of participation of each node in the total control action. For this purpose a diagonal matrix A of weighting coefficients for each node is included in objective function (2):

    F_A(X) = W^T(X) A W(X) → min.

All diagonal elements of the weighting coefficient matrix A should be greater than zero:

    a_ii > 0.

When the initial approximation lies inside the feasibility domain, the coefficients influence neither the computational process nor the result.

Figure 4 shows different paths of pulling the same operating point onto the feasibility boundary depending on the weighting coefficients; paths are presented for two different operating points. Tables I and II show the effect of the weighting coefficients on the computed result. In both tables, k1 and k2 are the weighting coefficients for buses 1 and 2, respectively.

TABLE I
WEIGHTING COEFFICIENT EFFECT ON THE COMPUTED RESULT FOR INITIAL SET CAPACITIES P1 = -13000 MW AND P2 = 15000 MW
Coefficients (k1, k2) | P1, MW | P2, MW | δ1, deg | δ2, deg
1, 1                  | -7800  |  9410  | -45     |  55
5, 1                  | -8600  |  8080  | -69     |  25
0.005, 1              | -5700  | 10140  |  -1     |  93

TABLE II
WEIGHTING COEFFICIENT EFFECT ON THE COMPUTED RESULT FOR INITIAL SET CAPACITIES P1 = -8000 MW AND P2 = -5000 MW
Coefficients (k1, k2) | P1, MW | P2, MW | δ1, deg | δ2, deg
1, 1                  | -4360  | -1680  | -92     | -80
0.01, 1               | -1050  | -4920  | -76     | -94
1, 0.35               |  5800  |     0  | -99     | -71

Fig. 4 - Paths of pulling the operation point onto the feasibility boundary
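The pulling-onto-the-boundary behaviour described in this section can be reproduced with a one-line toy system (a sketch constructed for this translation, not the authors' three-machine MathCAD model). For a single line with transfer characteristic P(δ) = Pmax·sin δ, minimising F(δ) = ½(P(δ) − Pset)² by Newton's method yields the ordinary power flow solution when Pset ≤ Pmax and settles at δ = 90°, the boundary of the existence domain, when Pset > Pmax.

```python
import math

def newton_opt_power_flow(p_set, p_max=1.0, delta=0.1, iters=50, tol=1e-9):
    """Minimize F(delta) = 0.5*(p_max*sin(delta) - p_set)**2 by Newton's method.

    For p_set <= p_max the mismatch goes to zero (ordinary power flow);
    for p_set > p_max the iterate is pulled to delta = pi/2, the boundary
    of the existence domain, where the mismatch is minimal but nonzero.
    """
    for _ in range(iters):
        mismatch = p_max * math.sin(delta) - p_set
        grad = mismatch * p_max * math.cos(delta)                        # dF/d(delta)
        hess = (p_max * math.cos(delta)) ** 2 - mismatch * p_max * math.sin(delta)
        step = grad / hess
        delta -= step
        if abs(step) < tol:
            break
    return delta, p_max * math.sin(delta) - p_set

for p in (0.8, 1.2):   # a solvable and an unsolvable set point
    d, m = newton_opt_power_flow(p)
    print(f"P_set={p}: delta={math.degrees(d):.2f} deg, residual mismatch={m:.3f}")
```

With Pset = 1.2 and Pmax = 1.0 the iteration stops at δ ≈ 90° with a residual mismatch of −0.2, mirroring the nonzero discrepancies reported above for state points on the boundary.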
IV. COMBINATION OF METHODS

If Newton's method in optimization for power flow calculation is compared with the Newton-Raphson method using a Jacobian matrix, the computational cost of each iteration is several times greater, because the Hessian matrix is filled with 2.5-3 times more nonzero elements than the Jacobian matrix. Each row of the Jacobian matrix corresponding to a bus contains nonzero elements for all buses incident to that bus. Each row of the Hessian matrix contains nonzero elements corresponding not only to the neighbouring buses but also to their neighbours. However, it is possible to compensate for this disadvantage by combining the Newton-Raphson method with Newton's method in optimization: part of the buses is calculated by the conventional Newton method, and the remaining buses are computed by Newton's method in optimization. The first group, the passive buses, consists of buses in which it is not possible, or not expedient, to change the nodal power; emergency control actions are possible only in a small group of buses equipped with telecontrol. Most of the buses, including purely transit buses, are passive. Active buses are generating buses in which operating actions are available. Such an approach allows the nodal power to be fixed for all passive buses of the scheme, which are calculated by the Newton-Raphson method. In the active buses, which are calculated by Newton's method in optimization, deviations from the set values of nodal power are possible. These deviations can be considered as control actions. The power flow calculation algorithm based on the combination of the Newton-Raphson method and Newton's method in optimization can be presented as follows:
1. The linear equation system with the Jacobian matrix is generated for all buses of the scheme.
2. The solution of the linear equation system with the Jacobian is started using the Gauss method for all passive buses. Factorization of the linear equation system is terminated when all passive buses have been eliminated. The factorized equations are kept.
3. The nodal admittance matrix is generated from the non-factorized part of the Jacobian matrix corresponding to the active buses. This admittance matrix contains the parameters of an equivalent network that contains only the active buses.
4. The linear equation system with the Hessian matrix (4) is generated for the obtained equivalent by Newton's method in optimization.
5. The linear equation system with the Hessian matrix is solved and the changes of the independent variables are determined for the active buses.
6. The factorized equations of the passive buses are solved, and the changes of the independent variables are determined for the passive buses.
7. The vector of independent variables is updated using the changes of the independent variables for all buses.
8. New nodal powers at all buses of the network are determined; the constraints are checked; if necessary, the list of active buses is corrected.
9. The convergence of the iterative process is checked. If the changes of the variables are still significant, the process returns to step 1.
Since the number of active buses in the network is not large, the computational cost of this algorithm only slightly exceeds that of the Newton-Raphson method.
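The following is a schematic linear-algebra sketch of one pass through steps 1-7 above; it is not the authors' implementation. It assumes the full Jacobian system J ΔX = -w has already been formed, splits the unknowns into hypothetical passive and active index sets, eliminates the passive block by a Schur complement (the "equivalent network" of step 3), performs a Gauss-Newton-type optimization step on the reduced active system (the second-derivative term of the Hessian is omitted for brevity), and back-substitutes for the passive buses. The 5-bus system at the bottom is random and purely illustrative.

import numpy as np

def combined_step(J, w, passive, active):
    """One combined Newton-Raphson / Newton-in-optimization step.

    J, w    : Jacobian matrix and mismatch vector of the full scheme
    passive : indices of passive buses (mismatch must be driven to zero)
    active  : indices of active buses (deviations allowed = control actions)
    Returns the correction dX for all buses.
    """
    Jpp = J[np.ix_(passive, passive)]; Jpa = J[np.ix_(passive, active)]
    Jap = J[np.ix_(active, passive)];  Jaa = J[np.ix_(active, active)]
    wp, wa = w[passive], w[active]

    # Steps 2-3: eliminate the passive buses; the Schur complement S plays the
    # role of the equivalent (reduced) network containing only active buses.
    Jpp_inv_Jpa = np.linalg.solve(Jpp, Jpa)
    Jpp_inv_wp  = np.linalg.solve(Jpp, wp)
    S  = Jaa - Jap @ Jpp_inv_Jpa
    ws = wa  - Jap @ Jpp_inv_wp

    # Steps 4-5: Newton-type optimization step on the reduced system,
    # written here in Gauss-Newton (normal-equation) form.
    dXa = np.linalg.solve(S.T @ S, -S.T @ ws)

    # Step 6: back-substitution for the passive buses.
    dXp = -Jpp_inv_wp - Jpp_inv_Jpa @ dXa

    dX = np.empty(len(w))
    dX[passive], dX[active] = dXp, dXa
    return dX

# Illustrative 5-bus system with buses 0-2 passive and 3-4 active.
rng = np.random.default_rng(0)
J = rng.normal(size=(5, 5)) + 5.0 * np.eye(5)   # diagonally dominant toy Jacobian
w = rng.normal(size=5)                          # toy mismatch vector
dX = combined_step(J, w, passive=[0, 1, 2], active=[3, 4])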
V. CONCLUSION

1. The power flow calculation of an electric network by minimizing the sum of squares of the nodal power discrepancies, based on Newton's method in optimization, materially increases the productivity of obtaining a solution for states that are heavy in terms of stability conditions and for unstable states outside the existence domain of the solution.
2. During the power flow calculation, the determinant of the Hessian matrix remains positive for values of the Jacobian determinant around zero and even negative. The iterative process naturally converges to the nearest marginal state point when the initial operating point lies outside the existence domain.
3. The control action for pulling the operating point onto the feasibility boundary can be corrected by using the matrix of weighting coefficients.
4. The combined method for power flow calculation allows all the advantages of Newton's method in optimization to be exploited while providing high calculation speed.
5. When the set nodal powers are outside the existence domain, there are discrepancies in the active buses, which can be considered as control actions for pulling the state point onto the feasibility boundary. When the initial state point is inside the existence domain, the iterative process converges with zero discrepancies for both active and passive buses.

Chinese translation: Power Flow Calculation Based on the Newton-Raphson Method and Newton's Method in Optimization. Abstract: In this paper, the application of Newton's method in optimization to power flow calculation is considered.
Solving the24Puzzle with Instance Dependent PatternDatabasesAriel Felner1and Amir Adler21Dpet.of Information Systems Engineering,Ben-Gurion University of the Negev,Beer-Sheva,84104,IsraelE MAIL:felner@bgu.ac.il2Dept.of Computer Science,Technion,Haifa,32000,IsraelE MAIL:adlera@cs.technion.ac.ilAbstract.A pattern database(PDB)is a heuristic function in a form of a lookuptable which stores the cost of optimal solutions for instances of subproblems.These subproblems are generated by abstracting the entire search space into asmaller space called the pattern space.Traditionally,the entire pattern space isgenerated and each distinct pattern has an entry in the pattern database.Recently,[10]described a method for reducing pattern database memory requirements bystoring only pattern database values for a specific instant of start and goal statethus enabling larger PDBs to be used and achieving speedup in the search.Weenhance their method by dynamically growing the pattern database until memoryis full,thereby allowing using any size of memory.We also show that memorycould be saved by storing hierarchy of PDBs.Experimental results on the large24sliding tile puzzle show improvements of up to a factor of40over previousbenchmark results[8].1IntroductionHeuristic search algorithms such as A*and IDA*find optimal solutions to state-space search problems.They visit states in a best-first manner according to the cost function f(n)=g(n)+h(n),where g(n)is the actual distance from the initial state to state n and h(n)is a heuristic function estimating the cost from n to a goal state.If h(s)is “admissible”(i.e.,is always a lower bound)then these algorithms are guaranteed tofind optimal paths.The domain of a search space is the set of constants used in representing states.A subproblem is an abstraction of the original problem defined by only considering some of these constants and mapping the rest to a“don’t care”symbol.A pattern is a state of the subproblem.The abstracted pattern space for a given subproblem is a state space containing all the different patterns connected to one another using the same operators that connect states in the original problem.A pattern database(PDB)stores the distance of each pattern to the goal pattern.These distances are used as admissible heuristics for states of the original problem by mapping(abstracting)each state to the relevant pattern in the pattern database.Typically,a pattern database is built in a preprocessing phase by searching back-wards,breadth-first,from the goal pattern until the whole abstracted pattern space is2spanned.Given a state S in the original space,an admissible heuristic value for S,h (S ),is computed using a pattern database in two steps.First,S is mapped to a pattern S by ignoring details in the state description that are not preserved in the subproblem.Then,this pattern is looked up in the PDB and the corresponding distance is returned as the value for h (S ).The value stored in the PDB for S is a lower bound (and thus serves as an admissible heuristic)on the distance of S to the goal state in the original space since the pattern space is an abstraction of the original space.PDBs have proven very useful in optimally solving combinatorial puzzles and other problems [1,7,8,3,6,2].5123456789101112131415Fig.1.The 15and 24Puzzles in their Goal StatesThe 15and 24tile puzzles are common search domains.They consist of 15(24)numbered tiles in a 4×4(5×5)square frame,with one empty position -the blank .A legal move swaps the blank with an adjacent tile.The number 
of states in these domains is around 1013and 1024respectively.Figure 1shows these puzzles in their goal configurations.666667878666Fig.2.Partitionings and reflections of the tile puzzlesThe best existing optimal solver for the tile puzzles uses disjoint PDBs [8].The tiles are partitioned into disjoint sets (subproblems)and a PDB is built for each set.Each PDB stores the cost of moving only the tiles in the given set from any given arrangement to their goal positions and thus values from different disjoint PDBs can be added and are still admissible.An x −y −z partitioning is a partition of the tiles into disjoint sets with cardinalities of x ,y and z .[8]used a 7-8partitioning for the 15puzzle and a 6-6-6-6partitioning for the 24puzzle.These partitionings were reflected about the main diagonal (as shown in figure 2)and the maximum between the regular and the reflected PDB was taken as the heuristic.3 The speed of search is inversely related to the size of the PDB used,i.e.,the number of patterns it contains[5].Larger PDBs take longer to compute but the main problem is the memory requirements.With a given size of memory only PDBs of up to afixed size can be stored.Ordinary PDBs are built such that they are randomly accessed.Thus, storing larger PDB on the disk is impractical and would significantly increase the access time for a random PDB entry unless a sophisticated disk storage mechanism is used.For example a mechanism built on the idea of[11]can be used for storing PDBs on disk. Note,however,that even if the PDBs are stored on disk,disk space is also limited.A possible solution for this was suggested by[4].They showed that instead of hav-ing a unique PDB entry for each pattern,several adjacent patterns can be mapped to only one entry.In order to preserve admissibility,the compressed entry stores the mini-mum value among all these entries.They showed that since values in PDBs are locally correlated most of the data is preserved.Thus,we can build large PDBs and compress them into smaller sizes.A significant speedup was achieved using this method for the 15puzzle and the4-peg Towers of Hanoi problems.There are,however,a number limitations to this technique.First,the entire pattern space needs to be generated.Second,only a limited degree of compressing turned out to be effective.For the tile puzzles,it was only beneficial to compress pairs of patterns achieving a memory fold of a factor of two.The largest PDBs that could be built using this technique for the24puzzle with one gigabyte of main memory was a5−5−7−7 partitioning where the7-tile PDBs were compressed by a factor of two.This did not gain a speedup over the6−6−6−6partitioning offigure2which is probably the best 4-way partitioning of the24puzzle.The motivation for this paper is to use larger PDBs for the24puzzle.We want at least an8−8−8partitioning of this domain.A pattern space for8tiles has25×24...×18=4.36×1010different patterns.Storing three different complete8-tile PDBs would need130gigabytes of memory!!!Using the compressing idea of[4]would not help much and alternative idea should be used.Another way of achieving reduction in memory requirements is to build a PDB for a specific instance of a start and goal states.Some recent works used this idea for solving the multiple sequence alignment problem,e.g.,[9]where the PDB was stored as an Octree.A general formal way for doing this was developed by[10].They showed that for solving a specific problem instance only a small part of the pattern space needs to be generated.In this 
paper,we call this idea Instant dependent pattern databases(IDPDB). We suggest a number of general enhancements and simplifications to this method and apply them to the24puzzle.Experimental results show a reduction of up to a factor of 40in the number of generated nodes for random instances of this puzzle.2Instant dependent pattern databasesWefirst want to distinguish between the original search space where the actual search is performed from the pattern space which is a projection of the original search space according to the specification of the patterns(seefigure3).Solving a problem involves two phases.Thefirst phase builds the PDB by performing a breadth-first search back-4Fig.3.The projection/abstraction into the pattern space.wards from the goal pattern until the entire pattern space is spanned.The second phase performs the actual search in the original search space.Traditionally,a PDB has a unique entry for each possible pattern.[10]observed that for a given instance of start and goal states only nodes generated by A*(or IDA*) require a projected pattern entry in the PDB since only these nodes are queried for a heuristic value during the search.In this paper we call these nodes the A*nodes and their projections the A*patterns(seefigure3).Ideally,we would like to identify and only generate the exact set of the A*patterns but this is impossible.They defined a focused memory-based heuristic as a memory based-heuristic(PDB)that is computed only for patterns that are projections of states in the original search space that could be explored by A*in solving the original search problem.For building the PDB they also search backwards from the goal pattern but are focused on the specific start and goal states.Instead of the usual breadth-first search which searches in all possible directions they activate A*from the goal pattern to the start pattern.In this paper,we call it the secondary A*in order to distinguish this search from the primary search in the original problem which could be performed by any admissible search algorithm(e.g.,IDA*).For each pattern expanded by the secondary A*,its g-cost represents the cost of a shortest path from this pattern to the goal pattern in the pattern space and can serve as an admissible h-cost for the original search problem.After the start pattern is reached by the secondary A*search only a small number of patterns were generated and there is no guarantee that the entire set of the A*patterns was reached.They noted that we can continue to expand nodes after the secondary A*finds an optimal path from the goal pattern to the start pattern to determine optimal g-costs(and thus admissible heuristics)for additional states.We call this the extended secondary A*phase.We would like to halt the extended secondary A*phase when all the A*patterns are reached but this is a difficult task.They provide a method for identifying a special set of patterns which is a superset of the A*patterns set.We call this set the ZH set(after Zhou and Hansen).They halt the extended secondary A*phase after the complete ZH set is generated and are guaranteed that the entire set of the A* patterns is generated.The definition of the ZH set is as follows.Let U be an upper bound on the cost of the optimal solution to the original problem.The ZH set includes all patterns p i which have f(p i)<U in the secondary A*search.It is obvious that all the A*patterns have f-value in the secondary search smaller than U and thus are all included in the ZH5 set.They also provide a formula for computing 
and generating the ZH set for a set of disjoint additive PDBs.Let U again be an upper bound on the solution.Let L=j (h j(S))be the additive heuristic of the initial state S.Let∆=U−L.They provethat a disjoint PDB,P DB j only needs to be calculated for projected patterns p i having f(p i)<h j(S)+∆.See[10]for more details and proofs.Fig.4.The different layers of the pattern space.Figure4shows the relations between different sets of patterns.The innermost set includes the patterns generated during thefirst stage of secondary A*(until the optimal path from the goal pattern to the start pattern is found).The next set includes the A*-patterns,i.e.,the patterns queried during the search in the original problem.The next set includes the ZH set which[10]stored in their PDB.The outmost set is the complete pattern space.They recognized that continuing the extended-A*until the complete ZH set is gen-erated is not always possible due to time/memory limitations.Thus,they introduced theirγfactor where0<γ≤1.They stopped the extended-A*when its f-cost exceeds γ×U.Withγ<1generating all the A*patterns is not guaranteed.Therefore,when a state on the original space is reached whose projected pattern is not in the PDB due to theγcutoff,they suggest using a simple quickly computed admissible heuristic in-stead.They implemented their idea with different values forγon the multiple-sequence alignment and obtained impressive results.Note that ordinary PDBs are usually stored in a multi-dimensional array with an entry for each possible pattern.For IDPDB,we need a more sophisticated data structuree.g.,a hash table,as only a subset of the patterns is stored.2.1Weaknesses of the ZH method.There are a number of weaknesses in the ZH set approach.61)Their method needs afixed amount of memory.Onceγis chosen all the nodes with f≤γ×U are stored.This is problematic as it is difficult to determine the exact value forγso that the available memory would be fully used and not be exhausted. 
Identifying the ZH set and then simplifying it byγseems not natural and ad hoc.2)An upper bound,U,for an optimal solution to the original search problem cannot always be found.Furthermore,we need a strict upper bound so as to reduce the ZH set as much as possible.3)[10]tried out their method only on multiple sequence alignment.The search space of this domain(and also the projected pattern space)has the property that the number of nodes in a given depth d is polynomial in d.This is because the problem is formalized as an n-dimensional lattice with l n nodes where n is the number of sequence to be aligned and l is the length of the sequences.Even with relatively mediocre U bounds,their ZH set might be significantly smaller than the entire pattern space.This is not true in domains such as the tile puzzles where the number of nodes at depth d is exponential in d.Given any U the ZH set might include the entire domain.To support these claims experimentally we applied the formula they provide for cre-ating the ZH set for disjoint PDBs on the15puzzle.This formula uses an upper bound on the optimal solution.We used the best upper bound possible-the exact optimal so-lution.We calculated the ZH set with this strict upper bound for a5-5-5and a6-6-3 partitionings of the15puzzle on the same1000random instances from[8].The entire 5-5-5PDB includes1,572,480entries,half of them were queried during the search.The average ZH set over the1000instances has1,227,134pattern-78%of the entire pat-tern space.For many difficult instances of this domain,the ZH set actually included the entire pattern space.On those instances the ZH method is useless.Note again that this is when we used a strict upper bound of the actual optimal solution length.For more realistic larger upper bounds the ZH will be even larger.Similar results were obtained for a6-6-3partitioning.3Dynamically growing the PDBsWe suggest the following enhancement to Zhou and Hansen’s idea.Our enhancement is at least as strong as their method but is simpler to implement and easier to understand. In addition,our idea canfit any size of available memory.The main point of our idea is to dynamically grow the PDB until main memory is exhausted.Our idea is much moreflexible than the method of[10]as it can work with any size of memory and we do not need to decide when to halt the secondary A* extension in advance.Furthermore,we do not need to calculate any upper bounds nor have to build the ZH set.In the preprocessing phase,we continue generating patterns in the extended secondary A*until memory is exhausted.We then start the primary search phase and for each pattern not in the PDB,we use a simple quickly computed admissible heuristic instead.The following enhancement can better utilize main memory after it was exhausted. 
There are two data structures in memory.Thefirst is the PDB which is identical to the closed list of the extended secondary A*.The second is the open-list of the extended secondary A*.However,at this point we can remove the open-list from main memory7 thus freeing a large amount of memory for other purposes such as other PDBs.In fact, if x is the f-value of the best node in the open list and is also the value of the last node expanded,then all nodes in the open-list with values of x can be added to the PDB before freeing the memory.This is actually expanding these nodes without actually generating their children.Another way of saving memory is to use IDA*for the secondary search.Here,each new pattern generated is matched against the PDB and if it is missing a new PDB entry is created.However,in the pattern space of eight tiles presented below there are many small cycles since all the other tiles can be treated as blanks.This causes IDA*to be ineffective in this specific pattern space because it cannot prune duplicate nodes due to its depth-first behavior.3.1On Demand pattern databasesA version of the above idea is called on-demand pattern database.Here,we add pat-terns to the PDB only when they are required during the search.This prevents us from generating large PDBs with patterns that will not be queried.First,we run the secondary A*from the goal pattern to the start pattern until the start pattern is chosen for expansion.Each pattern expanded by this search is inserted into the PDB.At this point,the preprocessing phase ends and the primary search can begin because the start pattern is already in the PDB.We continue the primary search as long as projected patterns of new nodes are in the PDB(i.e.,were expanded by the secondary A*search).When we reach a pattern p not in the PDB and still have free memory,we continue to extend the secondary A*phase until this pattern p is reached and we can return to the primary A*phase.When memory is exhausted the PDB has reached itsfinal size and the secondary A* is terminated.From this point,each time a heuristic is needed and the relevant pattern is not in the PDB,we consult the quickly computed heuristic.4Implementation on the24puzzleWhile the above idea is a general one we made some domain dependent enhancements and took special steps to bestfit the IDPDB idea to the24puzzle.Generating a PDB consumes time.However,the time overhead of preprocessing the PDBs is traditionally omitted as it is claimed that it can be amortized over the solving of many problem instances.For example,it takes a couple of hours to generate the7-8 disjoint PDB which was used to solve the15puzzle[8].Yet,the authors ignored this time and only reported the time of the actual search which is a fraction of a second.We cannot simply omit the time overhead of generating IDPDBs as a new PDB has to be built for each new instance.Therefore,it is irrelevant to apply this idea to small domains such as the15puzzle where the running time of the actual search is much smaller than the time overhead of generating the PDB.We cannot see how this method will improve previous running times for such domains.The24puzzle is a different story since it is1011times larger.Generating the6-6-6-6PDB also takes a number of hours.However,a number of weeks were required to solve many of the instances of[8].8Here,the time overhead of generating the PDB can also be omitted when compared to the overall time needed to solve the entire problem.The above general method is for generating one PDB.When we use disjoint 
data-bases,such as the8-8-8partitionings for the24puzzle(see below),values from the different PDBs are added and therefore three values for each state of the original search space are required.Thus,the on-demand version of IDPDB activates three secondary A*searches in parallel,one for each PDB3.Note,that since each move in the tile puzzle domain moves only one tile then at each step we only need to consult the one PDB that includes the tile that has just been moved.Values from the other PDBs can be inherited from the parent and remain identical.4.1On Demand versus preprocessingThe weakness of the on-demand approach for the24puzzle is that three open-lists are maintained at all times but this is wasteful since the open-list can be deleted after memory is exhausted.In the special case of the tile puzzles an open list might have10 times more nodes than the closed list.A better way to utilize memory for this domain is to perform the complete secondary A*in the preprocessing phase until memory is exhausted.At this point the closed list which includes all the patterns with valid heuristics is stored in afile on the disk and the entire memory is released.This mechanism is repeated for each PDB until a relevant file with heuristic values is stored on the disk.Memory is better utilized as only one open-list is maintained in memory at any point of time.Furthermore,during the course of the primary search there are no open lists of the secondary searches in main memory. Now,we can load values from the diskfiles into memory and have a PDB for each of them.In both the on-demand and the preprocessing variations when an entry was missing from the PDB we took the Manhattan distance(MD)as an alternative simple heuris-tic for the tiles of the missing entry.In addition,for most variations reported below we also stored the benchmark6-6-6-6PDBs(which needed244megabytes).We then compared the heuristic obtained from the8-8-8PDB to the6-6-6-6heuristic and took the maximum between them.4.2Improvement1:Internally partition the PDBNote that each of the8-tile sets offigure5can be internally partitioned into a6-2 partitioning where the6-tile partition is one of the6-6-6-6partitions offigure2.For example,the6-2partitioning is shaded in gray for partition a offigure5.Instead of taking the MD for the eight tiles of a missing entry,we can use the6-2partition of these tiles.For those eight tiles we added the value of the corresponding6-tile pattern from the6-6-6-6PDB to a value of the2tiles from a new2-tile PDB which was also generated.3Furthermore,this is true for using other combinations of multiple PDBs such as taking the maximum over different PDB values9Fig.5.Different Partitioning to 8-8-84.3Improvement 2:Hierarchical PDBsNote that once there is a simple heuristic in hand,the new PDB heuristic only requires storing entries for patterns which their PDB values are larger than the simple heuristic.This suggests an hierarchy of PDBs.First,you store a small weak PDB.Then,for the stronger PDB you store only those entries having values larger than the weaker PDB.We used this idea as follows.For any 8-tile PDB we only need values which are larger than the 2-6partitioning described above.Values of the 8-tile PDB which are not larger can be omitted and retrieved from the 6-2PDBs.This was very effective as only 18%of the values of an 8-tile PDB were larger than the corresponding 6-2PDB and had to be stored.The overhead for this was the need to generate and store the relevant 6-tile and 2-tile PDBs but we stored the 6-6-6-6PDBs 
anyway as described above, and the overhead of generating and storing a number of 2-tile PDBs is very low. Here we only stored two levels using this hierarchical approach. Future work can take this further by building a hierarchy of PDB heuristics where each PDB is built on top of the previous one in the hierarchy. Note that the basics of this approach were used in [8] for the 15 puzzle, where the weaker heuristic was MD and only additions above MD were stored in the PDB. Thus, values for patterns equal to MD were stored as 0. Here we further improve on this approach and omit values of 0.

4.4 Improvement 3: Multiple partitioning
The bottleneck for the IDPDB method is the memory requirement of the secondary A* search. This phase terminates when memory is exhausted. The memory requirement for the primary search phase is much smaller, especially after applying all the improvements above. It is well known that PDBs are better utilized by having multiple PDBs and taking the maximum value among them as the heuristic [6,7]. Since so much memory was released, we were able to use the extra memory for storing six different 8-8-8 partitionings illustrated in figure 5 and taking their maximum as the heuristic.

Method      Nodes           Entries       Mem     Hits
1 6-6-6-6   4,756,891,097   255,024,000   244     100
2 6-6-6-6   1,107,142,063   255,024,000   244     100
1 Gigabyte
OD          1,434,852,411   7,178,143     125     27.7
OD+6        938,256,516     7,178,143     369     28.9
PP          856,917,588     25,071,429    656     66.4
Imp1        714,722,200     25,071,429    656     68.5
Imp2        714,722,200     4,538,015     327     16.6
6 8-8-8     198,851,450     15,638,294    505     -
8 9-9-6     175,100,719     31,199,159    754     -
2 Gigabytes
PP          713,536,979     66,690,152    1,322   80.6
Imp1        613,844,599     66,690,152    1,322   83.3
Imp2        613,844,599     13,565,100    472     21.2
6 8-8-8     130,890,131     48,907,720    1,037   -
8 9-9-6     100,964,443     82,143,861    1,569   -
Table 1. IDPDB with 8-8-8 and 9-9-6 partitionings

5 Experimental results
We implemented all the above variations and improvements on the same random instances of the 24 puzzle used by [8]. We experimented with the 8-8-8 partitionings of figure 5 and also with a set of 9-9-6 partitionings. We used a 1.7 GHz Pentium 4 PC with one gigabyte of main memory and also with two gigabytes. The primary search was performed with IDA*.
We sorted the 50 instances from [8] in increasing order of solution length. Table 1 presents the average results on the ten "easiest" instances. The first column indicates the variation used and the second column counts the number of generated nodes. The third column, Entries, is the total number of 8-8-8 (or 9-9-6) PDB entries that were finally stored. The Mem column gives the total amount of memory in megabytes used for all the PDBs consulted by this variation (including the 6-6-6-6 PDB when applicable). Finally, the last column indicates the percentage of times where the 8-8-8 (9-9-6) PDB had a hit.
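To make the hit/miss bookkeeping behind Table 1 concrete, the following is a small illustrative sketch, not the authors' code; the dictionaries, function names and the way patterns are keyed are all assumptions. It mirrors the kind of lookup used by the Improvement 1/2 style variants: for each disjoint 8-tile set, the partial instance-dependent PDB is consulted first (a hit), and on a miss the value falls back to the internal 6-2 partition; the values of the disjoint sets are then added.

def pattern(state, tiles):
    """Key a pattern by the positions of the given tiles.
    `state` maps tile -> position (blank handling omitted for brevity)."""
    return tuple(state[t] for t in tiles)

def additive_h(state, partitions, idpdb, pdb6, pdb2):
    """Sum heuristic values over disjoint tile sets.

    partitions : list of (eight_tiles, six_tiles, two_tiles) triples
    idpdb      : partial 8-tile PDBs (one dict per partition); a miss falls back
    pdb6, pdb2 : 6-tile / 2-tile PDBs used as the weaker level of the hierarchy
    Returns (heuristic value, number of 8-tile hits).
    """
    total, hits = 0, 0
    for i, (eight, six, two) in enumerate(partitions):
        key8 = pattern(state, eight)
        if key8 in idpdb[i]:                      # a "hit" in the sense of Table 1
            total += idpdb[i][key8]
            hits += 1
        else:                                     # a "miss": fall back to the 6-2 value
            total += pdb6[i][pattern(state, six)] + pdb2[i][pattern(state, two)]
    return total, hits

# Toy usage with fabricated tables (illustration only): one 8-tile set, state at the goal.
state = {t: t for t in range(1, 25)}
partitions = [(tuple(range(1, 9)), tuple(range(1, 7)), (7, 8))]
idpdb = [{}]                                      # empty partial PDB -> guaranteed miss
pdb6 = [{pattern(state, partitions[0][1]): 0}]
pdb2 = [{pattern(state, partitions[0][2]): 0}]
print(additive_h(state, partitions, idpdb, pdb6, pdb2))   # (0, 0)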
We define a hit as a case where a PDB is consulted and actually had an entry for the specific pattern.This is opposed to a miss where that entry was not available and the simpler heuristics were consulted.Often,a stronger heuristic consumes more time per node.Thus,the overall time improvement to solve the problem with a stronger heuristic is less than the reduction in the number of generated nodes.Nevertheless,the actual time is influenced by the effectiveness and effort devoted to the current implementation.For example,using a better hash function or sophisticated data structures for storing entries in the PDB might further improve the running time.A number of methods for reducing the constant time per node when using multiple PDBs lookups were provided by[6].Using as many of these methods further reduces the overall time.The actual time also depends on the hardware and memory abilities of the machine used.We noted that the number of nodes11 per second in all our variations was always between one to two Million.Since the nodes improvement reported below is significantly greater we decided to omit the time reports and concentrate only on the number of generated nodes.As discussed above,we can also omit the time of generating the PDBs which took between30to80minutes for our different variations.This is negligible when compared to the actual search time which was a around18hours on average for the random50instances.Thefirst row of table1uses one6-6-6-6partitioning.The second row is the bench-mark results taken from[8]where the6-6-6-6partitioning was also reflected about the main diagonal and the maximum between the two was used.This reduces the search effort by a factor of4.In the next bunch of rows we had one gigabyte of main memory for the secondary A*.Thefirst row(OD)is the simple case where only a single8-8-8partitioning(of figure5.a)was used and the extended secondary A*search was performed on demand. In a case of a miss in a PDB,we calculated the Manhattan distance(MD)for the tiles in this particular PDB.Here,only7,178,143entries were generated since the open lists of the different8-tile PDBs were stored in memory during the primary search.Note that the hit ratio here is low(27.7)as the size of the PDB is comparably small.Even this simple variation of one8-8-8partitioning reduced the number of generated nodes by more than a factor of three when compared to the one6-6-6-6partitioning.The second row(OD+6)also generated the PDBs on demand.However,here,(and in all the successive rows)we took the maximum between the8-8-8and the6-6-6-6PDBs.This variation outperformed the one6-6-6-6version(line1)by a factor of 5and the benchmark two6-6-6-6version(line2)by18%.The next line(PP)is the preprocessing variation where the entire secondary search for each8-tile PDB was per-formed a priori until memory was exhausted.Here more patterns were expanded by the secondary A*search and therefore we could load25,071,429values to the PDBs. 
This reduced the number of generated nodes to856,917,588.Note that the hit ratio was increased to66%.While the total number of patterns for three8-tile sets is129Billion entries we stored only25Million(a fold factor of4,600)and yet had the relevant value in66%of the times.The fourth line(Improvemnet1)used the6-2partitioning instead of MD when a miss occurred.Thefifth line(Improvement2)only loaded the8-tile values that are larger than the corresponding6-2partitioning.Here we can see a reduction of a factor of four in the number of stored entries.In both variations the number of generated nodes was reduced to714,722,200.With improvement2,the hit ratio dropped to16.6since many of the entries were removed as they were no larger than the6-2partitionings. Note that improvement2squeezed the PDB to a small size which enabled us to store multiple PDBs below.In the next line full advantage of main memory was taken and six different8-8-8partitionings were stored.This reduced the number of nodes to198,851,450.In the last line we used89-9-6PDBs.Here the number of nodes is175,100,719,eight times smaller then benchmark results of two6-6-6-6partitionings.Here we did not report the hit ratio as it was difficult to define it for multiple lookups.We then report similar experiments performed when we had two gigabytes of main memory for generating the PDBs.Here,more patterns were generated and thefinal。
Chemiluminescent Peroxidase Substrate-3Product Codes CPS-3-50, CPS-3-100, and CPS-3-500Storage Temperature 2-8 °CTECHNICAL BULLETINProduct DescriptionChemiluminescent Peroxidase Substrate-3 (CPS-3) can be used for sensitive detection of peroxidase labeled materials in a variety of Western blotting applications. This substrate is an enhanced luminol-based product with a stabilized peroxide buffer solution, providing nanogram sensitivity with minimal background interference.ComponentsThe Chemiluminescent Peroxidase Substrate is available in 3 package sizes each containing the Chemiluminescent Reagent (Product Code C 7364) and the Chemiluminescent Reaction Buffer(Product Code C 7239).Package Size C 7364 C 723950 ml 25 ml 25 ml100 ml 50 ml 50 ml500 ml 250 ml 250 ml Precautions and DisclaimerThis product is for R&D use only, not for drug, household, or other uses. Please consult the Material Safety Data Sheet for information regarding hazards and safe handling practices.Preparation InstructionsPrepare the Working Solution by mixing 1 part of the Chemiluminescent Reagent (Product Code C 7364) with 1 part of the Chemiluminescent Reaction Buffer (Product Code C 7239). Mix well and equilibrate at room temperature for 30 minutes before use. It is recommended to use 0.043 to 0.125 ml per cm2 of blotting membrane. Storage/StabilityThe components should be stored at 2-8 °C and are stable unmixed for a minimum of 18 months when stored in the original container and protected from light. After mixing, the Working Solution is stable for at least 45 days if stored in a tightly capped, light protected container at 2-8 °C.ProcedureChemiluminescent Peroxidase Substrate-3 is sensitive and care must be taken to optimize the concentration of individual assay components (antibodies, conjugates, etc). In a Western blot, an optimized system is needed to minimize background reactivity associated with nonspecific immunochemical interactions. The following is a general guideline for the use of this product. The protocol starts with a post-transfer Western blot membrane.Notes:• For optimal results, the concentration of individual assay components such as the primary andsecondary antibody dilution must be optimized forminimal background and maximum signal.• This product is designed for use only in Western blotting.• Where appropriate, steps 1 through 8 below should be performed with gentle agitation on a rocker or an orbital shaker such that the membrane is freelyfloating.• All incubations should be performed at room temperature.• Gloves must be worn when working with the membrane to avoid contamination.• Azide inhibits horseradish peroxidase (HRP) and should not be used as a buffer preservative for any assay components.21. Remove the membrane from the Western blottingapparatus and wash for 1 minute in either Tris-buffered Saline with TWEEN 20 (TBST, ProductCode T 9039) or phosphate buffered saline withTWEEN 20 (PBST, Product Code P 3563). Notethat either a TBS or PBS system can be used forWestern blotting.2. Block the membrane with an appropriate agent, forat least 30 minutes with gentle agitation. For mostroutine applications, either TBS + 3% milk (Product Code T 8793), PBS + 3% milk (Product Code P 2194), or PBS + 1 % BSA (Product Code P 3688) are recommended.3. Pour off the blocking solution. Dilute the primaryantibody in fresh blocking solution and immediately add to the blot. The final concentration of primaryantibody in this solution usually ranges from 0.2-20 µg/ml.4. 
Incubate the membrane with the primary antibodysolution for at least 30 minutes but no longer than 2 hours, employing gentle agitation at roomtemperature.5. Pour off the primary antibody solution. Wash theblot 3-5 times for 5 minutes each with blockingsolution, TBST, or PBST - to remove any unbound primary antibody.6. Pour off the last wash. Dilute the secondaryantibody-HRP conjugate 1:1,000 to 1:500,000 infresh blocking solution and pour onto the blot.7. Incubate the membrane with the secondaryantibody-HRP conjugate solution for at least 30minutes but no longer than 2 hours, employinggentle agitation at room temperature.8. Remove the blocking solution containing thesecondary antibody and wash the membrane 3-5times for 5 minutes each with TBST or PBST.9. The membrane should then be removed from thewash buffer and any excess liquid drained. Keepthe membrane damp; do not allow the membraneto dry out.10. Place the membrane on a flat sheet of plastic film(or on any clean plastic surface). 11. Add Chemiluminescent Peroxidase Substrate-3Working Solution and incubate for 5 minutes atroom temperature without agitation. Agitation orexcessive movement of the substrate may result in smearing of the substrate signal across the blot. 12. Quickly drain off any excess substrate and placethe membrane in a holder, or wrap in plastic film.13. Expose BioMax light film to the blot for timesranging from 5 seconds to 10 minutes. It is best to perform a quick exposure of 10 to 30 seconds todetermine the exposure time needed. If the signal is too intense even at the short exposure times,allow the signal to decay for 15 minutes up toseveral hours - and then re-expose the film. Related ProductsProduct Name Package Size Product Code TBS 10 packets T 6664 PBS 10 packets P 3813 TBS + 3% milk 10 packets T 8793 PBS + 3% milk 10 packets P 2194 PBS + 5% milk 10 packets P 4739 PBS + 1% BSA 10 packets P 3688 TBS + TWEEN 20 10 packets T 9039 PBS + TWEEN 20 10 packets P 3563Anti-Mouse IgG –Peroxidase2 ml A 9044Anti-Rabbit IgG –Peroxidase1 ml A 05453Troubleshooting GuideProblem TypeCause Solution An insufficient number of wash steps were performed after the primary and/or secondary antibody incubation. Double the number of wash steps. Too much primary antibody used. Lower the concentration of primary antibodyused. Wash more frequently and/or for longer times after the primary antibody incubation.Too much background signal observed.Too much secondary antibody used. Lower the concentration of secondaryantibody used. Wash more frequently and/or for longer times after the secondary antibody incubation.Image is reversed on film (dark background and light bands). Too much secondary antibody used. Lower the concentration of secondary antibody used. Wash more frequently and/or for longer times after the secondary antibodyincubation.Bands on membrane have brown or yellow tone. Too much secondary antibody used. Lower the concentration of secondary antibody used. Wash more frequently and/orfor longer times after the secondary antibody incubation.Too much primary antibody used. Lower the concentration of primary antibody used. Wash more frequently and/or for longertimes after the primary antibody incubation.Non-specific bands show up on membrane. Too much secondary antibody used. Lower the concentration of secondaryantibody used. Wash more frequently and/or for longer times after the secondary antibody incubation.Membrane is stippled. Secondary antibody has some precipitate formation. 
Filter or centrifuge secondary antibody toremove precipitate.Unknown Run a positive control of 1-10 ng/lane on the blot to ensure that the detection system is working properly.Protein levels are too low for detection. Increase exposure time of film and increaselevel of protein loads.Not enough primary antibody used. Use a higher concentration of primaryantibody.No signal is seen with chemiluminescent reaction on membrane. Not enough secondary antibody used. Use a higher concentration of secondaryantibody.TWEEN is a registered trademark of the ICI Group.NW/PHC 12/04Sigma brand products are sold through Sigma-Aldrich, Inc.Sigma-Aldrich, Inc. warrants that its products conform to the information contained in this and other Sigma-Aldrich publications. Purchaser must determine the suitability of the product(s) for their particular use. Additional terms and conditions may apply. Please see reverse side ofthe invoice or packing slip.。
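The volume and dilution arithmetic in the bulletin above is easy to script; the following helper is illustrative only and is not part of the Sigma-Aldrich product documentation. It uses the stated 1:1 mixing ratio of Chemiluminescent Reagent and Reaction Buffer, the recommended 0.043-0.125 ml per cm² coverage, and a user-chosen antibody dilution factor.

def working_solution_volumes(membrane_area_cm2, coverage_ml_per_cm2=0.125):
    """Volumes for the 1:1 working solution (reagent : reaction buffer).
    Coverage defaults to the upper end of the recommended 0.043-0.125 ml/cm2."""
    if not 0.043 <= coverage_ml_per_cm2 <= 0.125:
        raise ValueError("coverage outside the recommended 0.043-0.125 ml/cm2 range")
    total = membrane_area_cm2 * coverage_ml_per_cm2
    return {"total_ml": total, "reagent_ml": total / 2, "buffer_ml": total / 2}

def antibody_volume_ul(final_volume_ml, dilution):
    """Stock antibody volume (microliters) for a 1:dilution dilution,
    e.g. dilution=10000 within the 1:1,000-1:500,000 secondary-antibody range."""
    return final_volume_ml * 1000.0 / dilution

# Example: an 8 x 10 cm membrane and a 1:10,000 secondary antibody in 10 ml.
print(working_solution_volumes(80))      # about 10 ml total, 5 ml of each component
print(antibody_volume_ul(10, 10000))     # 1.0 microliter of stock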
软件学报ISSN 1000-9825, CODEN RUXUEW E-mail: jos@Journal of Software,2013,24(2):391−404 [doi: 10.3724/SP.J.1001.2013.04241] +86-10-62562563 ©中国科学院软件研究所版权所有. Tel/Fax:∗非结构网格的并行多重网格解算器李宗哲, 王正华, 姚路, 曹维(国防科学技术大学计算机学院软件研究所,湖南长沙 410073)通讯作者: 李宗哲, E-mail: lzz144@, 摘要: 多重网格方法作为非结构网格的高效解算器,其串行与并行实现在时空上都具有优良特性.以控制方程离散过程为切入点,说明非结构网格在并行数值模拟的流程,指出多重网格方法主要用于求解时间推进格式产生的大规模代数系统方程,简述了算法实现的基本结构,分析了其高效性原理;其次,综述性地概括了几何多重网格与代数多种网格研究动态,并对其并行化的热点问题进行重点论述.同时,针对非结构网格的实际应用,总结了多重网格解算器采用的光滑算子;随后列举了非结构网格应用的部分开源项目软件,并简要说明了其应用功能;最后,指出并行多重网格解算器在非结构网格应用中的若干关键问题和未来的研究方向.关键词: 非结构网格;computational fluid dynamics(CFD);并行计算;多重网格;高效解算器中图法分类号: TP316文献标识码: A中文引用格式: 李宗哲,王正华,姚路,曹维.非结构网格的并行多重网格解算器.软件学报,2013,24(2):391−404. http://www.jos./1000-9825/4241.htm英文引用格式: Li ZZ, Wang ZH, Yao L, Cao W. Parallel multigrid solver for unstructured grid. Ruanjian Xuebao/Journal ofSoftware, 2013,24(2):391−404 (in Chinese). /1000-9825/4241.htmParallel Multigrid Solver for Unstructured GridLI Zong-Zhe, WANG Zheng-Hua, YAO Lu, CAO Wei(Institute of Software, College of Computer, National University of Defense Technology, Changsha 410073, China)Corresponding author: LI Zong-Zhe, E-mail: lzz144@, Abstract: As an unstructured-grid high efficient solver, the multigrid algorithm, with its serial and parallel application, can achieve theoptimal properties of being on time and having space complexity. To illustrate the numerical simulation process of an unstructured grid,this paper begins with the discretization of governing equations and points out that the multigrid algorithm is mainly used for solvinglarge scale algebraic equation, which is derived from the time marching scheme. For the multigrid algorithm, the study briefly describesits basic structure and efficient principle. Secondly, the paper reviews research that trend about the geometric multigrid and algebraicmultigrid and discusses the basic design principles and hot topics on parallelization. At the same time, for the practical application ofunstructured grid, the paper summarizes and classifies many smoothers, followed by examples of open source software about unstructuredgrid industrial application. 
Finally, some applications and key problems in this field are highlighted, as well as the future progress ofparallel multigrid solver on unstructured grid.Key words: unstructured grid; computational fluid dynamics (CFD); parallel computing; multigrid; efficient solver非结构网格作为计算流体力学(computational fluid dynamics,简称CFD)的一个重要研究分支,对复杂几何外形具有较强的适应性,弥补了结构网格不能在任意形状、任意连通区域内进行网格剖分的缺欠,因而被CFD工程计算广泛采用.与结构网格相比,非结构网格可以使用更加灵活的几何形体.但是,非结构网格的存储格式比较复杂,不仅需要表示节点的坐标,而且需要标识单元的邻接关系.从数值计算角度来看,几何形体与空间离∗基金项目: 国家重点基础研究发展计划(973)(2009CB723803)收稿时间:2011-02-23; 定稿时间: 2012-04-20392 Journal of Software 软件学报 V ol.24, No.2, February 2013散精度密切相关,而网格的数据集存储格式又影响着计算效率[1].一直以来,高精度格式与高效率的数值方法构成了非结构网格研究的两个热点:在高精度格式方面,离散Galerkin [2]、谱体积[3]、谱差分[4]等方法的引入,使得非结构网格在计算格式上具有了二阶以上的计算精度,维持了较强的数值稳定性,在一定程度上满足了数值模拟高分辨率的需求;在高效率算法方面,由于CFD 工程应用的瓶颈往往出现在求解大规模的代数系统方程上,可以说在CFD 建立之初,实现更加快速、有效的求解方法,成为广大CFD 研究者不断追求的目标,因此,经过几十年的发展,各种高效的数值求解方法仍在不断地出现和完善,其中,多重网格方法[5−7]的应用和改进就是这一领域比较活跃的代表,由于其串行的时间和空间复杂度均为O (n ),满足渐进最优的性质,适用于线性和非线性系统方程的求解,且不受控制方程时间和空间离散格式的限制,如显式格式、隐式格式都可以获得较快的收敛速度,因而,非结构网格的高效解算器中必然涉及多重网格方法.同时,非结构网格的CFD 计算问题一般比较庞大,某些问题甚至是串行计算机无法解决的,因而需要大规模并行计算平台的支持.并且,解算器的并行实现可以大量缩短计算时间上的开销,更加真实地模拟和再现物理环境,在很大程度上满足了高精度、多目标、实时性方面的需求.另一方面,非结构网格的CFD 软件开发却远远没有跟上计算机硬件发展的速度[8],且并行扩展能力不高,规模较大的计算问题往往使用较少的计算节点,使得并行计算平台的优越性得不到有效的发挥.另外,随着计算机体系结构的发展,使用CPU+GPU 的计算模式[9,10]越来越多地被CFD 领域所采用.可以看出,非结构网格上的高效解算器必须具有良好的并行可扩展性,适用于不同的硬件环境和计算平台.因而,非结构网格中多重网格的并行化应用研究,也就成为近年来CFD 数值解算器研究的热点问题.本文主要以多重网格求解非结构网格下大规模系统方程为主线,详细分析了其应用研究过程中的技术难题和热点.首先,从非结构网格CFD 的控制方程出发,区别对待空间离散和时间推进格式,引出工程应用中的计算需求以及相应高效求解方法;其次,对多重网格方法的起源、结构和求解原理进行简要的说明,重点阐述几何多重网格和代数多重网格各自研究中的技术热点,归纳其串、并行计算的层次构建方法,并对几何多重网格方法和代数多重网格方法进行比较,同时总结了实际应用中光滑算子的选取策略;最后,简要叙述部分相关开源项目,同时对未来研究方向做进一步的展望.1 控制方程离散化非结构网格下的CFD 数值模拟问题源于对Navier-Stokes 方程或Euler 方程的离散,以3D 非定常可压NS 方程为例,其守恒形式可以表示为 Q F G H S t x y z∂∂∂∂+++=∂∂∂∂ (1) 其中,Q 表示解向量,F ,G ,H 分别表示x ,y ,z 方向上的通量项,S 代表有源项(当体积力和体积热流可以忽略时等于0).方程(1)的通量项离散化方法主要包括有限差分、有限元、有限体积、谱方法等,根据非结构网格存储格式的不同,又可以分为单元中心型(cell-center)和节点中心型(node-center).关于这两种形体格式孰优孰劣的讨论,长期以来一直没有定论.单元中心型的优势在于流场特性的分辨率较高,但计算量大;节点中心型的优势在于计算量小,对网格质量不太敏感.文献[11,12]中对比了6种单元中心格式和两种节点中心格式,从理论分析和数值实验的角度阐明:单元中心最近邻接格式(cell-centered nearest-neighbor,简称CC-NN)适用于无粘通量离散,节点中心格式(node-centered,简称NC)适用于粘性通量离散,各自具有较优的时空复杂度.对于空间离散化过程,以有限体积方法为例,方程(1)最终可以写成半离散化形式: (,)i i i j Q V R Q Q t∂Δ=∂ (2) 其中,ΔV i 表示控制体i 的体积,R 为空间离散格式形成的右端项,Q i 为控制体i 的解向量平均值.需要说明的是,其他空间离散方法也可以得到类似于方程(2)的半离散化形式.至此,空间离散后转化为时间常微分方程的积分过程,即方程(2)的求解.在时间推进方法上,存在显示和隐李宗哲 等:非结构网格的并行多重网格解算器 393式两种方法.比较而言,显示方法的单步计算速度快,内存需求小,数值稳定性强烈依赖于CFL(Courant- Friedrichs-Lewy)数.早期的非结构显格式解算器不仅计算方式简单,时空复杂度也比较高,而且数值稳定性较差.自从Jameson 引入多步Runge-Kutta 方法[13]后,实现了高阶时间精度,随后被广泛用于CFD 商业软件.m 步Runge-Kutta 方法可表述为 (0)()(0)(1)1()Δ()Δn m m m n m i Q Q t Q Q R Q V Q Q α−+⎧=⎪=+⋅⋅⎨⎪=⎩(3) 其中,Q n 和Q n +1分别表示第n 层和n +1层上的值, m 表示步数,αm 为第m 项系数.Jameson 在文献[14]中使用了多重网格化的Runge-Kutta 方法加速Euler 方程的收敛过程,取得了较好的效果.另一类经典的显式方法是总变差递减(total variation diminishing,简称TVD)型的Runge-Kutta 方法[15],具有较强的数值稳定性,其系数αm 相对不固定,因而存在多种极大化CFL 参数的优化版本.其他显式格式方法的相关细节可参见文献[16].相比而言,隐式方法单步计算的时间步长降低了对CFL 数的限制,某些问题的离散化甚至是无条件收敛的,因而数值稳定性更高,引入的问题就是必须求解大型稀疏线性方程组.将方程(2)采用一阶向后Euler 差分,省略中间步骤,可以简化为 1ΔΔ(,)Δn n n n i i j V R I Q R Q Q tQ +⎛⎞∂⋅−⋅=⎜⎟∂⎝⎠ (4) 其中,ΔQ n +1=Q n +1−Q n .上式等价于求解Ax =b 的原型问题.非结构网格与结构网格不同,不能直观地得到矩阵A 的性质,如对角占优、对称正定、三对角型、五对角型等,从而其数值解算器不能完全照搬结构网格.例如,1D 结构网格并行计算使用的Wang 分裂法[17](又称追赶法),只适用于三对角型矩阵的情况.线性系统方程的数值求解,大体可以分为直接法和迭代法两大类.直接法以矩阵分裂为基础,易于实现,理论上可以求得精确解,但计算量、存储量较大,适用于对角型的特殊矩阵;迭代法只提供近似解,计算量存储量较小,易于并行实现.所以,非结构网格的解算器,尤其是并行解算器,在满足相应精度格式的前提下,很少采用直接法,而是采用迭代法求解.迭代法又可分为3类:以矩阵分裂为基础的经典迭代法、以Krylov 子空间为主体的投影方法、层次化的多重网格方法.Krylov 子空间迭代方法通常加入预条件技术,以取得更好的收敛效果[18−20].有关迭代法的具体细节可参见Saad 
的专著[21]和Duff 的综述[22].多重网格方法,无论是经典迭代方法还是Krylov 子空间投影方法,都可以作为多重网格方法的光滑算子,达到消除高频误差的效果.2 多重网格解算器多重网格方法最早产生于20世纪60年代,Fedorenko 和Bakhvalov 开发了两层网格的插值校正算法用于求解泊松型偏微分方程.之后,层次化误差校正的高效性逐渐被人们所认识,且越来越多地应用到数值计算领域.其误差校正的原理是:将高频误差采用光滑算子(如经典迭代方法)消除,低频误差在网格由细层转化为粗层的过程中,相应地转化为粗网格层上的高频误差,同样采用光滑算子进行消除,最后,将计算结果插值到细层网格,修正先前结果,从而达到快速收敛的目的.对于线性方程组的原型问题Ax =b ,定义网格层次,设定A 1=A ,网格空间层次序列Ω1⊃Ω1⊃…⊃Ωm .从空间 Ωi +1向空间Ωi 的插值算子定义为1i i P +,空间Ωi 到空间Ωi +1的限制算子定义为1i i R +,空间Ωi 的光滑算子为S i ,则空间Ωi +1上的矩阵算子111i i i i i i A R A P +++=.具体的多重网格可见算法1.算法1. MG (A k ,x k ,b k ).if k =m then精确计算A k x k =b kelse394Journal of Software 软件学报 V ol.24, No.2, February 2013前光滑,迭代ν1次x k =S k (A k ,x k ,b k )粗网格校正:残差限制,11()k k k k k k b R b A x ++=−设定初值,x k +1=0 k +1次递归,for j =1 to γ do x k +1=MG (A k +1,x k +1,b k +1)误差校正,11k k k k k x x P x ++=+后光滑,迭代ν2次x k =S k (A k ,x k ,b k )endif当γ=1时,该算法的迭代过程被称为V 循环;当γ=2时,该算法迭代过程被称为W 循环.如图1所示,图中原点表示光滑过程,实线表示传递算子(插值、限制算子).多重网格方法不断改进过程中,在这两种循环方式的基础上,发展出了多种循环结构,如N 型循环、FMG(full multigird)、K 型循环等结构.其中,FMG,W,V 循环应用最为广泛.串行计算时,这3种循环方式的空间和时间复杂度均为O (n ).从大量数值实验结果来看,其性能相差并不明显,相对而言,FMG 要好于W 循环和V 循环,W 循环又好用V 循环.另一方面,并行计算环境下,FMG,W,V 循环相 应的复杂度分别为2(log ),((log )O n O n O n .可见,V 循环在并行计算时间上具有较快的收敛特性.Fig.1 V-Cycle, W-cycle and FMG-cycle图1 V 循环、W 循环和FMG 循环为了使得多重网格方法达到理想的性能要求,其设计和应用过程中最关键的两个问题是网格层次结构的构建和光滑算子的选取.网格层次根据几何成网格和代数矩阵的不同,其构建过程也相互独立.无论是几何形式还是代数形式的插值、限制算子,其功能都是用于构建网格层次和各层间的数值传递,因而可以归结为第1个问题的组成部分.而光滑算子的优劣,将严重影响各层上高频误差的消除,决定了收敛速度的快慢;同时,非结构网格计算环境下的多重网格光滑算子在加速收敛、处理各向异性问题等方面又存在其特殊性.因此,下面将分别从几何多重网格、代数多重网格的并行化层次构建、光滑算子选取策略加以说明.2.1 几何多重网格早期的多重网格方法是基于结构网格的几何形体层次结构,因而得名为几何多重网格(geometry multigrid,简称GMG).这种层次结构的构建,由于能够较快地收敛到稳定状态,很快被CFD 计算所采用.最初的应用主要采用显式时间格式和几何多重网格相结合的方法加速其收敛过程[14].之后,根据不同物理问题的计算特性以及非结构网格的具体需求,对几何多重网格的应用和改进[7,23−25]层出不穷,体现出几何多重网格在数值计算上强大的优越性.其中的关键问题是构建几何网格的层次结构,即网格粗化,这也是其应用中最受关注的问题.2.1.1 GMG 串行粗化在网格层次构建的串行算法中,结构网格上,GMG 层次构建比较简单,可以直接通过消除相应坐标方向上的网格线得到更粗的网格.相比之下,非结构网格就显得比较复杂,大体上可以分为4种类型:(1) 从粗网格递归的构建细网格[26],多采用自适应改进策略.优点是建立层次结构算法简单,且容易编程实现,细网格质量得到一定程度的修正.缺点是3D 的网格构建方法不太成熟,且细网格质量严重依赖于粗网格层,数值稳定性与求解精度不高,因此,在大规模的CFD 工业应用中使用较少.(2) 非递归地从细网格层独立构建粗网格层[27,28],采用相互独立的方法生成粗网格,各层节点没有相互李宗哲等:非结构网格的并行多重网格解算器395包含关系.优点是可以根据不同问题量身定制不同的粗网格层次,因而具有较优良的性能.但是,网格层次无法自动化地生成,网格层数较少,插值和误差校正过程比较复杂,多用于节点中心型计算格式.(3) 基于边消除(edge-collapsing)[29,30]策略的网格粗化,先消除原有网格边及其连接点,再对剩余节点重新网格化的构建下层网格.优点是容易控制粗化过程,具有合适的计算复杂度,且自动化程度比较高.缺点是只能应用于节点中心型计算格式,多层次的粗化将改变实际物面外型,对计算精度有一定的影响,且复杂外型的适应能力较差.(4) 通过结块(agglomeration)[31]递归地构建粗网格层次,相互邻近的单元按照一定规则聚合成一个单元,以形成下一网格层次.这种方法适用于各种复杂的几何外形,数值收敛速度较快,且实现简单易于编程,非常适合于并行应用,也是目前应用最为广泛的方法,被Fluent,NSU3D,USM3D等商业软件采用.此外,结块过程中聚合单元的选取具有一定的随机性,生成的粗化层网格质量也表现出一定的不确定性,通常采用启发式规则对其聚合过程进行改良,使其具有更好的形体适应能力.另外,结块式方案还可以与代数多重网格协同开发,发挥更加优良的特性.2.1.2 GMG并行粗化GMG的并行化方法涉及网格分区、各区网格层次的构建、边界数值通信等.(1) 实现有效的网格分区,依赖于非结构网格的类型以及并行系统的体系结构.图划分的问题本身是NP-完全问题,常使用各种启发式规则.典型的图划分策略有贪婪算法、KL算法[32]、递归坐标二分、递归图二分等.对于复杂的几何外形,如多段翼、多孔介质,上述划分容易导致子区域不连通,递归谱二分[33]法可以有效弥补上述方法的缺陷,缺点就是引入大量计算开销.多层分割技术不仅快速有效而且健壮稳定,被图分割软件广泛采用,如Metis[34],ParMetis[35]等.另外,多层分割还可以减小通信和维持平衡负载,一个典型的应用[36]是Mavriplis使用Metis划分飞行器外形的非结构网格(72M节点,315M单元),用于2 008个处理器的并行计算,取得接近线性的加速扩展效果.(2) GMG网格层次的并行构建,使用非递归方式构建网格层次的方法,自动化程度不高,需要借助于网格生成软件,尤其是网格节点规模较大时,手动生成网格层次就变得不现实.所以,并行化的网格层次构建往往采用结块(agglomeration)方式.开源软件MGridGen/ParMGridGen[37]提供了这样的功能,不仅可以定制结块大小,而且可以控制生成网格的质量,其中,MGridGen为串行版本,ParMGridGen为并行版本.在并行计算时,按照网格划分与层次构建的顺序,存在两种方式:一种是网格输入时统一建立网格层次,然后分区到各个处理器上去;另一种是先划分后并行建立网格层次.在前一种方式下,各分区细网格层将继承粗网格层的通信模式,容易导致负载不均衡;后一种方式避免了负载平衡的问题,但初始化过程和通信模式比较复杂.在现实工程应用中,可以根据不同的需求采用不同的方式,更多有关细节可参见文献[38].(3) 
通常,GMG的并行采用完全通信模式,每个网格层次上进行一次边界数据同步.虽然在数值计算的精度上得到了保证,却大量增加了通信开销,降低了并行加速比,不利于并行可扩展性开发,严重影响GMG算法的高效性.为了减少并行计算时的数据交换,增大计算量与通信量的比值,往往采用计算换通信的方式.对于GMG而言,插值和限制的过程并不需要数据更新,而只在分区产生一次完整的GMG循环之后进行数据通信.更有甚者,多次进行完整的并行GMG循环才进行一次边界数据同步[39],其优点是可以有效地减少通信量,带来的问题是数值精度不高,收敛速度较慢.因而无论是完整的通信模式,还是计算换通信的模式,都涉及收敛速度与并行效率的矛盾.如何有效地处理这对矛盾,仍然是今后一段时间内并行计算的数值理论分析与实践应用必须面对的问题.关于更加详细的并行通信模式可参见文献[27,40].值得一提的是,借助高精度计算格式的发展,GMG的非结构网格解算器在数值稳定性、高效收敛性、并行可扩展性的开发上,仍然具有较强的吸引力,与并行GMG网格层次化构建一起组成了当前GMG研究的两大热点,其中最有代表性的是p-Multigrid[24,41]和hp-Multigrid[40]格式,使得GMG解算器在增加数值精度的同时,收敛速度也得到了较大的提高,在Euler方程和NS方程的并行求解上取得近似线性的加速比,有效地弥补了GMG 在并行可扩展性方面的不足.396 Journal of Software软件学报 V ol.24, No.2, February 20132.2 代数多重网格与几何多重网格方法不同,代数多重网格(algebraic multigrid,简称AMG)以系数矩阵为研究对象,不需要存储网格层次的几何信息,也不需要事先确定网格层次结构,对各向异性网格以及无网格问题都具有较强的适应能力,AMG插值、限制算子相对比较灵活,只需要存储对应的系数矩阵,随着问题规模的扩大,可扩展性能较好,不受网格维度的限制;另一方面,在每次AMG迭代过程中都需要计算相应的粗化和插值矩阵,对AMG的高效性有一定程度的影响,因而也是AMG研究的一个热点问题.2.2.1 AMG串行粗化AMG着眼于Ax=b的原型问题.早期的研究侧重于矩阵粗化过程,因为粗化方式决定了插值、限制算子,插值、限制算子用于构建下一层的原型问题,影响网格层次间误差的传递和修正;同时,随着AMG网格层次的不断加深,从理论上讲,对误差的消除越来越迅速.但是单步AMG循环的计算时间就越长,用于矩阵结构的存储也就越大,这些因素最终将影响到AMG高效性.因此,特定问题确定合理的网格层次数,在串行AMG和并行AMG 上都具有重要的意义.遗憾的是,多数实际应用中,网格层数还是通过试探性或经验性的方法来确定,而关于其数值理论的分析更是少之又少.AMG串行粗化方法大体上可以分为两类:(1) 经典的粗化[42]方式,完全将网格节点分割为粗网格点集C和细网格点集F,粗网格节点集C用于构建下一层的整体网格节点,递推地生成多个层次.在误差校正阶段,细网格点集F的值,将由粗网格点集C中相应的点插值得到.为了减小计算量,通常采用距离为1的插值策略.为了平衡数值精度与计算速度的关系,其改进策略中也有条件地加入距离为2的数值插值,同样能够获得快速的收敛效果.(2) 聚合型(aggregation)的粗化[43]方式,满足强耦合条件的多个节点聚合形成一个新的节点,所有新形成的节点构成下一层的网格节点集.这种方式非常类似于GMG中的结块方式,在一定程度上,两种方式是等价的.唯一不同的是,其对象一个是几何形体结构,另一个是矩阵行列.2.2.2 AMG并行粗化基于上述两类方式的并行化,也是近年来的研究热点,相对应的并行化方法也可以分成两大类:(1) 经典粗化方式的并行主要有:基于串行版本的并行化方法,包括降耦合粗化格式RS0[42,44]、耦合粗化格式RS3[42];基于完全并行化的方法,包括CLJP[42]和PMIS[45,46]、子区域块格式[46];以及组合型格式HMIS[45]等.著名的开源软件包Hypre[47]提供了上述各种方法的C++版本的实现,为用户进行二次开发提供了方便.另外,从上述各种并行化策略的优劣性来说,文献[44]的数值实验提供了一定的参照,完全并行化的方法(CLJP,PMIS)以及各种组合方法具有较强适应能力,其计算性能相对较好,表现出渐进收敛因子和算法复杂度相对较低的特点.(2) 聚合型粗化方式的并行化主要包括基于光滑聚合(smoothed aggregation)[48]方式的耦合并行、降耦合并行、并行极大独立集方法以及基于非光滑聚合的双点对聚合(double pairwise aggregation)[49]方法.采用光滑聚合方式,利用对插值算子进行光滑改进收敛性,可以有效地增强聚合型AMG方法的数值稳定性,降低对耦合节点的依赖,提高并行扩展性能.缺点是光滑过程增大了AMG的启动时间,对应的开源软件包有ML[50];而双点对聚合方法,其插值、限制算子虽然没有加入光滑过程,但是使用K-循环的Krylov子空间方法,加速各矩阵层次上的光滑过程,同样表现出快速收敛的特性.随着AMG研究的深入,其矩阵粗化方法的创新和改进层出不穷,如代数光滑误差修正的AMGe[51], ρAMGe[52]、启发式的面中心修正格式[53]、自适应光滑聚类多重网格(αSA)方法[54]、自适应代数多重网格(SA)方法[55]、基偏移(basis shifting approach)型光滑聚类算法[56]等.在众多的代数多重网格及其并行化方法中,仅仅针对Poisson方程或Laplace方程的数值求解,但是现实的CFD计算问题要复杂得多,对非结构网格下的并行开发仍然需要一段时间.文献[25]中调研了4种可行的并行AMG方法:最小区域块、PMIS粗化、光滑聚合、K-循环.在求解弱可压流的CFD工业应用中,从算法复杂度、计算时间上进行对比,得到K-循环方法表现出最优。
arXiv:cond-mat/0106484v2 [cond-mat.mes-hall] 7 May 2002

Optimal orientation of striped states in the quantum Hall system against external modulations

T. Aoyama, K. Ishikawa, and N. Maeda
Department of Physics, Hokkaido University, Sapporo 060-0810, Japan
(February 1, 2008)

We study striped states in the quantum Hall system around half-filled high Landau levels and obtain the optimal orientation of the striped state in the presence of an external unidirectional periodic potential. It is shown that the optimal orientation is orthogonal to the external modulation in the Coulomb dominant regime (the orthogonal phase) and is parallel in the external modulation dominant regime (the parallel phase). The phase boundary of these two phases is determined numerically in the parameter space of the strength and wave number of the external modulation at the half-filled third Landau level.

PACS numbers: 73.40.Hm, 73.20.Dx

The modern semiconductor technology yields extremely pure two-dimensional (2D) electron systems in heterostructures. In the presence of a strong perpendicular magnetic field, various kinds of new phenomena in addition to the ordinary quantum Hall effect are found in these systems. The effect of a crystal structure has been believed to be ignorable, because the magnetic length is much larger than the lattice constant of the host crystal. Hence, the system is supposed to have an orientational symmetry, that is, physics in the x direction and in the y direction is equivalent. At the end of the last century, however, highly anisotropic states, which have an enormous anisotropy in magnetoresistances, were observed around the half-filled high Landau level (LL) [1-4]. This observation agrees with the striped state which was predicted in a mean field theory [5,6]. The charge density of the striped state is uniform in one direction and periodic in the orthogonal direction. The Hartree-Fock (HF) theory and numerical calculations in a small system for striped states were studied recently at the filling factor where the anisotropy is observed, and the results seem to support the striped state [7-10]. The experiments show that the stripe direction is parallel to a specific crystallographic direction. The origin of the orientation of the striped state is still puzzling [11,12].

It is naturally considered that the origin of the orientation is related to a yet unknown and very weak periodic structure in the sample. Therefore, we suppose that the origin that determines the orientation can be modeled by an external modulation in the 2D electron system [13]. We compute the energy of striped states in the quantum Hall system with an external modulation and find the optimal orientation. It is naively expected that the external modulation breaks the orientational symmetry and makes the orientation of the striped state parallel to the modulation. Unexpectedly, we find a counter-intuitive phase in which the optimal orientation of the striped state becomes orthogonal to the external modulation, and the other phase in which the optimal orientation is parallel to the external modulation, depending on the strength and wave number of the modulation. Our results are consistent with the recent experiments [11,14].

Let us consider a 2D electron system in a perpendicular uniform magnetic field B and a unidirectional periodic potential. The total Hamiltonian H of the system is written as H = H0 + H1 + H2,

    H0 = ∫ ψ†(r) [(p̂ + eA)²/2m] ψ(r) d²r,
    H1 = (1/2) ∫∫ ρ(r) V(r − r′) ρ(r′) d²r d²r′,          (1)
    H2 = g ∫ ρ(r) cos(K·r) d²r,

where p̂_α = −iħ∂_α, ∂_x A_y − ∂_y A_x = B, V = q²/r, q² = e
ψ(r) is the electron field, and ρ(r) = ψ†(r)ψ(r). H_0 is the free Hamiltonian, which is quenched in the LL. H_1 is the Coulomb interaction term and H_2 is the external modulation term. We ignore the spin degree of freedom. The electron field is expanded in the momentum states |f_l ⊗ β_p⟩ of the von Neumann lattice (vNL) formalism [15] as

  \psi(r) = \sum_{l=0}^{\infty} \int_{BZ} \frac{d^2p}{(2\pi)^2}\; b_l(p)\, \langle r | f_l \otimes \beta_p \rangle,   (2)

where b_l(p) is the annihilation operator of the momentum state in the l-th LL, p is the dimensionless momentum in the Brillouin zone (BZ) |p_i| ≤ π, a = √(2πħ/eB) is the vNL lattice constant, and r_s is an asymmetry parameter of the lattice. We consider only the l-th LL and ignore LL mixing. Hence H_0 is a constant and we omit the free Hamiltonian. The Fourier transformed density operator ρ̃ in the l-th LL is written in the vNL formalism as

  \tilde\rho(k) = f_l(k) \int_{BZ} \frac{d^2p}{(2\pi)^2}\; b_l^\dagger(p)\, b_l(p + a\hat{k})\, e^{\,i\frac{a}{4\pi}\hat{k}_x(2p_y - a\hat{k}_y)},   (3)

where k̂ = (r_s k_x, k_y/r_s) and f_l(k) = L_l(a²k²/4π) e^{−a²k²/8π}, with L_l the Laguerre polynomial. We substitute Eq. (3) into H_1 and H_2 and find the ground state in two perturbative approaches. In the first approach, a perturbative expansion with respect to g in H_2 is applied, which describes the Coulomb dominant regime. In the second approach, a perturbative expansion with respect to q²/a in H_1 is applied, which describes the external modulation dominant regime. The filling factor is fixed at l + 1/2 in the following, and numerical calculations are performed at l = 2.

(I) H_2 as a perturbation: We obtain the ground state of H_1 in the HF approximation first. Using the HF ground state, we treat H_2 as a perturbation. This approximation is relevant in the Coulomb dominant regime, g ≪ q²/a. We use the striped state |Ψ_1⟩, which is uniform in the y direction, as the unperturbed ground state of H_1. In the HF approximation it is given as [16,9]

  |\Psi_1\rangle = N_1 \prod_{|p_x|\le\pi,\ |p_y|\le\pi/2} b_l^\dagger(p)\, |0\rangle,   (4)

where |0⟩ is the vacuum state for b_l and N_1 is a normalization factor. The Fermi surface is parallel to the p_x axis. The density of this state, ⟨Ψ_1|ρ(r)|Ψ_1⟩, is uniform in the y direction and periodic in the x direction with period a r_s [9,16]. The orthogonality between the Fermi surface in momentum space and the density modulation in coordinate space plays an important role and is reminiscent of the Hall effect.
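The Landau-level form factor f_l(k) introduced above controls how strongly a potential of wave number k couples to electrons in the l-th LL. As a quick numerical illustration, which is not part of the original paper, the Python sketch below evaluates f_2 and brackets its zeros; the resulting values, aK of roughly 2.71 and 6.55, are the wave numbers quoted later (the dashed lines of Fig. 3) at which the modulation loses control of the stripe orientation. The function names and the bracketing intervals are choices of this sketch only.

# Minimal sketch (not from the paper): zeros of the l = 2 form factor
# f_l(k) = L_l(a^2 k^2 / 4 pi) * exp(-a^2 k^2 / 8 pi), in units where a = 1.
import numpy as np
from numpy.polynomial.laguerre import Laguerre
from scipy.optimize import brentq

l = 2
L_l = Laguerre.basis(l)                      # Laguerre polynomial L_2(x)

def f_l(ak):
    x = ak ** 2 / (4.0 * np.pi)
    return L_l(x) * np.exp(-ak ** 2 / (8.0 * np.pi))

# The Gaussian factor never vanishes, so the zeros come from L_2(x) = 0 at x = 2 +/- sqrt(2);
# bracketing in aK and solving gives aK = sqrt(4 pi x), roughly 2.71 and 6.55.
zeros = [brentq(f_l, lo, hi) for (lo, hi) in [(2.0, 3.5), (5.5, 7.0)]]
print(zeros)

The same routine with a different l gives the corresponding nodes of f_l, the wave numbers at which an external modulation cannot orient the stripe.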
The one-particle HF spectrum is given by ε_HF(p_y) = ε_H(p_y) + ε_F(p_y), where ε_H is the Hartree contribution of Eq. (5), a Fourier sum over the harmonics of the stripe period, and ε_F is the Fock (exchange) contribution of Eq. (6), which involves h_l(k) = f_l(√(k_x² + (k_y r_s/a)²)). ε_HF depends only on p_y, and the self-consistency condition for |Ψ_1⟩ is satisfied. The Fermi velocity is in the y direction. The HF energy per particle is calculated as E_HF(r_s) = ⟨Ψ_1|H_1|Ψ_1⟩/N, where N is the number of electrons. E_HF is a function of r_s, evaluated as an integral of the one-particle energy over the Fermi sea |p_y| ≤ π/2; it is minimized at r_s = r_s^min, which fixes the optimal stripe period a r_s^min. The first-order perturbation energy per particle is

  \Delta E^{(1)} = \frac{g}{2N}\, \langle\Psi_1|\,(\tilde\rho(K) + \tilde\rho(-K))\,|\Psi_1\rangle.   (7)

The operator ρ̃(K) moves an electron in the Fermi sea by aK̂ in momentum space. Therefore, except for the case that aK̂_y coincides with a multiple of 2π, ∆E^(1) vanishes. We consider only the range |aK̂_y| < π, which is sufficient to compare our results with experiments.

The second-order perturbation energy per particle, ∆E^(2), is written as

  \Delta E^{(2)}(g,K,\theta) = -\,g^2 f_l(K)^2 \int_{\pi/2 - a\hat{K}_y}^{\pi/2} \frac{dp_y}{2\pi}\; \frac{1}{\epsilon_{HF}(p_y + a\hat{K}_y) - \epsilon_{HF}(p_y)},   (8)

where K̂_y = K sinθ / r_s^min and θ is the angle between the stripe direction and the external modulation. We obtain the total energy per particle in the Coulomb dominant regime as

  E_{Coul}(g,K,\theta) = E_{HF}(r_s^{min}) + \Delta E^{(2)}(g,K,\theta).   (9)

The θ dependence of ∆E^(2) is shown in Fig. 1 at the half-filled third LL. As seen in Fig. 1, the energy is always minimal at θ = π/2, that is, the optimal orientation of the striped state is orthogonal to the external modulation. We call this phase the orthogonal phase. Note that ∆E^(2) vanishes, and the θ dependence disappears, when K equals a zero of f_l(K). In this case the external modulation loses control of the stripe direction.

(II) H_1 as a perturbation: We diagonalize H_2 first by choosing the y axis of the vNL to be parallel to the external modulation and r_s = 2π/aK, that is, the period a r_s of the striped state equals the wave length 2π/K of the external modulation. Using this vNL basis, we treat H_1 as a perturbation. This approximation is relevant in the external modulation dominant regime, g ≫ q²/a. Then the state |Ψ_1⟩ is the ground state of H_2. In this basis the external modulation term reads

  H_2 = -\,|g f_l(K)| \int_{BZ} \frac{d^2p}{(2\pi)^2}\; \cos(p_y)\; b_l^\dagger(p)\, b_l(p),   (10)

so that filling the lower half of this band with the Fermi sea of Eq. (4) gives the energy per particle −(2/π)|g f_l(K)|. The perturbation energy per particle is calculated as E_HF(r_s) = ⟨Ψ_1|H_1|Ψ_1⟩/N with r_s = 2π/aK. Hence, the total energy per particle in the external modulation dominant regime is given by

  E_{ext}(g,K) = E_{HF}(2\pi/aK) - \frac{2}{\pi}\,|g f_l(K)|.   (11)

In this regime the stripes lie parallel to the external modulation, and we call this phase the parallel phase. As seen in Fig. 2, which compares E_Coul and E_ext at aK = 2, the parallel phase has lower energy and is realized at large g.
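For a fixed wave number K, the comparison just made has a simple structure: E_Coul starts from the HF energy at the optimal stripe period and is lowered by a term quadratic in g, while E_ext starts from the generally higher HF energy at the modulation-locked period and is lowered by a term linear in g, so the two branches cross at some g. The Python sketch below, which is not from the paper, only illustrates how such a crossing can be bracketed and solved for numerically; the coefficients C0, C2, E0, C1 are placeholders standing in for the HF quantities at a given aK.

# Schematic sketch of locating the phase boundary g*(K), defined by
# E_ext(g, K) = E_Coul(g, K, pi/2).  The coefficients are placeholders, not HF values.
from scipy.optimize import brentq

C0, C2 = -0.78, 0.05     # E_Coul(g) = C0 - C2 g^2 ; C0 = E_HF at the optimal stripe period
E0, C1 = -0.75, 0.10     # E_ext(g)  = E0 - C1|g| ; E0 = E_HF at the modulation-locked period

def E_coul(g):
    return C0 - C2 * g * g          # second order in g: decreases quadratically

def E_ext(g):
    return E0 - C1 * abs(g)         # first order in the modulation: decreases linearly

def phase_boundary():
    # At g = 0 the Coulomb branch is lower (its stripe period is the optimal one),
    # so E_ext - E_coul starts positive; the first sign change marks the boundary.
    diff = lambda g: E_ext(g) - E_coul(g)
    return brentq(diff, 1e-9, C1 / (2.0 * C2))   # search below the vertex of the quadratic

print(phase_boundary())              # crossing g* (in units of q^2/a) for these placeholders

Repeating this for a grid of aK values, with the actual HF quantities in place of the placeholders, traces out the boundary between regions I and II of Fig. 3.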
The phase boundary is calculated by solving the equation E_ext(g,K) = E_Coul(g,K,π/2) for g at various values of aK. The phase diagram in the g-K plane is shown in Fig. 3, where the orthogonal and parallel phases are indicated by I and II, respectively. The dashed lines correspond to the zeros of f_2(K), aK = 2.714 and 6.550, at which the stripe direction is undetermined. At aK = 2π/r_s^min = 2.544, where the period of the external modulation coincides with the optimal period of the stripe, the phase boundary touches the K axis.

A direct verification of our results can be made by observing a transition between the two phases while tuning the external modulation. The necessary wave length of the modulation for this verification is of the order of 2a, which is about 100 nm at B = 2 T. Recently, a unidirectional lateral superlattice with a period of 92 nm was achieved on top of a 2D electron system [14]. The experiment shows that the magnetoresistance orthogonal to the external modulation has a shallow and broad dent between two peaks around ν = 9/2, while the magnetoresistance parallel to the external modulation does not show the same structure around ν = 9/2. The anisotropy observed in this experiment is small because of the low mobility compared with the experiments on striped states [1-4]. The strength of the modulation is estimated as g = 0.015 meV. The parameters (g,K) = (0.006 q²/a, 3.097/a) correspond to this experimental setting and are shown as the point X in Fig. 4. X belongs to the orthogonal phase. In this phase, the one-particle dispersion has no energy gap in the direction orthogonal to the external modulation and has an energy gap in the direction parallel to it. Hence, the magnetoresistance orthogonal to the external modulation is strongly modified by the injected electric current compared with the parallel magnetoresistance. This is consistent with the experiment. With a slightly larger period of 115 nm, the orthogonal magnetoresistance around ν = 9/2 is structureless. In this case the corresponding parameters, (g,K) = (0.014 q²/a, 2.478/a), are shown as the point Y in Fig. 4 [20]. Y belongs to the parallel phase. Hence, there is an energy gap in the direction orthogonal to the external modulation, and the magnetoresistance in this direction is not strongly modified by the injected electric current. This is also consistent with the experiment. We hope that a similar experiment with a higher mobility sample will give clearer evidence for our results.

In the half-filled lowest LL, an anisotropic effect is observed under an external modulation [21]. A semiclassical composite fermion theory has been proposed for this experiment [22]. In that theory, it is assumed that the anisotropic transport is caused by the density modulation. In contrast, we study the striped state under an external modulation in a half-filled higher LL. Note that the origin of the anisotropy in the present case is spontaneous stripe formation rather than the external modulation.
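As a rough cross-check of how the experimental numbers quoted above for the modulated sample of Ref. [14] translate into the dimensionless units of Fig. 4, the sketch below converts a 92 nm period and a 0.015 meV modulation strength into units of 1/a and q²/a. It is not from the paper: the magnetic field B = 2 T and the GaAs dielectric constant eps_r = 12.9 are assumptions of this sketch, so the output only approximately reproduces the point X = (0.006 q²/a, 3.097/a).

# Rough unit-conversion sketch (not from the paper): express the lateral-superlattice
# parameters of Ref. [14] (period ~92 nm, strength ~0.015 meV) in the units of Fig. 4.
# B = 2 T and eps_r = 12.9 (GaAs) are assumptions of this sketch.
import math

hbar = 1.054571817e-34     # J s
e    = 1.602176634e-19     # C
eps0 = 8.8541878128e-12    # F/m
eps_r = 12.9               # assumed GaAs dielectric constant
B = 2.0                    # T, assumed ("about 2 T" in the text)

l_B = math.sqrt(hbar / (e * B))           # magnetic length
a   = math.sqrt(2.0 * math.pi) * l_B      # vNL lattice constant a = sqrt(2 pi hbar / eB)

period = 92e-9                            # modulation period in metres
aK = 2.0 * math.pi * a / period           # modulation wave number in units of 1/a

q2_over_a = e ** 2 / (4.0 * math.pi * eps0 * eps_r * a)   # Coulomb scale q^2/a in joules
g_over_q2a = (0.015e-3 * e) / q2_over_a                   # 0.015 meV in units of q^2/a

print(round(aK, 2), round(g_over_q2a, 4))  # roughly 3.1 and 0.006, close to the point X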
It seems difficult to understand how the orthogonal phase is realized, contrary to the naive expectation that two striped structures tend to be parallel. To understand the reason, it is convenient to consider the striped state in momentum space. Since the Fermi surface of the striped state is flat, as seen in Eq. (4), a perturbation whose wave number vector is perpendicular to the Fermi surface affects the total energy most strongly. Therefore the orthogonal phase can be realized even by a small external modulation. The point of our theory is that the mean field theory has this flat Fermi surface. Fluctuations around the mean field have been studied, but the discussion seems unsettled [17-19]. Comparisons between experiments with an in-plane magnetic field [3,4,23] and HF calculations indicate that the mean field energy is a good approximation for the striped state [7,8]. Higher order corrections are expected to be small because the Fermi velocity diverges due to the Coulomb interaction [16]. We estimate ∆E^(2) in the RPA approximation to the density correlation function, π^RPA_00(k) = π_00(k)/(1 − Ṽ(k) π_00(k)). The results are shown in Fig. 1 by the dashed lines. As seen in the figure, the correction is indeed small.

In summary, it is shown that a weak external modulation determines the orientation of the striped state and that there are two phases in the 2D parameter space of the strength and wave number of the external modulation: the orthogonal phase and the parallel phase. In the former, the optimal orientation of the striped state is orthogonal to the external modulation; in the latter, it is parallel to the external modulation. The phase diagram is obtained numerically at the half-filled third LL. We believe that our findings shed new light on the origin of the orientation of striped states in quantum Hall systems.

We thank A. Endo and Y. Iye for useful discussions. This work was partially supported by the special Grant-in-Aid for Promotion of Education and Science in Hokkaido University provided by the Ministry of Education, Science, Sports, and Culture, and by the Grant-in-Aid for Scientific Research on Priority Area (Physics of CP Violation) (Grant No. 12014201).

[5] A.A. Koulakov, M.M. Fogler, and B.I. Shklovskii, Phys. Rev. Lett. 76, 499 (1996); M.M. Fogler, A.A. Koulakov, and B.I. Shklovskii, Phys. Rev. B 54, 1853 (1996).
[6] R. Moessner and J.T. Chalker, Phys. Rev. B 54, 5006 (1996).
[7] T. Jungwirth, A.H. MacDonald, L. Smrčka, and S.M. Girvin, Phys. Rev. B 60, 15574 (1999).
[8] T. Stanescu, I. Martin, and P. Phillips, Phys. Rev. Lett. 84, 1288 (2000).
[9] N. Maeda, Phys. Rev. B 61, 4766 (2000).
[10] E.H. Rezayi, F.D.M. Haldane, and K. Yang, Phys. Rev. Lett. 83, 1219 (1999).
[11] R.L. Willett, J.W.P. Hsu, D. Natelson, K.W. West, and L.N. Pfeiffer, Phys. Rev. Lett. 87, 126803 (2001).
[12] K.B. Cooper, M.P. Lilly, J.P. Eisenstein, T. Jungwirth, L.N. Pfeiffer, and K.W. West, Solid State Commun. 119, 89 (2001).
[13] The formation of stripes or strings due to crystal structure has been studied in different systems; see, for example, F.V. Kusmartsev, Phys. Rev. Lett. 84, 530 (2000); D.I. Khomskii and K.I. Kugel, Europhys. Lett. 55, 208 (2001).
[14] A. Endo and Y. Iye, Solid State Commun. 117, 249 (2001); another interpretation of this experiment is the reentrant integer quantum Hall effect, see K.B. Cooper, M.P. Lilly, J.P. Eisenstein, L.N. Pfeiffer, and K.W. West, Phys. Rev. B 60, R11285 (1999).
[15] K. Ishikawa, N. Maeda, T. Ochiai, and H. Suzuki, Physica E 4, 37 (1999); N. Imai, K. Ishikawa, T. Matsuyama, and I. Tanaka, Phys. Rev. B 42, 10610 (1990).
[16] K. Ishikawa, N. Maeda, and T. Ochiai, Phys. Rev. Lett. 82, 4292 (1999); K. Ishikawa and N. Maeda, Physica B 298, 159 (2001); cond-mat/0102347.
[17] E. Fradkin and S.A. Kivelson, Phys. Rev. B
59, 8065 (1999).
[18] A.H. MacDonald and M.P.A. Fisher, Phys. Rev. B 61, 5724 (2000).
[19] R. Côté and H.A. Fertig, Phys. Rev. B 62, 1993 (2000).
[20] A. Endo (private communication).
[21] R.L. Willett, K.W. West, and L.N. Pfeiffer, Phys. Rev. Lett. 78, 4478 (1997).
[22] F. von Oppen, A. Stern, and B.I. Halperin, Phys. Rev. Lett. 80, 4494 (1998).
[23] W. Pan, T. Jungwirth, H.L. Stormer, D.C. Tsui, A.H. MacDonald, S.M. Girvin, L. Smrčka, L.N. Pfeiffer, K.W. Baldwin, and K.W. West, Phys. Rev. Lett. 85, 3257 (2000).

Fig. 1. The θ-dependence of ∆E^(2)(g,K,θ) for aK = 2, 5, 7. The solid lines and dashed lines stand for the HF calculation and the RPA approximation, respectively. The unit of energy is g²a/q².

Fig. 2. The total energy of the striped state obtained in (I) and (II) for aK = 2. The unit of energy and of g is q²/a. The straight line stands for E_ext(g, 2/a) and the parabolic line stands for E_Coul(g, 2/a, π/2). The lower energy state is represented by the bold line.

Fig. 3. The phase diagram of the striped state at the half-filled third LL. The unit of g is q²/a and the unit of K is 1/a. The regions denoted by I and II correspond to the orthogonal phase and the parallel phase, respectively. The dashed lines represent the zeros of f_2(K), at which the stripe direction is undetermined.

Fig. 4. The two points X and Y in the phase diagram stand for the experimental data [14].