European Space Agency
The European Space Agency (ESA) was established in 1975 by an intergovernmental conference, with the aim of providing for and promoting, exclusively for peaceful purposes, cooperation among European states in space research, space technology and their applications.
Its predecessors were the European Space Research Organisation and the European Launcher Development Organisation.
Contents: Overview · Main establishments · Main projects · Main activities · Rocket development

European Space Agency - Overview
ESA has 14 member states: Austria, Belgium, Denmark, Finland, France, Germany, Ireland, Italy, the Netherlands, Norway, Spain, Sweden, Switzerland, and the United Kingdom of Great Britain and Northern Ireland.
Canada has a close cooperation agreement with ESA.
ESA is directed by a Council made up of representatives of the member states; its chief executive is the Director General.
ESA's budget for 1997 was about 3 billion ECU (US$3.5 billion).
It employs a staff of about 1,750.
Member states must take part in the mandatory science and basic technology programmes, but decide for themselves how much to contribute to the optional programmes in Earth observation, telecommunications, space transportation systems, the space station and microgravity.
European Space Agency - Main establishments
(a) The Headquarters in Paris, where policy decisions are made; (b) the European Space Research and Technology Centre at Noordwijk in the Netherlands, ESA's main technical establishment, where most project teams work together with the space science department and the technology research and support engineers.
The European Space Research and Technology Centre also provides the relevant test facilities; (c) the European Space Operations Centre at Darmstadt, Germany, which is responsible for all satellite operations and the corresponding ground facilities and communications networks; (d) the European Space Research Institute at Frascati, Italy, whose main task is to exploit Earth-observation data from space; (e) the European Astronauts Centre at Porz-Wahn, Germany, which coordinates all European astronaut activities, including the training of future European astronauts.
ESA also contributes to the Guiana Space Centre, Europe's spaceport at Kourou.
European Space Agency - Main projects
Galileo positioning system: a planned satellite positioning system. Mars Express: a Mars probe. Rosetta space probe: a comet probe launched in 2004. Columbus orbital facility: a science laboratory for the International Space Station. ATV, the Automated Transfer Vehicle: a space freighter for the International Space Station, comparable to the Progress spacecraft.
Sentinel-2 spectral response functions
Sentinel-2 is a high-resolution Earth-observation satellite developed by the European Space Agency, and its spectral response functions have been studied in detail. A spectral response function describes the relative response of an optical channel as a function of wavelength. In satellite image processing, spectral response functions are essential: they play an important role in radiometric calibration and in estimating surface reflectance. They matter particularly for Sentinel-2 because the satellite acquires a large amount of data across visible and infrared wavelengths: its multispectral instrument provides high-quality remote-sensing data in 13 spectral bands, several in the visible region and the rest in the near- and shortwave-infrared. The spectral response function of each band describes the satellite's response to the radiation reaching it, which combines solar radiation, reflection from the Earth's surface and atmospheric effects.
The spectral response function of a Sentinel-2 band can be determined by measuring the irradiance of a well-characterised light source together with the signal recorded by the satellite's detectors. For example, a blackbody source, taken here at a temperature of 2,650 K, can be used to approximate solar radiation, while an arc lamp can be used to simulate light reflected from the Earth's surface. In this way a spectral response function can be obtained for every band.
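The measurement procedure above uses a blackbody source, quoted at 2,650 K, as a stand-in for solar radiation. As a worked illustration of what such a source emits, here is a minimal Python sketch of Planck's law; only the 2,650 K temperature comes from the text, while the wavelength limits and everything else are assumptions or standard physics.

    import numpy as np

    # Physical constants (SI units)
    H = 6.62607015e-34   # Planck constant, J*s
    C = 2.99792458e8     # speed of light, m/s
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temperature_k):
        """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
        a = 2.0 * H * C**2 / wavelength_m**5
        b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
        return a / b

    # Assumed wavelength grid roughly spanning the Sentinel-2 spectral range
    wavelengths_nm = np.arange(400.0, 2501.0, 1.0)
    radiance = planck_radiance(wavelengths_nm * 1e-9, 2650.0)  # 2,650 K source from the text
    peak_nm = wavelengths_nm[np.argmax(radiance)]
    print(f"Peak emission of the 2,650 K source: {peak_nm:.0f} nm")  # ~1094 nm, per Wien's law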
The concept of the spectral response function is important because it lets scientists compare measurements across bands and extract information from particular band combinations. For some Sentinel-2 bands, for example, the response functions are used to relate one band to another. In demanding applications such as water-body detection, soil-moisture estimation and vegetation monitoring, knowing the exact band response functions is essential. Overall, the Sentinel-2 spectral response functions bring together many factors, including surface reflectance under different ground conditions, atmospheric extinction and the shape of spectral features. Scientists use them to build the functions that connect the signal leaving the Earth's surface to the signal recorded by the satellite, which makes it possible to understand what the sensor actually measures and to correct for errors when retrieving information.
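The paragraphs above describe how the spectral response function connects the surface-leaving signal to what the sensor records in each band. The following is a minimal Python sketch of that idea, with synthetic data and an assumed Gaussian response rather than the actual Sentinel-2 response curves: the band value is the high-resolution spectrum weighted by the band's normalised response.

    import numpy as np

    def band_average(wavelengths_nm, spectrum, srf):
        """Band-average a high-resolution spectrum with a spectral response function.

        All three arguments are 1-D arrays on the same uniform wavelength grid;
        the result is the value the band would report for that spectrum.
        """
        weights = srf / srf.sum()              # normalise the SRF so the weights sum to one
        return float((weights * spectrum).sum())

    # Synthetic example: a flat 0.3 reflectance spectrum and a Gaussian SRF centred at 665 nm
    wl = np.arange(400.0, 1001.0, 1.0)
    reflectance = np.full_like(wl, 0.3)
    srf_red = np.exp(-0.5 * ((wl - 665.0) / 15.0) ** 2)
    print(band_average(wl, reflectance, srf_red))  # ~0.3, as expected for a flat spectrum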
The successful application of these spectral response functions has made Sentinel-2 an advanced remote-sensing tool that helps farmers, conservationists, urban planners and disaster responders make effective use of high-quality satellite data.
Unit 4 Space - Unit Test
I. Multiple choice (15 items, 45 points)

Choose the best option according to the context.
1. We got ________ when we went to different schools last term, but we stayed in touch.
   A. total  B. certain  C. humorous  D. separated

Choose the word whose underlined letters are pronounced differently from the others.
2. In the following words, which underlined letter has a different sound from the others?
   A. land  B. alien  C. ancestor

Choose the best option.
3. Why don't you ________ your friends to the party? I want to meet them.
   A. go  B. have  C. bring  D. arrive
4. It is said that the scientist ________ this planet five hundred years ago.
   A. discovered  B. covered  C. found  D. built
5. I got up early ________ I wasn't late for school.
   A. in order to  B. because  C. though  D. so that

Choose the option that can best replace the underlined part.
6. The woman's identity remains unknown.
   A. not known or identified  B. famous  C. common

Choose the best option.
7. The shoes are a little small for me. I feel ________ when I wear them.
   A. comfortable  B. uncomfortable  C. relaxed  D. interested
8. She ________ some strawberries on the farm last weekend. That was interesting.
   A. made  B. took  C. put  D. picked
9. ________ your room before you go out, please.
   A. Pick up  B. Come up  C. Bring up  D. Get up
10. You have to leave now ________ you can catch the early bus.
   A. so that  B. as soon as  C. because  D. if

Choose the option that can best replace the underlined part.
OLCI bands
1. Overview
OLCI (Ocean and Land Colour Instrument) is an optical sensor developed by the European Space Agency to monitor the colour of the oceans and the land; it flies on the Sentinel-3 satellites. OLCI provides high-quality, high-resolution colour imagery and is important for studying the optical properties of the Earth's surface and environmental change.
2. Characteristics of the OLCI bands
The OLCI bands have the following characteristics:
High spatial resolution. OLCI offers a spatial resolution high enough to capture subtle colour variations across the Earth's surface, which makes it very effective for monitoring and analysing both land and ocean environments.
Multiple spectral bands. OLCI measures in many spectral bands spanning the visible and near-infrared. By measuring reflectance and absorption in different bands, it provides rich optical information that helps scientists study the physical and chemical properties of the Earth's surface.
High sensitivity. OLCI can detect very weak optical signals, which is essential for monitoring subtle colour changes in the ocean and on land and provides timely, accurate feedback on environmental change.
Global coverage. Because OLCI flies on a satellite, it provides coverage of the whole globe, making it an important tool for studying global climate change and for environmental protection.
3. Application examples
Ocean monitoring. OLCI plays an important role in ocean monitoring. By measuring changes in the colour of the sea surface, scientists can retrieve information such as algal (chlorophyll) concentration, dissolved organic matter content and water colour. This information is important for monitoring the health of marine ecosystems, water-quality pollution, and the distribution and migration of plankton.
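The chlorophyll (algal) concentration mentioned above is typically retrieved from the ratio of blue to green water-leaving reflectances. The Python sketch below shows the general shape of such a band-ratio retrieval; the band centres and polynomial coefficients are placeholders chosen for illustration, not the coefficients of any operational OLCI product.

    import math

    # Placeholder coefficients for a generic band-ratio algorithm of the form
    # log10(chl) = a0 + a1*x + a2*x**2 + ..., where x = log10(max blue Rrs / green Rrs).
    # Illustrative values only, NOT those of an operational OLCI chlorophyll product.
    COEFFS = [0.3, -2.9, 1.7, -0.6]

    def chlorophyll_band_ratio(rrs_blue_list, rrs_green):
        """Estimate chlorophyll-a (mg/m^3) from remote-sensing reflectances."""
        x = math.log10(max(rrs_blue_list) / rrs_green)
        log_chl = sum(c * x**i for i, c in enumerate(COEFFS))
        return 10.0 ** log_chl

    # Example: reflectances at assumed blue bands (~443/490/510 nm) and a green band (~560 nm)
    print(chlorophyll_band_ratio([0.0065, 0.0070, 0.0060], 0.0040))  # ~0.5 mg/m^3, open-ocean range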
Land remote sensing. OLCI is also widely used over land, where it provides information on land-cover type, vegetation growth and land use. This is of great help for agriculture, land management, urban planning, and ecological and environmental protection.
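Vegetation monitoring of the kind described here usually relies on the contrast between red and near-infrared reflectance. A minimal Python sketch of a normalized-difference vegetation index follows; treating roughly 665 nm as the red band and 865 nm as the near-infrared band is an assumption made for illustration.

    def ndvi(red_reflectance: float, nir_reflectance: float) -> float:
        """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
        return (nir_reflectance - red_reflectance) / (nir_reflectance + red_reflectance)

    # Dense, healthy vegetation reflects strongly in the NIR and weakly in the red,
    # so the index approaches 1; bare soil and water give values near or below zero.
    print(ndvi(0.05, 0.45))  # ~0.80, typical of healthy vegetation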
Climate-change research. OLCI can also be used to study global climate change. By monitoring colour changes over the oceans and land surfaces, scientists can derive information on atmospheric aerosols, water-vapour content and the carbon cycle. This helps in understanding how the climate system works, in predicting future climate trends and in proposing appropriate responses.
4. Summary
OLCI is an optical sensor for monitoring the colour of the oceans and the land.
VOA Special English: Spacecraft Makes Historic Landing on a Comet
After traveling 10 years and hundreds of millions of kilometers, a small robotic spacecraft has for the first time landed on the surface of a comet, a solar system object made of ice and rock.
The probe launched from the European Space Agency's main Rosetta spaceship early Wednesday. The spaceship is designed to carefully study the appearance and materials that make up Comet 67P/Churyumov-Gerasimenko.
Scientists worried the probe might not land on solid ground, but early reports say the Philae research probe is in good condition.
Stephan Ulamec, of the German Aerospace Center, announced the news. He said the landing equipment and a special device meant to secure the spacecraft to the comet had deployed.
CET-4 Listening: Long Conversations
Below are transcripts of CET-4 long-conversation listening material for reference: the December 2014 CET-4 long conversations and partial answers (unofficial transcript).

Conversation 1
M: That's Marria's family, and we want to be engaged.
W: It's wonderful, Erik! Congratulations!
M: I really like her family, too, very nice. Ms Comona speaks four languages and Mr. Comona is a diplomat. In fact, he gave the speech on Saturday morning.
W: Oh, that was Marria's father? I heard the speech.
M: You did?
W: Well, I heard part of it. I listened to it for ten minutes, and then I fell asleep. You see, it was in class. Anyway, tell me about your weekend.
M: Saturday evening we saw a play, and Sunday afternoon we saw the soccer game. Then Sunday night we all went out for dinner, Marria, her parents, and me. That was the first chance we had to talk.
W: Were you nervous?
M: At first I was. We didn't say much. Mr. Comona told some good stories about his experiences as a diplomat, and he asked about my hobbies.
W: And what did you say?
M: Well, I didn't tell him about my flying lessons. I told him about my chess playing and my classical music collection.
W: Good idea! Her parents really approve of you, don't they?
M: I guess so. Marria called this morning and said, "My father told me he'd like you as a son-in-law right now."
W: That was great.
M: Not exactly. I want to get married after I graduate from school, in about three years.
Q9: What does the conversation say about Marria's father?
Q10: What did Marria and Erik do last Sunday afternoon?
Q11: What do we learn from Marria's phone call this morning?

Conversation 2
M: You're going to wear out the computer's keyboard.
W: Oh, hi!
M: Do you have any idea what time it is?
W: About ten or ten thirty?
M: It's nearly midnight.
W: Really? I didn't know it was so late.
M: Don't you have an early class to teach tomorrow morning?
W: Yes, at seven o'clock, my computer class. The students go to work right after their lesson.
M: Then you ought to go to bed. What are you writing, anyway?
W: An article I hope I can sell.
M: Oh, another one of your newspaper pieces. What's this one about?
W: Do you remember the trip I took last month?
M: The one up to the Amazon?
W: Well, that's what I'm writing about: the new highway and the changes it is making in the Amazon valley.
M: It should be interesting.
W: It is. I guess that's why I forgot all about the time.
M: How many articles have you sold now?
W: About a dozen so far.
M: What kind of newspapers buy them?
W: Papers that carry a lot of foreign news. They usually appear in the big Sunday editions, where they need a lot of background stories to help fill the space between the ads.
M: Is there any future in it?
W: I hope so. There's a chance I may sell this article to a news service.
M: Then your pieces would be published in several papers, wouldn't they?
W: That's the idea. And I might even be able to do other stories on a regular basis.
M: That would be great.
Q12: What is the woman's occupation?
Q13: What is the woman writing about?
Q14: Where do the woman's articles usually appear?
Q15: What does the woman expect?

CET-4 long-conversation practice questions:
Questions 19 to 21 are based on the conversation you have just heard.
19. [A] In a college bookstore.  [B] In a lecture hall.  [C] In a library.  [D] In a dormitory.
20. [A] English.  [B] Biology.  [C] Introduction to English Literature.  [D] A required course.
21. [A] He lives on the 10th floor of Butler Hall.  [B] He never wants to listen to students.  [C] He used to teach biology.  [D] He is an excellent professor.
Questions 22 to 25 are based on the conversation you have just heard.
22. [A] When to move.  [B] Where to live the following year.  [C] How much time to spend at home.  [D] Whose house to visit.
23.
[A] Take some money to the housing office.  [B] Inform the director of student housing in a letter.  [C] Fill out a form in the library.  [D] Maintain a high grade average.
24. [A] Both live on campus.  [B] Both live off campus.  [C] The man lives on campus; the woman lives off campus.  [D] The woman lives on campus; the man lives off campus.
25. [A] Grades.  [B] Privacy.  [C] Sports.  [D] Money.

Conversation One
W: Excuse me, are you going to buy that book?
M: Well, I need it for a class but it's awfully expensive.
W: Oh, we must be in the same class. Introduction to British Literature?
M: Yes, that's the one. Were you there yesterday for the first class?
W: I sure was. Professor Robert really seems to know his subject.
M: Yes, I took his Shakespeare course last semester and it was very good. He likes listening to his students.
W: That's a relief. I'm a biology major and I was a little uncertain about taking an English course.
M: I'm an English major and this is a required course. But now I'm in trouble because I'm not sure I can afford this book.
W: Hey, I've got an idea. Why don't we split the cost and share the book?
M: Sounds great. Do you live on campus?
W: Yeah, I live on the 10th floor of Butler Hall.
M: Perfect. I live on the 3rd floor of Butler. We should have no trouble sharing the book. I can bring it up to your room right after I wrap up the assignment.
W: It's a deal.

Questions 19 to 21 are based on the conversation you have just heard.
19. Where is the conversation most probably taking place?
Answer: [A].
Space English: Key Points
1. Why learn space English?
Space English refers to the vocabulary and expressions used in the space field. It matters a great deal to anyone interested in space exploration and aerospace technology: whether working aboard the International Space Station or taking part in a space mission, a command of space English helps you communicate with international space organisations and astronauts and follow the related scientific research and technical developments.
2. Common vocabulary and expressions
Some common space English terms: spacecraft, astronaut, galaxy, orbit, lunar, solar system, gravity, extraterrestrial, black hole, meteorite.
3. Space missions and exploration
Space missions and exploration are one of the main areas of space English. Common missions and projects include: Mars missions, satellite launches, spacewalks, the International Space Station (ISS), and Moon landings.
4. Cosmic phenomena and celestial objects
The universe contains many mysterious and fascinating phenomena and objects, and knowing the English terms for them helps us better understand the mysteries of the universe.
Examples: galaxies and nebulae, constellations, solar wind, craters, interstellar dust.
5. Space science and research
Learning space English also involves vocabulary from space science and research, for example: interstellar navigation, space probe, galaxy evolution, formation of the solar system, cosmic expansion.
6. International cooperation and space organisations
International space activities are carried out jointly by many countries and space organisations.
Introduction to the "Horizon 2020" programme
The European Space Agency (ESA) recently announced on its official website a research and innovation activity named "Horizon 2020" (H2020), calling on organisations and institutions involved in global satellite navigation to apply for Galileo-related projects. H2020 is one of the European Union's measures for bringing research results to market; following it gives insight into the EU's approach to commercialising research and to promoting the Galileo and EGNOS systems, and offers a useful reference for promoting the BeiDou navigation satellite system. The programme is introduced briefly below.
1. Background
The European Union (EU) draws up a strategic development plan for each coming decade. Its plan for 2010-2020 (the Europe 2020 Strategy) makes safeguarding Europe's international competitiveness a priority. The European Commission (EC), as the EU's executive body, launched a series of activities known as the Innovation Union (IU) to advance this ten-year plan. The Innovation Union activities fall into six categories comprising nearly thirty initiatives: for example, an annual Innovation Union Scoreboard is compiled to spur on EU countries that lag in innovation and to narrow the gaps in national investment, and at least 500 universities worldwide are assessed and ranked from an investor's perspective, to steer private funding toward academic institutions and shorten the distance between research and the market. The Innovation Union activities aim to turn as many research results as possible into services and products, thereby raising Europe's international competitiveness, improving living standards, creating jobs and achieving sustainable socio-economic growth.
The Framework Programme for Research and Innovation 2014-2020 (FP), also known as "Horizon 2020" (H2020), is one of the Innovation Union activities; it pushes research results toward the market through open calls, evaluation and subsequent funding. In other words, H2020 is a financial instrument organised around projects, a platform through which funding flows, and it is the EU's largest research and innovation programme to date.
Strategies for Parallelizing a Navier-Stokes Code on the Intel Touchstone Machines

Jochem Häuser, European Space Agency
Roy Williams, California Institute of Technology

Abstract

The purpose of this paper is to predict the efficiency of the Navier-Stokes code NSS*, which will run on an MIMD architecture parallel machine. Computations are performed using a three-dimensional overlapping structured multi-block grid. Each processor works with some of these blocks, and exchanges data across the boundaries of the blocks. The efficiency of such a code depends on the number of grid points per processor, the amount of computation per grid point, and the amount of communication per boundary point. In this paper, we estimate these quantities for NSS*, and present measurements of communication times for two parallel machines, the Intel Touchstone Delta machine and an Intel iPSC/860 machine, consisting of 520 and 64 Intel i860 processors respectively. The peak performance of the Delta machine is 32 Gflops. Secondly, it is shown how, starting from a 7-block grid of about 5 000 000 points for the Hermes space plane, a mesh of 512 equally sized blocks is constructed, retaining the original topology. This example demonstrates that multiblock grids provide sufficient control over both the number and size of blocks. Therefore, it will be possible to simulate realistic configurations on massively parallel systems with a specified number of processors, while achieving good quality load balancing.

1. Computational Requirements in Aerospace Design

The design and analysis of new generations of aircraft and hypersonic vehicles requires accurate values for critical quantities. For example, a drag reduction of 0.5% for a new aircraft is considered substantial progress; surface temperatures for the European space plane Hermes must be known within 50 K during the reentry phase when maximum heat flux occurs.

The National Aerospace Plane (NASP) in the United States and the Sänger two-stage-to-orbit vehicle in Germany form a new generation of airbreathing vehicles which demand integration of the airframe and propulsion systems. Wind-tunnel testing is very difficult in the extreme flight conditions of these vehicles [1], so that their design and analysis will be based to a large extent on numerical simulation including viscosity, nonequilibrium chemistry and combustion. As the air starts to react, new species are created, which are described by additional conservation equations. A multitemperature model may be needed to compute the vibrational temperatures of the diatomic species. At temperatures above 10,000 K, ionization occurs, resulting in additional species. Combustion models for hydrogen/air may have 16 species.

Hence the problem is to solve a large system of coupled nonlinear PDEs in three dimensions with a complex geometry. In general the time scales of the flow and the chemistry are different, leading to a stiff system. To adequately resolve the viscous boundary layer, the grid must be clustered near the surface of the vehicle, leading to highly stretched meshes and thus reducing the convergence rate.

For airbreathing vehicles, we must also model the transition from laminar to turbulent flow. Turbulence modelling increases the computational cost by either calculating the turbulent viscosity locally, or by introducing a turbulent transport model such as the k-ε model (k is the turbulent energy and ε the dissipation rate).
The number of physical variables is also increased, and more simulations are required because of the uncertainty of turbulence modelling.

From the foregoing it is clear that such accuracy can only be achieved with very fine meshes. Present calculations use one to three million grid points, while a satisfactory solution would really need 10 million. We give as an example the computing time for the NSS* code for the Space Shuttle using 200 000 points. Running on one processor of a Cray YMP, each iteration takes eight seconds and convergence of the pressure is achieved in 2000 iterations, making a total running time of about five hours. Convergence of the heat flux solution would require a further factor of five. The total number of floating point operations for a convergent heat flux solution on a mesh of 10 million grid points is about 5x10^14. If a one Teraflop machine were available with an assumed sustained performance of 250 Gigaflops (optimistic), it would take approximately 2 000 seconds to solve the heat flux problem. An unsteady Navier-Stokes simulation on the same grid would take approximately 8 000 seconds because of the reduction in CFL number. However, as mentioned above, inclusion of a turbulence model and nonequilibrium effects would demand a much higher computational effort. If aeroelasticity effects have to be accounted for, i.e. a structural code has to be coupled to the unsteady Navier-Stokes solver, the computational time will increase further. These calculations are valid for a single point of the flight trajectory. To simulate a complete flight envelope, 10 to 20 points may be needed. Recently, the coupling of electrodynamic effects (Maxwell's equations) with the Navier-Stokes equations has become a topic of research. In addition, if CFD is going to play a role in the aerodynamic design process, some sort of shape optimization for a vehicle has to be possible, demanding a new level of performance in supercomputing.

From these mostly qualitative arguments it is obvious that, for advanced aerospace applications, the Teraflop machine can only be the first major step in bringing CFD to a level where it can be used as a design tool. It is also safe to claim that the distributed-memory MIMD architecture is highly suitable for complex aerodynamic simulations.

2. Interprocessor Communication

MIMD parallel computers offer great increases in speed over conventional supercomputers, and at a much lower cost per megaflop, but with increased software cost. Before making the effort to parallelize a code such as NSS*, it is of great interest to predict the inefficiency, which is defined as

    inefficiency = P T_P / T_1 - 1

where P is the number of processors, T_P is the time taken to run on these P processors, and T_1 is the time taken to run on a single processor. If the reasons for inefficiency are independent, inefficiencies may be added. Many parallel codes, including NSS*, have an inner loop structure of loosely synchronous cycles of computing and communicating, where the communication is exchanges of data between processors whose physical domains are adjacent. If this is the case, there are usually two contributions to inefficiency: load-balance inefficiency and communication inefficiency [2,3]. Load-balance inefficiency occurs because the processors have different amounts of computation to perform, which for NSS* means different numbers of grid points.
The algorithm then runs at the speed of the processor with the largest number of grid points, so the load-balance inefficiency is the maximum number of grid points divided by the average number of grid points, minus one.

Communication inefficiency arises from the time taken for messages to pass between the processors. We first measured the communication rates between just two processors simply exchanging messages. The Intel NX programming environment supports two ways of receiving a message, crecv and irecv. A call to crecv blocks until a message has arrived and been copied into the buffer provided by the application code; thus an exchange of data between two processors consists of a csend then a crecv call. In contrast, the irecv mechanism consists of a processor first "posting a receive", nominating a buffer into which incoming messages are to be placed, then sending to the other processor with csend, then calling msgwait, which blocks until the expected message has actually been received.

Figure 1 shows the measured communication bandwidth for message exchange between two processors, in megabytes per second, against the size of the exchanged messages. The iPSC/860 machine achieves 2.80 Mbyte/sec, and the Delta 9.80 Mbyte/sec, for exchange of messages between two adjacent processors.

The NSS* code [4] uses an implicit cell-centered finite-volume scheme, combining an oscillation-free low-order scheme with a third-order MUSCL scheme by a local flux limiter function. Since the finite volume technique and the multiblock approach are widely used, the same or similar efficiency can be expected for a large class of Navier-Stokes solvers. The solution domain consists of a connected set of logically rectangular blocks [5] in three-dimensional space. Each block has at most six neighbor blocks, although some at physical boundaries have fewer neighbors. However, because of the complex geometry of the vehicle, the connectivity of these blocks is irregular. If there are fewer blocks than processors, the blocks may be split judiciously until each processor has one or more of the subblocks. The splitting should be made so that each processor is in charge of approximately equal numbers of grid points. As an example, we shall consider a multiblock grid around the Hermes space plane, made with seven blocks. To use this grid with a massively parallel machine, with perhaps 520 processors, we must split these into many more smaller blocks, and decide which processor is to take which of these smaller blocks.
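Section 2 defines the load-balance inefficiency as the largest number of grid points held by any processor divided by the average, minus one; the splitting of the Hermes grid described here aims to drive this quantity to zero. The following minimal Python sketch (not part of NSS*) shows that bookkeeping with a hypothetical assignment of block sizes to processors.

    def load_balance_inefficiency(points_per_processor):
        """Largest grid-point count per processor divided by the average, minus one."""
        avg = sum(points_per_processor) / len(points_per_processor)
        return max(points_per_processor) / avg - 1.0

    # Hypothetical example: three processors hold equal 25x25x13 blocks,
    # while a fourth holds a block twice that size and so limits the computation phase.
    blocks = [25 * 25 * 13, 25 * 25 * 13, 25 * 25 * 13, 2 * 25 * 25 * 13]
    print(load_balance_inefficiency(blocks))  # 0.6: the largest block is 60% above the average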
The code runs in loosely synchronous cycles of computation followed by data exchange across the faces of the blocks. Each processor computes with the data of its block independent of the others, then exchanges data with its neighboring blocks. The exchange consists of sending messages to each of the neighbor blocks, where the size of each message is proportional to the number of grid points on the common face. The processor then waits to receive the corresponding messages from the neighbor processors, and when these are received, the cycle starts again with the computation phase. The load-balance inefficiency results from processors having different numbers of grid points, and the communication inefficiency results from the time required for message-passing between the blocks. We should note at this point that even the sequential code contains a kind of communication inefficiency because of the multiblock structure; the data from the face of the sending block must be taken from memory and ordered, then unpacked into the memory associated with the receiving block. Even if no message is to be sent, such as with a sequential code, this packing and unpacking must still take place.

Figure 2 shows a logical or computational view of the seven blocks from the Hermes grid with their connectivity and the numbers of grid points in each direction. The total grid size is 4 160 000, obtained by multiplying together the numbers of grid points for each block and summing over the seven blocks. The thick lines connecting the blocks indicate that two blocks have the same grid dimensions on the corresponding faces, which is necessary to guarantee a slope-continuous grid. Figure 3 shows the way in which this 7-block grid may be split into 512 subblocks. We have chosen a different texture for each splitting; each texture in the figure represents a different way of splitting the grid, there being five splitting planes in all, including the splitting in the radial direction. With the splitting as shown, each block has size 25x25x13, with 13 grid points in the radial direction. By splitting only four instead of eight times in the radial direction, we could also make 256 blocks of size 25x25x26.

4. Communication Modelling

In addition to the block of data owned by each processor, there are two surrounding layers of "ghost" points, which supply data continuity from the neighbor block for the computation. There are two layers of these ghost points because of the third-order nature of the solution scheme used by NSS*.

If the processors have different numbers of grid points, we may expect some load-balance inefficiency; indeed we expect that the time for the computational phase of the algorithm cycle is determined by the processor with the largest number of grid points. The splitting of the Hermes grid discussed above has the same number of grid points in each block, so that inefficiency results only from the communication part of the cycle. If the computational time is to be modelled more accurately, we must also include the time taken to prepare and interpret the outgoing and incoming messages. This time is different for the different blocks, because interior blocks have six neighboring blocks, whereas those at the physical boundary may only have one or two neighbors.

For the purpose of estimating efficiency, we need to know some information about the algorithm:
N, the total number of grid points;
P, the number of processors being used;
F, the number of floating point operations (flops) per grid point per iteration;
B, the number of bytes per grid point to be exchanged between blocks;
and we also need some information about the machine on which the algorithm is to run:
T_flop, the time for a processor to do a flop;
T_cube, the time per byte for a cubic exchange.

Since the blocks are equal, each block contains N/P points. Thus we may define the linear dimension of each block to be l = (N/P)^(1/3). The amount of data to be communicated for each block is then 12Bl^2 per iteration; a factor 2 comes from the two layers of grid points to be exchanged at each face, and a factor 6 because each block has at most six faces. The number of flops per iteration is Fl^3. Thus the communication inefficiency is the communication time divided by the computation time, which is

    12 B l^2 T_cube / (F l^3 T_flop)

For the NSS* code, the algorithm parameters might be estimated as follows.
As noted in Section 3, we take the number of grid points N = 4 160 000 and the number of processors P = 512, so that l = 20. The number of flops per grid point per iteration is about F = 5000 for this low/high-order code with a real gas model. The data exchanged across the boundaries are the five primitive variables: density, energy and three components of momentum; so that B = 5 words = 40 bytes. We have taken the time for a floating point operation to be T_flop = 0.2 µs, corresponding to a conservative 5 Mflops for the i860 processor. The computation time per cycle is thus Fl^3 T_flop = 8 seconds.

We have measured T_cube with the following model of the parallel NSS* code. Our model consists of a cubic lattice of equal blocks, so that each processor has exactly the same number of grid points. Each block has six faces, and on the other side of each face there may or may not be another processor. To simulate the irregularity caused by the geometry of the solution domain, we have made random assignments of the blocks to the processors of the machine. Clearly random assignment is not optimal; there are schemes [6] for assignment of processors to blocks which tend to place adjacent blocks in adjacent processors, and such a scheme would presumably improve the communication performance.

Figure 5 shows the results for bandwidth. For the parameters given above, the number of bytes to be exchanged by each block is 12Bl^2 = 0.192 Mbyte, so that the cubic exchange takes 0.16 seconds on 512 processors of the Delta machine. Thus our measurements predict that each iteration of the NSS* code with 4 million grid points takes about 8 seconds, of which about 0.16 seconds is taken with communication, so that the efficiency is greater than 95%.
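As a quick check of these numbers, the following minimal Python sketch (not part of NSS*) reproduces the arithmetic of the model above; the per-byte exchange time is backed out from the measured 0.16-second exchange quoted above, so treat that value as an assumption of the sketch.

    # Parameters from Section 4 for NSS* on the Intel Touchstone Delta
    N = 4_160_000        # total grid points
    P = 512              # processors
    F = 5000             # flops per grid point per iteration
    B = 40               # bytes exchanged per grid point (5 words)
    T_FLOP = 0.2e-6      # seconds per flop (conservative 5 Mflops on the i860)

    l = (N / P) ** (1.0 / 3.0)            # linear block dimension, ~20 points
    compute_time = F * l**3 * T_FLOP      # ~8 s per iteration

    exchange_bytes = 12 * B * l**2        # ~0.192 Mbyte per block per iteration
    exchange_time = 0.16                  # s, measured on 512 Delta processors (assumed here)
    T_CUBE = exchange_time / exchange_bytes  # per-byte cubic-exchange time implied by the measurement

    comm_inefficiency = (12 * B * l**2 * T_CUBE) / (F * l**3 * T_FLOP)
    # Prints roughly "compute 8.1 s, communicate 0.16 s, predicted efficiency ~98%",
    # consistent with the ">95%" figure quoted in the text.
    print(f"compute {compute_time:.1f} s, communicate {exchange_time:.2f} s, "
          f"predicted efficiency ~{100 * (1 - comm_inefficiency):.0f}%")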
5. Conclusions

The communication timings indicate that a complex practical calculation, such as the accurate solution of the Navier-Stokes equations for a realistic configuration, can be carried out on these machines with high efficiency. There are four main reasons for this high predicted efficiency:

Complex algorithm: Solving the Navier-Stokes equations rather than the simpler Euler equations means that derivatives of the physical fields are required, increasing the computation per grid point. Furthermore, using a real gas model rather than an ideal gas, as well as the use of simultaneous low- and high-order schemes, and the semi-implicit nature of the code all increase the amount of computation per grid point, and hence improve the efficiency.

Hardware speed: We have made preliminary measurements of the communication rate on the Intel Touchstone Delta machine, the result being an increase of a factor of 3.5 over the older generation iPSC/860 machine.

Memory: Each processor of the Delta machine has a memory of 16 megabytes, so that each processor may work on a large block, reducing the surface-area-to-volume ratio of the block and thus decreasing the amount of communication relative to calculation.

Grid partitioning: Along with inefficiency caused by communication overhead, the code would also be inefficient if the processors were unbalanced by having different numbers of grid points in each. We have shown that it is possible to split a 7-block grid into 512 equal subblocks, so that this cause of inefficiency is eliminated. Furthermore, it is possible to make this splitting such that the subblocks have reasonably low surface-area-to-volume ratios, so that the communication inefficiency is low.

Acknowledgements

We would like to thank the Concurrent Supercomputing Consortium for our use of the Delta and iPSC/860 machines, and Horst D. Simon of the Computer Science Corporation, NASA Ames Research Center, for useful discussions.

References
1. H. Hornung and B. Sturtevant, Challenges for High-Enthalpy Gasdynamics Research During the 1990's, Caltech Graduate Aeronautical Lab. Report FM 90-1, January 1990.
2. G. Fox, M. Johnson, G. Lyzenga, S. Otto, J. Salmon and D. Walker, Solving Problems on Concurrent Processors, Prentice-Hall, New York, 1990.
3. R. D. Williams, Express: Portable Parallel Programming, pages 347-352 in Proc. Cray User Group, Austin, Texas, October 1990.
4. J. Häuser and H. D. Simon, Aerodynamic Simulation on Massively Parallel Systems, to be published in Parallel CFD '91, eds. W. Schmidt, A. Ecer, J. Häuser and J. Periaux, Elsevier North-Holland, 1991.
5. J. Häuser, H. G. Paap, H. Wong and M. Spel, A General Multiblock Surface and Volume Grid Generation Toolbox, in Numerical Grid Generation in Computational Fluid Dynamics and Related Fields, eds. A. Arcilla, J. Häuser, P. R. Eiseman and J. F. Thompson, Elsevier North-Holland, 1991.
6. R. D. Williams, Performance of Dynamic Load Balancing Algorithms for Unstructured Mesh Calculations, Concurrency: Practice & Experience, to be published.