Numerical simulation of urban heat island and local circulation characteristics under complex terrain

SUN Yong, WANG Yongwei, GAO Yanghua, WANG Kefei, HE Zeneng, DU Qin, CHEN Zhijun

Abstract: Using the WRF model coupled with the multilayer urban canopy scheme BEP (Building Environment Parameterization) and the BEM (Building Energy Model) scheme, which accounts for indoor air-conditioning systems, a simulation was conducted to explore the characteristics and causes of the Chongqing urban heat island and the impact of local circulation on it. Two simulation cases were run: an URBAN case using the real Chongqing land-use data, and a NOURBAN case in which the urban category was replaced with cropland in order to isolate the impact of the city on the Chongqing heat island. The results show that: (1) The WRF model reproduces the observed 2 m air temperature well. Errors occur mainly at the noon temperature peak and the early-morning temperature valley, and are caused by the characteristics of the urban land use and by unrealistic building parameters. (2) The BEP+BEM scheme simulates the spatial and temporal features of the Chongqing urban heat island well. The temperature distribution in Chongqing is influenced by both topography and the urban underlying surface: the closer to the city, the more the temperature distribution is affected by urbanization, and temperatures are higher at low elevations. (3) The three-dimensional urban surface traps radiation, so the overall reflectivity (albedo) of the urban surface is low, and upward shortwave radiation over the city is about 20 W/m² less than over the suburbs. Sensible heat dominates the urban energy balance, whereas latent heat dominates in the suburbs. The larger heat storage of the urban surface and the waste heat released to the atmosphere by air conditioners at night are important reasons for the formation of the urban heat island. (4) The background wind in the simulated area is mainly southeasterly. Wind speed is higher over the mountains and lower over the urban area, reflecting the aerodynamic effect of dense urban buildings on the low-level flow field as well as the mountain-valley wind circulation over the complex valley terrain. High mountains on the western and southeastern sides of the city block the outflow from the city, forcing the background wind to climb over or flow around the mountains, which contributes to the enhancement of the urban heat island.

Journal: Transactions of Atmospheric Sciences (大气科学学报), 2019, 42(2): 280-292 (13 pages).
Keywords: urban heat island; WRF model; urban canopy scheme.
Affiliations: Atmospheric Environment Center, Nanjing University of Information Science and Technology, Nanjing 210044, China; Chongqing Institute of Meteorological Sciences, Chongqing 401147, China.

The formation of high-temperature weather is influenced by factors such as radiative warming, advective warming, and adiabatic subsidence warming (Zhang Shangyin et al., 2005).
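The URBAN/NOURBAN paired-run design above isolates the urban contribution to near-surface temperature. As a generic illustration (a sketch, not the paper's workflow), the heat-island signal can be computed as the difference between the two runs' simulated 2 m temperature fields; the array names, shapes, and values here are invented stand-ins for fields that would normally be read from WRF output files.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2 m temperature fields (K) from the two WRF runs,
# shape (time, ny, nx); in practice these would be read from wrfout
# files with a NetCDF reader.
t2_urban = rng.normal(300.0, 1.0, size=(24, 120, 120))
t2_nourban = t2_urban - rng.uniform(0.0, 2.0, size=(24, 120, 120))

# Urban heat island signal: per-grid-cell difference between the run
# with the real urban surface and the run with urban replaced by crop.
uhi = t2_urban - t2_nourban           # (time, ny, nx), K

# Domain-mean diurnal cycle of the UHI intensity.
uhi_diurnal = uhi.mean(axis=(1, 2))   # one value per output hour
print("Peak simulated UHI intensity: %.2f K" % uhi_diurnal.max())
```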
Modeling the Spatial Dynamics of Regional Land Use: The CLUE-S Model

PETER H. VERBURG*
Department of Environmental Sciences, Wageningen University, P.O. Box 37, 6700 AA Wageningen, The Netherlands; and Faculty of Geographical Sciences, Utrecht University, P.O. Box 80115, 3508 TC Utrecht, The Netherlands

WELMOED SOEPBOER, A. VELDKAMP
Department of Environmental Sciences, Wageningen University, P.O. Box 37, 6700 AA Wageningen, The Netherlands

RAMIL LIMPIADA, VICTORIA ESPALDON
School of Environmental Science and Management, University of the Philippines Los Baños, College, Laguna 4031, Philippines

SHARIFAH S. A. MASTURA
Department of Geography, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor, Malaysia

ABSTRACT / Land-use change models are important tools for integrated environmental management. Through scenario analysis they can help to identify near-future critical locations in the face of environmental change. A dynamic, spatially explicit, land-use change model is presented for the regional scale: CLUE-S. The model is specifically developed for the analysis of land use in small regions (e.g., a watershed or province) at a fine spatial resolution. The model structure is based on systems theory to allow the integrated analysis of land-use change in relation to socio-economic and biophysical driving factors. The model explicitly addresses the hierarchical organization of land-use systems, spatial connectivity between locations, and stability. Stability is incorporated by a set of variables that define the relative elasticity of the actual land-use type to conversion. The user can specify these settings based on expert knowledge or survey data. Two applications of the model, in the Philippines and Malaysia, are used to illustrate the functioning of the model and its validation.

KEY WORDS: Land-use change; Modeling; Systems approach; Scenario analysis; Natural resources management

*Author to whom correspondence should be addressed; email: pverburg@gissrv.iend.wau.nl
DOI: 10.1007/s00267-002-2630-x. Environmental Management Vol. 30, No. 3, pp. 391-405. © 2002 Springer-Verlag New York Inc.

Land-use change is central to environmental management through its influence on biodiversity, water and radiation budgets, trace gas emissions, carbon cycling, and livelihoods (Lambin and others 2000a, Turner 1994). Land-use planning attempts to influence the land-use change dynamics so that land-use configurations are achieved that balance environmental and stakeholder needs. Environmental management, and land-use planning specifically, therefore need information about the dynamics of land use. Models can help to understand these dynamics and project near-future land-use trajectories in order to target management decisions (Schoonenboom 1995).

Environmental management, and land-use planning specifically, take place at different spatial and organisational levels, often corresponding with either eco-regional or administrative units, such as the national or provincial level. The information needed and the management decisions made are different for the different levels of analysis. At the national level it is often sufficient to identify regions that qualify as "hot-spots" of land-use change, i.e., areas that are likely to be faced with rapid land-use conversions. Once these hot-spots are identified, a more detailed land-use change analysis is often needed at the regional level.

At the regional level, the effects of land-use change on natural resources can be determined by a combination of land-use change analysis and specific models to assess the impact on natural resources. Examples of this type of model are water balance models (Schulze 2000), nutrient balance models (Priess and Koning 2001, Smaling and Fresco 1993) and erosion/sedimentation models (Schoorl and Veldkamp 2000). Most often these models need high-resolution data for land use to appropriately simulate the processes involved.

Land-Use Change Models

The rising awareness of the need for spatially explicit land-use models within the Land-Use and Land-Cover Change research community (LUCC; Lambin and others 2000a, Turner and others 1995) has led to the development of a wide range of land-use change models. Whereas most models were originally developed for deforestation (reviews by Kaimowitz and Angelsen 1998, Lambin 1997), more recent efforts also address other land-use conversions such as urbanization and agricultural intensification (Brown and others 2000, Engelen and others 1995, Hilferink and Rietveld 1999, Lambin and others 2000b). Spatially explicit approaches are often based on cellular automata that simulate land-use change as a function of land use in the neighborhood and a set of user-specified relations with driving factors (Balzter and others 1998, Candau 2000, Engelen and others 1995, Wu 1998). The specification of the neighborhood functions and transition rules is done either based on the user's expert knowledge, which can be a problematic process due to a lack of quantitative understanding, or on empirical relations between land use and driving factors (e.g., Pijanowski and others 2000, Pontius and others 2000). A probability surface, based on either logistic regression or neural network analysis of historic conversions, is made for future conversions. Projections of change are based on applying a cut-off value to this probability surface. Although appropriate for short-term projections, if the trend in land-use change continues, this methodology is incapable of projecting changes when the demands for different land-use types change, leading to a discontinuation of the trends. Moreover, these models are usually capable of simulating the conversion of one land-use type only (e.g., deforestation) because they do not address competition between land-use types explicitly.

The CLUE Modeling Framework

The Conversion of Land Use and its Effects (CLUE) modeling framework (Veldkamp and Fresco 1996, Verburg and others 1999a) was developed to simulate land-use change using empirically quantified relations between land use and its driving factors in combination with dynamic modeling. In contrast to most empirical models, it is possible to simulate multiple land-use types simultaneously through the dynamic simulation of competition between land-use types.

This model was developed for the national and continental level; applications are available for Central America (Kok and Winograd 2001), Ecuador (de Koning and others 1999), China (Verburg and others 2000), and Java, Indonesia (Verburg and others 1999b). For study areas with such a large extent the spatial resolution of analysis was coarse (pixel size varying between 7 × 7 and 32 × 32 km). This is a consequence of the impossibility to acquire data for land use and all driving factors at finer spatial resolutions. A coarse spatial resolution requires a different data representation than the common representation for data with a fine spatial resolution. In fine-resolution grid-based approaches land use is defined by the most dominant land-use type within the pixel. However, such a data representation would lead to large biases in the land-use distribution, as some class proportions will diminish and others will increase with scale depending on the spatial and probability distributions of the cover types (Moody and Woodcock 1994). In the applications of the CLUE model at the national or continental level we have, therefore, represented land use by designating the relative cover of each land-use type in each pixel; e.g., a pixel can contain 30% cultivated land, 40% grassland, and 30% forest. This data representation is directly related to the information contained in the census data that underlie the applications. For each administrative unit, census data denote the number of hectares devoted to different land-use types.

When studying areas with a relatively small spatial extent, we often base our land-use data on land-use maps or remote sensing images that denote land-use types respectively by homogeneous polygons or classified pixels. When converted to a raster format this results in only one, dominant, land-use type occupying one unit of analysis. The validity of this data representation depends on the patchiness of the landscape and the pixel size chosen. Most sub-national land-use studies use this representation of land use, with pixel sizes varying between a few meters up to about 1 × 1 km. The two different data representations are shown in Figure 1.

Figure 1. Data representation and land-use model used for, respectively, case studies with a national/continental extent and a local/regional extent.

Because of the differences in data representation and other features that are typical for regional applications, the CLUE model can not directly be applied at the regional scale. This paper describes the modified modeling approach for regional applications of the model, now called CLUE-S (the Conversion of Land Use and its Effects at Small regional extent). The next section describes the theories underlying the development of the model, after which it is described how these concepts are incorporated in the simulation model. The functioning of the model is illustrated for two case studies and is followed by a general discussion.

Characteristics of Land-Use Systems

This section lists the main concepts and theories that are prevalent for describing the dynamics of land-use change and that are relevant for the development of land-use change models.

Land-use systems are complex and operate at the interface of multiple social and ecological systems. The similarities between land-use, social, and ecological systems allow us to use concepts that have proven to be useful for studying and simulating ecological systems in our analysis of land-use change (Loucks 1977, Adger 1999, Holling and Sanderson 1996). Among those concepts, connectivity is important. The concept of connectivity acknowledges that locations that are at a certain distance are related to each other (Green 1994). Connectivity can be a direct result of biophysical processes, e.g., sedimentation in the lowlands is a direct result of erosion in the uplands, but more often it is due to the movement of species or humans through the landscape. Land degradation at a certain location will trigger farmers to clear land at a new location. Thus, changes in land use at this new location are related to the land-use conditions in the other location. In other instances more complex relations exist that are rooted in the social and economic organization of the system. The hierarchical structure of social organization causes some lower-level processes to be constrained by higher-level dynamics; e.g., the establishment of a new fruit-tree plantation in an area near to the market might influence prices in such a way that it is no longer profitable for farmers to produce fruits in more distant areas. For studying this situation another concept from ecology, hierarchy theory, is useful (Allen and Starr 1982, O'Neill and others 1986). This theory states that higher-level processes constrain lower-level processes, whereas the higher-level processes might emerge from lower-level dynamics. This makes the analysis of the land-use system at different levels of analysis necessary.

Connectivity implies that we cannot understand land use at a certain location by solely studying the site characteristics of that location. The situation at neighboring or even more distant locations can be as important as the conditions at the location itself.

Land-use and land-cover change are the result of many interacting processes. Each of these processes operates over a range of scales in space and time. These processes are driven by one or more variables that influence the actions of the agents of land-use and cover change involved. These variables are often referred to as underlying driving forces, which underpin the proximate causes of land-use change, such as wood extraction or agricultural expansion (Geist and Lambin 2001). These driving factors include demographic factors (e.g., population pressure), economic factors (e.g., economic growth), technological factors, policy and institutional factors, cultural factors, and biophysical factors (Turner and others 1995, Kaimowitz and Angelsen 1998). These factors influence land-use change in different ways. Some of these factors directly influence the rate and quantity of land-use change, e.g., the amount of forest cleared by new incoming migrants. Other factors determine the location of land-use change, e.g., the suitability of the soils for agricultural land use. Especially the biophysical factors pose constraints to land-use change at certain locations, leading to spatially differentiated pathways of change. It is not possible to classify all factors in groups that either influence the rate or the location of land-use change. In some cases the same driving factor has an influence on the quantity of land-use change as well as on the location of land-use change. Population pressure is often an important driving factor of land-use conversions (Rudel and Roper 1997). At the same time it is the relative population pressure that determines which land-use changes are taking place at a certain location. Intensively cultivated arable lands are commonly situated at a limited distance from the villages, while more extensively managed grasslands are often found at a larger distance from population concentrations, a relation that can be explained by labor intensity, transport costs, and the quality of the products (Von Thünen 1966).

The determination of the driving factors of land-use changes is often problematic and an issue of discussion (Lambin and others 2001). There is no unifying theory that includes all processes relevant to land-use change. Reviews of case studies show that it is not possible to simply relate land-use change to population growth, poverty, and infrastructure. Rather, the interplay of several proximate as well as underlying factors drives land-use change in a synergetic way, with large variations caused by location-specific conditions (Lambin and others 2001, Geist and Lambin 2001). In regional modeling we often need to rely on poor data describing this complexity. Instead of using the underlying driving factors, it is necessary to use proximate variables that can represent the underlying driving factors.
Especially for factors that are important in determining the location of change, it is essential that the factor can be mapped quantitatively, representing its spatial variation. The causality between the underlying driving factors and the (proximate) factors used in modeling (in this paper also referred to as "driving factors") should be certified.

Other system properties that are relevant for land-use systems are stability and resilience, concepts often used to describe ecological systems and, to some extent, social systems (Adger 2000, Holling 1973, Levin and others 1998). Resilience refers to the buffer capacity or the ability of the ecosystem or society to absorb perturbations, or the magnitude of disturbance that can be absorbed before a system changes its structure by changing the variables and processes that control behavior (Holling 1992). Stability and resilience are concepts that can also be used to describe the dynamics of land-use systems, which inherit these characteristics from both ecological and social systems. Due to the stability and resilience of the system, disturbances and external influences will mostly not directly change the landscape structure (Conway 1985). After a natural disaster lands might be abandoned and the population might temporarily migrate. However, people will in most cases return after some time and continue land-use management practices as before, recovering the land-use structure (Kok and others 2002). Stability in the land-use structure is also a result of the social, economic, and institutional structure. Instead of a direct change in the land-use structure upon a fall in prices of a certain product, farmers will wait a few years, depending on the investments made, before they change their cropping system.

These characteristics of land-use systems provide a number of requirements for the modelling of land-use change that have been used in the development of the CLUE-S model, including:

● Models should not analyze land use at a single scale, but rather include multiple, interconnected spatial scales because of the hierarchical organization of land-use systems.
● Special attention should be given to the driving factors of land-use change, distinguishing drivers that determine the quantity of change from drivers of the location of change.
● Sudden changes in driving factors should not directly change the structure of the land-use system, as a consequence of the resilience and stability of the land-use system.
● The model structure should allow spatial interactions between locations and feedbacks from higher levels of organization.

Model Description

Model Structure

The model is sub-divided into two distinct modules, namely a non-spatial demand module and a spatially explicit allocation procedure (Figure 2). The non-spatial module calculates the area change for all land-use types at the aggregate level. Within the second part of the model these demands are translated into land-use changes at different locations within the study region using a raster-based system.

Figure 2. Overview of the modeling procedure.

For the land-use demand module, different alternative model specifications are possible, ranging from simple trend extrapolations to complex economic models. The choice for a specific model is very much dependent on the nature of the most important land-use conversions taking place within the study area and the scenarios that need to be considered. Therefore, the demand calculations will differ between applications and scenarios and need to be decided by the user for the specific situation. The results from the demand module need to specify, on a yearly basis, the area covered by the different land-use types, which is a direct input for the allocation module.
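As an illustration of the simplest end of that spectrum (a sketch, not the CLUE-S implementation), a demand module can linearly extrapolate observed areas per land-use type and hand the allocation module one total per type per year; the census years, areas, and class name here are invented.

```python
import numpy as np

def trend_demand(years_obs, areas_obs, years_sim):
    """Linear trend extrapolation of the area (ha) of one land-use type."""
    slope, intercept = np.polyfit(years_obs, areas_obs, deg=1)
    return {y: slope * y + intercept for y in years_sim}

# Invented observations: cultivated area (ha) in two census years.
demand_crop = trend_demand([1990, 2000], [12000.0, 13500.0],
                           years_sim=range(2001, 2006))
print(demand_crop)  # yearly demands passed to the allocation module
```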
The rest of this paper focuses on the procedure to allocate these demands to land-use conversions at specific locations within the study area.

The allocation is based upon a combination of empirical spatial analysis and dynamic modelling. Figure 3 gives an overview of the procedure. The empirical analysis unravels the relations between the spatial distribution of land use and a series of factors that are drivers and constraints of land use. The results of this empirical analysis are used within the model when simulating the competition between land-use types for a specific location. In addition, a set of decision rules is specified by the user to restrict the conversions that can take place based on the actual land-use pattern. The different components of the procedure are now discussed in more detail.

Figure 3. Schematic representation of the procedure to allocate changes in land use to a raster-based map.

Spatial Analysis

The pattern of land use, as it can be observed from an airplane window or through remotely sensed images, reveals the spatial organization of land use in relation to the underlying biophysical and socio-economic conditions. These observations can be formalized by overlaying this land-use pattern with maps depicting the variability in biophysical and socio-economic conditions. Geographical Information Systems (GIS) are used to process all spatial data and convert these into a regular grid. Apart from land use, data are gathered that represent the assumed driving forces of land use in the study area. The list of assumed driving forces is based on prevalent theories on driving factors of land-use change (Lambin and others 2001, Kaimowitz and Angelsen 1998, Turner and others 1993) and knowledge of the conditions in the study area. Data can originate from remote sensing (e.g., land use), secondary statistics (e.g., population distribution), maps (e.g., soil), and other sources. To allow a straightforward analysis, the data are converted into a grid-based system with a cell size that depends on the resolution of the available data. This often involves the aggregation of one or more layers of thematic data; e.g., it does not make sense to use a 30-m resolution if that is available for land-use data only, while the digital elevation model has a resolution of 500 m. Therefore, all data are aggregated to the same resolution that best represents the quality and resolution of the data.

The relations between land use and its driving factors are thereafter evaluated using stepwise logistic regression. Logistic regression is an often used methodology in land-use change research (Geoghegan and others 2001, Serneels and Lambin 2001). In this study we use logistic regression to indicate the probability of a certain grid cell to be devoted to a land-use type given a set of driving factors, following:

\log\left(\frac{P_i}{1-P_i}\right) = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \cdots + \beta_n X_{n,i}

where P_i is the probability of a grid cell for the occurrence of the considered land-use type and the X's are the driving factors. The stepwise procedure is used to help us select the relevant driving factors from a larger set of factors that are assumed to influence the land-use pattern. Variables that have no significant contribution to the explanation of the land-use pattern are excluded from the final regression equation.

Where in ordinary least squares regression the R² gives a measure of model fit, there is no equivalent for logistic regression. Instead, the goodness of fit can be evaluated with the ROC method (Pontius and Schneider 2000, Swets 1986), which evaluates the predicted probabilities by comparing them with the observed values over the whole domain of predicted probabilities, instead of only evaluating the percentage of correctly classified observations at a fixed cut-off value. This is an appropriate methodology for our application, because we will use a wide range of probabilities within the model calculations.

The influence of spatial autocorrelation on the regression results can be minimized by only performing the regression on a random sample of pixels at a certain minimum distance from one another. Such a selection method is adopted in order to maximize the distance between the selected pixels and so attenuate the problem associated with spatial autocorrelation. For case studies where autocorrelation has an important influence on the land-use structure, it is possible to further exploit it by incorporating an autoregressive term in the regression equation (Overmars and others 2002).

Based upon the regression results, a probability map can be calculated for each land-use type. A new probability map is calculated every year with updated values for the driving factors that are projected to change in time, such as the population distribution or accessibility.
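For concreteness, a minimal sketch of this step (not the authors' code): fitting a logistic regression for one land-use type on sampled grid cells and scoring it with the ROC AUC. The driving-factor data are invented, and scikit-learn stands in for the stepwise procedure described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Invented sample of grid cells: two driving factors (e.g., slope and
# distance to road) and a 0/1 indicator of the considered land-use type.
X = rng.normal(size=(1000, 2))
y = (0.8 * X[:, 0] - 1.2 * X[:, 1] + rng.normal(size=1000)) > 0

model = LogisticRegression().fit(X, y)

# P_i: probability that cell i is devoted to the land-use type.
p = model.predict_proba(X)[:, 1]

# ROC evaluates the probabilities over all cut-off values at once,
# matching the paper's argument against a single fixed cut-off.
print("ROC AUC: %.3f" % roc_auc_score(y, p))
```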
Decision Rules

Land-use type or location-specific decision rules can be specified by the user. Location-specific decision rules include the delineation of protected areas such as nature reserves. If a protected area is specified, no changes are allowed within this area. For each land-use type, decision rules determine the conditions under which the land-use type is allowed to change in the next time step. These decision rules are implemented to give certain land-use types a certain resistance to change, in order to generate the stability in the land-use structure that is typical for many landscapes. Three different situations can be distinguished, and for each land-use type the user should specify which situation is most relevant for that land-use type:

1. For some land-use types it is very unlikely that they are converted into another land-use type after their first conversion; as soon as an agricultural area is urbanized, it is not expected to return to agriculture or to be converted into forest cover. Unless a decrease in area demand for this land-use type occurs, the locations covered by this land use are no longer evaluated for potential land-use changes. If this situation is selected, it also holds that if the demand for this land-use type decreases, there is no possibility for expansion in other areas. In other words, when this setting is applied to forest cover and deforestation needs to be allocated, it is impossible to reforest other areas at the same time.

2. Other land-use types are converted more easily. A swidden agriculture system is most likely to be converted into another land-use type soon after its initial conversion. When this situation is selected for a land-use type, no restrictions to change are considered in the allocation module.

3. There is also a number of land-use types that operate in between these two extremes. Permanent agriculture and plantations require an investment for their establishment. It is therefore not very likely that they will be converted very soon after into another land-use type. However, in the end, when another land-use type becomes more profitable, a conversion is possible. This situation is dealt with by defining the relative elasticity for change (ELAS_u) for the land-use type into any other land-use type. The relative elasticity ranges between 0 (similar to Situation 2) and 1 (similar to Situation 1). The higher the defined elasticity, the more difficult it gets to convert this land-use type. The elasticity should be defined based on the user's knowledge of the situation, but can also be tuned during the calibration of the model.

Competition and Actual Allocation of Change

Allocation of land-use change is made in an iterative procedure given the probability maps, the decision rules in combination with the actual land-use map, and the demand for the different land-use types (Figure 4). The following steps are followed in the calculation:

1. The first step includes the determination of all grid cells that are allowed to change. Grid cells that are either part of a protected area or under a land-use type that is not allowed to change (Situation 1, above) are excluded from further calculation.

2. For each grid cell i the total probability (TPROP_{i,u}) is calculated for each of the land-use types u according to: TPROP_{i,u} = P_{i,u} + ELAS_u + ITER_u, where ITER_u is an iteration variable that is specific to the land use. ELAS_u is the relative elasticity for change specified in the decision rules (Situation 3 described above) and is only given a value if grid cell i is already under land-use type u in the year considered. ELAS_u equals zero if all changes are allowed (Situation 2).

3. A preliminary allocation is made with an equal value of the iteration variable (ITER_u) for all land-use types, by allocating the land-use type with the highest total probability for the considered grid cell. This will cause a number of grid cells to change land use.

4. The total allocated area of each land use is now compared to the demand. For land-use types where the allocated area is smaller than the demanded area, the value of the iteration variable is increased. For land-use types for which too much is allocated, the value is decreased.

5. Steps 2 to 4 are repeated as long as the demands are not correctly allocated. When allocation equals demand, the final map is saved and the calculations can continue for the next yearly time step.

Figure 4. Representation of the iterative procedure for land-use change allocation.

Figure 5 shows the development of the iteration parameter ITER_u for different land-use types during a simulation.

Figure 5. Change in the iteration parameter (ITER_u) during the simulation within one time step. The different lines represent the iteration parameter for different land-use types. The parameter is changed for all land-use types synchronously until the allocated land use equals the demand.
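A compact sketch of steps 2-5 (an illustration under simplifying assumptions, not the released CLUE-S code): cells take the land-use type with the highest total probability, and per-type iteration variables are nudged until allocation matches demand. The adjustment step size, tolerance, and toy problem are invented choices.

```python
import numpy as np

def allocate(prob, elas, current, demand, cell_area=1.0,
             step=0.01, tol=1.0, max_iter=5000):
    """prob: (ncells, ntypes) suitabilities P_{i,u}; elas: (ntypes,) ELAS_u;
    current: (ncells,) current type index; demand: (ntypes,) target areas."""
    ntypes = prob.shape[1]
    iter_u = np.zeros(ntypes)                    # ITER_u, one per type
    for _ in range(max_iter):
        tprop = prob + iter_u                    # + ITER_u (step 2)
        tprop[np.arange(len(current)), current] += elas[current]  # + ELAS_u
        alloc = tprop.argmax(axis=1)             # step 3: highest TPROP wins
        areas = np.bincount(alloc, minlength=ntypes) * cell_area
        if np.abs(areas - demand).max() <= tol:  # step 5: demands met
            return alloc
        iter_u += step * np.sign(demand - areas) # step 4: adjust ITER_u
    raise RuntimeError("allocation did not converge")

# Invented toy problem: 100 cells, 2 types, half the area demanded each.
rng = np.random.default_rng(1)
prob = rng.random((100, 2))
alloc = allocate(prob, elas=np.array([0.3, 0.0]),
                 current=rng.integers(0, 2, 100),
                 demand=np.array([50.0, 50.0]))
print(np.bincount(alloc))
```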
Multi-Scale Characteristics

One of the requirements for land-use change models is multi-scale characteristics. The above-described model structure incorporates different types of scale interactions. Within the iterative procedure there is a continuous interaction between macro-scale demands and local land-use suitability as determined by the regression equations. When the demand changes, the iterative procedure will cause the land-use types for which demand increased to have a higher competitive capacity (higher value for ITER_u) to ensure enough allocation of this land-use type. Instead of only being determined by the local conditions, captured by the logistic regressions, it is also the regional demand that affects the actually allocated changes. This allows the model to "overrule" the local suitability; it is not always the land-use type with the highest probability according to the logistic regression equation (P_{i,u}) that the grid cell is allocated to.

Apart from these two distinct levels of analysis, there are also driving forces that operate over a certain distance instead of being locally important. Applying a neighborhood function that is able to represent the regional influence of the data incorporates this type of variable. Population pressure is an example of such a variable: often the influence of population acts over a certain distance. Therefore, it is not the exact location of people's houses that determines the land-use pattern. The average population density over a larger area is often a more appropriate variable. Such a population density surface can be created by a neighborhood function using detailed spatial data. The data generated this way can be included in the spatial analysis as another independent factor. In the application of the model in the Philippines, described hereafter, we applied a 5 × 5 focal filter to the population map to generate a map representing the general population pressure. Instead of using these variables, generated by neighborhood analysis, it is also possible to use the more advanced technique of multi-level statistics (Goldstein 1995), which enables a model to include higher-level variables in a straightforward manner within the regression equation (Polsky and Easterling 2001).

Application of the Model

In this paper, two examples of applications of the model are provided to illustrate its function.

Table 1. Land-use classes and driving factors evaluated for Sibuyan Island

Land-use classes: forest; grassland; coconut plantation; rice fields; others (incl. mangrove and settlements).
Driving factors (location): altitude (m); slope; aspect; distance to town; distance to stream; distance to road; distance to coast; distance to port; erosion vulnerability; geology; population density (neighborhood 5 × 5).

Figure 6. Location of the case-study areas.
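A minimal sketch of such a neighborhood function (an assumption-level illustration, not the paper's GIS workflow): a 5 × 5 moving-average filter over a gridded population map, as used for the population-pressure factor above. The population grid here is invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Invented gridded population counts per cell.
rng = np.random.default_rng(2)
population = rng.poisson(lam=20.0, size=(200, 200)).astype(float)

# 5 x 5 focal (moving-average) filter: each cell gets the mean
# population of its 5 x 5 neighborhood, smoothing point locations
# into a regional "population pressure" surface.
pressure = uniform_filter(population, size=5, mode="nearest")

print(population[:3, :3], pressure[:3, :3], sep="\n")
```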
Turbulent Fluctuating Velocity

Turbulence, often described as the "chaos" of fluids, is a common and complex phenomenon encountered in many natural and engineering applications. It is characterized by random fluctuations in various fluid properties, including velocity, pressure, and temperature. Among these fluctuations, the turbulent fluctuating velocity plays a pivotal role in determining the overall behavior of turbulent flows.

1. Definition and Characteristics.

Turbulent fluctuating velocity refers to the rapid and irregular variations in the velocity of fluid particles within a turbulent flow, i.e., the deviation of the instantaneous velocity from its time-averaged mean. These variations are caused by the interaction of eddies, vortices, and other small-scale structures within the flow. These structures constantly form, merge, and break down, leading to the observed fluctuations.

The magnitude of these fluctuations is often a significant fraction of the mean velocity of the flow. The fluctuations are also weakly correlated in space: the velocity at one point in the flow does not depend on the velocity at another point unless the two points are separated by a distance comparable to the size of the turbulent eddies.

2. Importance of Turbulent Fluctuating Velocity.

Turbulent fluctuating velocity is crucial in many fluid dynamics applications. It significantly impacts heat transfer, mass transfer, and the mixing of fluids. For example, in heat exchangers, the turbulent fluctuating velocity enhances the rate of heat transfer between two fluids by increasing the effective surface area for heat exchange.

In addition, turbulent fluctuating velocity plays a key role in determining the overall resistance or drag experienced by objects placed within a turbulent flow. The fluctuating velocities cause pressure fluctuations on the object's surface, leading to additional drag forces.

3. Measurement and Analysis.

Measuring turbulent fluctuating velocity is a challenging task due to its random and transient nature. However, several techniques have been developed to capture these fluctuations, including hot-wire anemometry, laser Doppler anemometry, and particle image velocimetry.

These measurements provide valuable insights into the characteristics of turbulent flows, such as the statistics of velocity fluctuations, their spatial and temporal correlations, and the energy spectrum of turbulent eddies.

4. Modeling and Simulation.

Modeling and simulating turbulent fluctuating velocity require sophisticated numerical techniques and computational resources. Turbulence models, such as the Reynolds-Averaged Navier-Stokes (RANS) models and Large Eddy Simulation (LES), are commonly used to predict the behavior of turbulent flows.

These models aim to capture the effects of turbulent fluctuating velocity by introducing additional terms or equations into the governing fluid dynamics equations. While RANS models focus on the statistical properties of turbulence, LES aims to resolve the largest eddies directly and model the smaller ones.

5. Conclusion.

Turbulent fluctuating velocity is a crucial aspect of turbulent flows, affecting a wide range of fluid dynamics phenomena.
Understanding its characteristics and behavior is essential for predicting and controlling turbulent flows in various applications, including energy conversion, transportation, and environmental engineering.

With ongoing research and the continuous development of new measurement techniques and numerical models, our understanding of turbulent fluctuating velocity and its impact on turbulent flows will continue to deepen.
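To make the statistics mentioned in the measurement section concrete, here is a small sketch (with purely illustrative, synthetic data): Reynolds decomposition of a sampled velocity signal u(t) into a mean U and a fluctuation u' = u − U, with the RMS fluctuation and turbulence intensity computed from the record.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hot-wire-like velocity record: 10 m/s mean plus fluctuations.
u = 10.0 + rng.normal(scale=0.8, size=50_000)

U = u.mean()                          # time-averaged mean velocity
u_prime = u - U                       # fluctuating velocity u' = u - U
u_rms = np.sqrt(np.mean(u_prime**2))  # RMS of the fluctuations
intensity = u_rms / U                 # turbulence intensity

print(f"U = {U:.2f} m/s, u'_rms = {u_rms:.2f} m/s, Ti = {intensity:.1%}")
```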
Bulletin of the Chinese Ceramic Society (硅酸盐通报), Vol. 43, No. 3, March 2024

Thermal Performance Test and Numerical Simulation of Phase Change Thermostatic Wall Board

ZHANG Luman, HOU Feng
(School of Architecture and Construction, Zhengzhou University of Industrial Technology, Zhengzhou 451100, China)

Abstract: In order to address the spatial and temporal contradiction in wall energy supply and further enhance building comfort, phase change microcapsules (Micro-PCM) were incorporated into cementitious materials to develop a phase change mortar, leveraging the advantages of both phase change materials and cement. A layer of phase change mortar was applied onto the surface of a wallboard, which was subjected to simulated solar radiation using incandescent lamp heating. The thermal performance of the phase change thermostatic wall board under solar radiation was experimentally investigated, and numerical simulations were conducted using COMSOL software. The results demonstrate that the heat storage capacity of the phase change thermostatic wall board increases with the Micro-PCM content. When the Micro-PCM content reaches 40% (volume fraction), compared to an ordinary wallboard, the peak temperature is reduced by 5.166 °C, the peak temperature time is delayed by 145 min, the peak temperature amplitude is decreased by 4.509 °C, and the peak heat transfer is reduced by 22.202 W/m². Furthermore, when the phase change mortar is placed on the inner side of an aerated concrete block wall, the peak temperature amplitude decreases by 2.38 °C and the maximum instantaneous heat transfer by 1.61 W/m². The developed phase change mortar exhibits excellent heat storage performance along with sufficient mechanical strength for application on envelope surfaces to regulate temperature effectively.

Key words: Micro-PCM; cement mortar; mechanical property; thermal performance; temperature control performance

Received 2023-07-26; revised 2023-11-30. Funded by the Henan Provincial Science and Technology Research Projects (232102230030, 222102320201) and the Key Scientific Research Project of the Henan Provincial Department of Education (23A130003). First author: ZHANG Luman (1987-), lecturer, research on new building materials (E-mail: 864559124@). Corresponding author: HOU Feng, Ph.D., lecturer (E-mail: FengHou_88@).

0 Introduction

With rapid socio-economic development, energy consumption keeps rising across all industries; China's annual energy consumption now ranks second in the world [1], and reducing energy consumption and carbon emissions has become a sustainable-development goal worldwide. The three largest energy-consuming sectors in China are construction, industry, and transportation. The construction sector accounts for about 33% of total social energy consumption and more than 25% of national carbon emissions, and the operation stage alone accounts for over 70% of the sector's energy use [1-2]. Heating, ventilation, air-conditioning and cooling (HVAC) systems are a major component of operational energy use, and how intensively such equipment runs is closely tied to the materials, design, and construction of the building envelope; improving the thermal insulation of the envelope can therefore effectively reduce HVAC energy consumption. However, traditional building materials (concrete, brick, sand, etc.) and conventional insulation materials store heat only as sensible heat, so their heat-storage capacity and energy-saving effect are poor [3]. Research and development of new energy-saving building materials is thus of theoretical and practical significance for achieving China's "dual carbon" goals [4-5].

Using phase change energy storage to reduce building energy consumption is one of the hotspots of building energy-conservation research. Phase change materials (PCMs) have two outstanding advantages: a large heat-storage capacity per unit volume, and nearly isothermal heat absorption and release, which allows energy to be stored and energy efficiency to be improved [6]. Using a PCM as the storage medium in the building envelope markedly increases the envelope's thermal inertia, damping the amplitude of thermal cycles and preventing excessive temperature swings inside the building [7]. Figure 1 illustrates the principle by which a phase change thermostatic wall board regulates indoor temperature: as the outdoor temperature varies, the PCM alternately melts (absorbing heat) and solidifies (releasing heat), enabling the board to keep the indoor temperature within the human comfort range [8]. PCMs therefore have good application prospects in building energy conservation.

Figure 1. Schematic diagram of the principle of the phase change thermostatic wall board adjusting indoor temperature.

Phase change energy-storage building materials have been studied widely. Early researchers added PCMs directly to building materials, but solid-liquid PCMs raised three problems that cannot be ignored during phase change: 1) leakage; 2) interaction between the PCM and the building-material matrix; 3) reduced heat-transfer efficiency. To overcome these problems, shape-stabilized PCMs were developed, including shape-stabilized phase change aggregates, phase change macro-capsules, and micro-encapsulated phase change materials (Micro-PCM). Micro-PCM is a composite in which a stable polymer film (shell) encapsulates a solid-liquid PCM (core). Researchers have used Micro-PCM to develop several kinds of heat-storage building materials. For example, Bassim et al. [1] added Micro-PCM to cement mortar and replaced the sand with ceramic fine aggregate (CFA); with 50% Micro-PCM (mass fraction) and 100% replacement of sand by ceramic aggregate, the composite's temperature was 9.5 °C lower than that of ordinary cement mortar. Ren et al. [9] incorporated Micro-PCM into ultra-high performance concrete (UHPC) to produce a new structure-function integrated concrete (MPCM-UHPC) with excellent heat-storage performance; thermal tests showed that the surface temperature of UHPC with 10% Micro-PCM was 3.9 °C lower than that of the reference. Park et al. [10] built two test rooms of the same size (2400 mm × 2700 mm × 2300 mm) from ordinary wallboard and Micro-PCM wallboard; heat-transfer tests showed that the indoor temperature of the PCM room was 1-2 °C lower and its heating energy consumption 27.7% lower. In summary, introducing Micro-PCM into building materials stores heat, insulates, raises the thermal inertia of the wall, and damps indoor temperature fluctuations.

However, the introduced particles change the microstructure of the building material and thus affect its mechanical properties. Djamai et al. [11] tested cement mortars with 5%, 10%, and 15% Micro-PCM and reported that 20% Micro-PCM reduced the composite strength by 70.5%. Yu et al. [12] combined Micro-PCM with cement mortar to develop a phase change mortar with thermal energy storage; the 28 d compressive and flexural strengths with 20% Micro-PCM were 36.5 and 6.2 MPa, slightly lower than those of ordinary mortar. Rahul et al. [3] replaced the fine aggregate of a high-flowability cement mortar with Micro-PCM and found that 5% and 10% Micro-PCM reduced the compressive strength by 15% and 54%, respectively. To avoid this strength loss, mixing Micro-PCM into a mortar that is applied as a coating on the building envelope both avoids strength loss in the structural material and improves the thermal performance of the envelope. In addition, most studies of Micro-PCM in buildings have assumed the capsules to be uniformly distributed or regularly arranged in the matrix, rather than randomly arranged. In this paper, Micro-PCM is randomly distributed in cement mortar to prepare a phase change mortar, and its thermal behavior is studied. A heat-transfer model with randomly distributed Micro-PCM is built using a random-distribution program in MATLAB together with the COMSOL finite element software, and the heat-storage performance of the phase change thermostatic wall board is simulated. By simulating the inner-surface temperature of the wall over time and comparing ordinary and phase change wallboards, relations are obtained between the Micro-PCM content and the inner/outer surface temperature amplitude, heat flux, and temperature delay of the wall.

1 Experiment

1.1 Raw materials

The Micro-PCM, produced by Anhui Meikedi Intelligent Microcapsule Technology Co., Ltd., was used as an additive to cement mortar to obtain a micro-encapsulated, passively heat-regulating phase change mortar for coating wallboards. Microencapsulation wraps the PCM in a film-forming material to create a tiny core-shell structure; the core is paraffin, which stores latent heat and holds an almost constant temperature during phase change. The PCM accounts for about 85%-90% of the mass of each capsule. The capsule wall is a stable, inert polymer, and the particle size is generally 5-1000 μm. The phase change temperature is 26.44 °C and the latent heat 175.39 J/g, as shown in the DSC curves of Figure 2. The morphology of the Micro-PCM is shown in Figure 3; the surface shows many wrinkles, caused by the volume change accompanying the phase change of the core material [13-14].

Figure 2. DSC curves of Micro-PCM.
Figure 3. SEM images of Micro-PCM.

1.2 Preparation of the phase change mortar and heat-storage test

So as not to impair the mechanical performance of the wallboard, the phase change mortar was applied to its outer side as a plaster. The raw materials were P·O 32.5 ordinary Portland cement, ISO standard sand, tap water, and Micro-PCM. The mix proportion followed the Specification for Mix Proportion Design of Masonry Mortar (JGJ/T 98-2010) and previous experience, as given in Table 1; the mixes were prepared at a constant water-cement ratio and a constant fine aggregate-cement ratio (W/C = 0.6 and FA/C = 6.95), with Micro-PCM replacing 7.5% of the standard sand. To prevent the capsules from being crushed, the Micro-PCM was added in the last step. Cement and water were first placed in the mixing drum and the mixer started; the mixing program was: low speed 30 s → add sand 30 s → high speed 30 s → rest 90 s → add Micro-PCM and superplasticizer, high speed 60 s. The dosage of the high-range water reducer (0%-1.1% of the cement mass) was adjusted so that the mortar had the desired flowability (consistency value 70 mm) and could easily be applied to the wallboard (300 mm × 300 mm × 100 mm); the mortar layer was 20 mm thick. The preparation process is shown in Figure 4. The coated boards were then cured for 7 d at (20 ± 2) °C and above 95% relative humidity before the heat-storage tests.

Table 1. Mix ratio of phase change mortar

Micro-PCM volume fraction/% | Cement/(kg·m⁻³) | Water/(kg·m⁻³) | Standard sand/(kg·m⁻³) | Micro-PCM/(kg·m⁻³) | Superplasticizer (SP)/% | Consistency/mm
0 | 230.0 | 138.0 | 1600 | 0 | 0~1.1 | 70
17.8 | 212.9 | 127.7 | 1480 | 120 | 0~1.1 | 70

Figure 4. Preparation process of phase change mortar.

Numerical calculations usually use the volume fraction; the mass fraction is converted to the volume fraction by

f_{\text{Micro-PCM}} = \frac{w_{\text{Micro-PCM}}}{\rho_{\text{Micro-PCM}}/\rho_{\text{cement}} + w_{\text{Micro-PCM}}\,(1 - \rho_{\text{Micro-PCM}}/\rho_{\text{cement}})}    (1)

where f_Micro-PCM and w_Micro-PCM are the volume and mass fractions of the Micro-PCM, and ρ_Micro-PCM and ρ_cement are the densities of the Micro-PCM and the cement mortar, with ρ_Micro-PCM = 694 kg/m³.

1.3 Heat-storage performance test

To test the temperature-control performance of the phase change thermostatic wall board, a test chamber of 1600 mm × 380 mm × 380 mm was built, with a 100 W incandescent lamp fixed at the top as a radiant heat source. To reduce heat exchange between the chamber and the surroundings, two layers of 40 mm polyurethane foam insulation board were glued to the inner walls and covered with grid-reinforced tin foil. Data were collected with a Captee Enterprise HFM-8 heat-flux acquisition instrument and HS-30 heat-flux sensors (sensing face 30 mm × 30 mm, built-in T-type thermocouple). To monitor the temperatures and heat fluxes of the upper and lower surfaces of the board accurately, one HS-30 sensor was placed on the upper surface and two on the lower surface, the average of the two lower measuring points being taken as the heat-flux and temperature value. The sensor layout is shown in Figure 5.

Figure 5. Schematic diagram of the HS-30 heat-flux sensor measuring points (unit: mm).

Before the test, the specimens were kept in a freezer at 8 °C for 24 h so that the initial temperature of the wall board stabilized at 8 °C. At the start of the test the HS-30 sensors were installed at the measuring points; the measured board temperature was then 10 °C, which was taken as the initial temperature for validating the numerical model. The HFM-8 was set to record temperature and heat flux every 1 min; the Launch HFM8-lab software on the computer was started, its communication with the HFM-8 established, and the instrument set to store the data synchronously. The incandescent lamp was then switched on to supply heat continuously by radiation while acquisition ran. Both phase change thermostatic wall boards were tested for 624 min.
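As a worked illustration of equation (1) (a sketch; the helper function is ours), converting a Micro-PCM mass fraction to the volume fraction used in the simulations, with the densities quoted beneath equation (1) and in Table 2. Taking w as the fraction of sand replaced (7.5%) gives a volume fraction close to the 17.8% row of Table 1.

```python
def mass_to_volume_fraction(w_mpcm, rho_mpcm=694.0, rho_cement=1800.0):
    """Equation (1): volume fraction f from mass fraction w.
    rho_mpcm: Micro-PCM density (kg/m^3, as quoted under eq. (1));
    rho_cement: cement matrix density (kg/m^3, Table 2)."""
    r = rho_mpcm / rho_cement
    return w_mpcm / (r + w_mpcm * (1.0 - r))

# Lighter capsules occupy more volume than their mass fraction suggests.
print(f"f = {mass_to_volume_fraction(0.075):.3f}")   # ~0.174
```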
时刻的温度云图如图9所示,在相同室外温度边界条件下,普通墙板的热量传递过程比相变调温墙板快,相变调温墙板均将高温区阻挡在墙体外部区域,表明相变材料提升了墙体的热惰性㊂根据相变调温墙板相变过程的分阶段传热式(9)~(11)可得到在室外瞬态气象条件下相变调温墙板内侧温度和液体分数瞬态分布曲线,如图10所示㊂从图10中液体分数曲线及相变调温墙板内侧温度曲线的变化情况可看出,当普通墙体外表面涂敷一定厚度的相变调温砂浆后,其内表面温度较普通水泥墙板均有所下降,具体分析如下:1)0~10h,表示Micro-PCM 外表面温度低于相变温度,相变材料为固态,相变调温墙板的传热过程同普通水泥墙板,但温度较普通水泥墙板低;2)10~19h,表示Micro-PCM 外表面温度高于相变温度,相变材料开始由固态转变为液态,同时以潜热的形式不断吸收热量,且固-液界面处温度维持在相变温度不变;3)19~21h,表示固-液相变结束后,相变材料处于液态,在外界温度作用下继续传热,传热过程同普通水泥墙板,但温度较普通水泥墙板低;第3期张路曼等:相变调温墙板热工性能试验和数值模拟研究873㊀图9㊀相变调温墙板和普通水泥墙板的计算温度云图(Micro-PCM,40%,体积分数)Fig.9㊀Cloud diagrams of the calculated temperature of phase change thermostatic wall board and ordinary wall board (Micro-PCM,40%,volumefraction)图10㊀相变调温墙板内侧温度和Micro-PCM 的液体分数变化(Micro-PCM,40%,体积分数)Fig.10㊀Temperature change inside the Micro-PCM-W and liquid fraction change inside phase change thermostatic wall board (Micro-PCM,40%,volume fraction)4)21~31h,表示Micro-PCM 外表面温度开始低于相变温度,相变材料开始由液态转变为固态,同时以潜热的形式不断释放热量,且固-液界面处温度维持在相变温度不变;5)31~38h,表示Micro-PCM 外表面温度高于相变温度,部分固相相变材料开始融化,同时以潜热的形式不断吸收热量,且固-液界面处温度维持在相变温度不变㊂3.3㊀Micro-PCM 掺量对相变调温墙板动态隔热性能的影响㊀㊀墙板内壁面温度决定了室内热环境,内壁温度随室外空气环境温度的变化关系如图11所示㊂由图11可知,室外环境温度以24h 为一个周期,72h 内出现3个周期波动,九类墙板内壁面温度也出现了3个波动周期,由于墙板具有热惰性,九类墙板内壁面温度波动均延迟外界环境温度的变化,但相变调温墙板延迟性较普通墙体高,且Micro-PCM 掺量越高延迟性越高㊂例如:在第一个温度波动周期内,普通水泥墙板约在16.96h 达到最高温度32.975ħ,而Micro-PCM 掺量为40%(体积分数)的相变水泥墙板在19.39h 达到最高温度27.809ħ,滞后时间为145min,温度降低量为5.166ħ,相变调温墙板的温度峰值滞后时间和温度降低量均优于普通水泥墙板,且Micro-PCM 含量越高,墙体的温度峰值滞后时间和温度降低量越大㊂874㊀水泥混凝土硅酸盐通报㊀㊀㊀㊀㊀㊀第43卷图11㊀不同Micro-PCM 含量下的相变调温墙板内侧温度变化Fig.11㊀The inner wall temperature changes of Micro-PCM control wallboard under different Micro-PCM content第二周期内,普通水泥墙板内壁面温度波动幅度为12.327ħ,Micro-PCM 的含量为5%~40%时,相变调温墙板内壁面温度波动幅度为7.818~11.395ħ,较普通墙体降低量为0.248~1.910ħ,即Micro-PCM 含量越高,相变调温墙板的室内侧温度波动越小,即相变调温墙板会提供稳定的室内热环境㊂相变调温墙板中第一层材料(即相变砂浆层)内侧温度如图12所示,在三个周期的升温过程中,第一层材料内测温度达到约26ħ时,相变调温墙板的温升速率开始低于普通水泥墙板,且Micro-PCM 含量越高,升温速率越低㊂这是由于室外空气将热量向墙板内侧传递时,墙板升温至约26ħ时,相变材料开始发生相变,热量以潜热的形式储存,减少了向墙板内侧区域传递的热量;在第一个周期内,体积含量为40%的相变水泥墙板内侧温度较普通墙体降低了5.565ħ,表明Micro-PCM 的潜热储能作用确实降低了墙板内部区域的温升㊂在三个周期内的降温过程中,普通水泥墙板的起始温度高于相变调温墙板,但其降温速率也高,这是由于Micro-PCM 以潜热的形式储存的热量在降温时以内热源的形式释放到外部环境中㊂不同Micro-PCM 含量下的相变调温墙板内侧表面的热流密度变化如图13所示㊂在3个温度波动周期内,相变调温墙板由于相变材料在相变过程中以潜热的形式存储了一部分能量,降低了向室内的传热量㊂在第一个温度波动周期内,Micro-PCM 体积分数为5%~40%的相变调温墙板最高瞬时传热量比普通水泥墙板分别降低了8.067㊁12.006㊁13.726㊁16.913㊁19.270㊁19.793㊁20.901㊁22.202W /m 2,表明相变调温墙板向室内传递的热量更少㊂图12㊀不同Micro-PCM 含量下相变砂浆层内侧的温度变化Fig.12㊀Temperature changes in the inner side of phase change mortar layer under different Micro-PCM content㊀图13㊀不同Micro-PCM 含量下的相变调温墙板内侧表面的热流密度变化Fig.13㊀Heat flux changes on the inside of phase change thermostatic wall board with different Micro-PCM content 综上可知,从墙体内部温度分布㊁第一层材料内侧温度和向室内传递的热负荷来看,相变调温墙板控温性能优于普通水泥墙板㊂3.4㊀相变砂浆位置对加气混凝土墙体控温性能的影响为了研究墙体围护结构最内层表面的温度受到室外空气的对流换热和太阳辐射的综合影响,参照工程结构中加气混凝土砌块墙体的组合形式,构建符合工程实际的相变调温墙体传热模型,具体如图14所示㊂另外,本文选取郑州的气象环境作为模拟参数,根据2022年7月份的气象数据报表中的统计数据为基础,以西向墙体为例,得出郑州7月份一周的室外空气综合温度分布曲线,作为相变调温墙板室外侧边界条件施加于模型上进行传热分析㊂第3期张路曼等:相变调温墙板热工性能试验和数值模拟研究875㊀图14㊀参照实际工况的相变墙体传热模型Fig.14㊀Heat transfer model of phase-change wall according to actual working conditions 为对比相变砂浆置于墙体内侧及外侧的隔热效果,取20mm 厚的相变砂浆层(40%的Micro-PCM)分别置于加气混凝土砌块墙体内侧(2型)和外侧(1型),计算得到典型墙体内表面温度和热流密度变化曲线如图15所示㊂由图15可知,郑州夏季7月份相变砂浆置于加气混凝土砌块墙内侧时隔热效果最好,相变砂浆置于墙体外侧时隔热效果较差㊂在第6天,1型和2型加气混凝土砌块墙体内侧温度降低量分别为0.21和2.38ħ,最高瞬时传热量分别降低了0.12和1.61W /m 2,由此可知,相变砂浆置于墙体内侧时,墙体的热工性能较理想㊂热量在由外墙传递至相变区的过程中能量会发生衰减损耗,导致温度到达相变区时会有所降低,温差的减小使得Micro-PCM 融化吸热的速率降低,室内温度波动和传入室内的热流密度减小,从而达到建筑节能的效果㊂图15㊀加气混凝土砌块相变调温墙板内侧表面温度和热流密度变化Fig.15㊀Temperature and heat flux changes of the inner surface of aerated concrete block phase change thermostatic wall board 3.5㊀Micro-PCM 
掺量对相变砂浆力学性能的影响利用抗压抗折力学测试仪对相变砂浆试块进行了力学强度测试,每种配方的相变砂浆试块测试3次抗折强度㊁3次抗压强度,取平均值作为最终的结果㊂图16的(a)和(b)分别展示了不同体积分数的Micro-PCM 对相变砂浆抗压强度和抗折强度的影响,相变砂浆的抗压强度和抗折强度均随着Micro-PCM 掺量的增㊀㊀㊀图16㊀不同体积分数的Micro-PCM 对相变砂浆抗压强度和抗折强度的影响Fig.16㊀Effect of different volume fractions of Micro-PCM on compressive strength and flexural strength of phase change mortar。
Fog Simulation and Generation Algorithm for Natural Outdoor Scenes (室外自然场景下的雾天模拟生成算法)

Chapter 1. Introduction
- Background and Motivation
- Problem Statement
- Objectives
- Scope and Limitations
- Significance of the Study

Chapter 2. Literature Review
- Overview of Fog Simulation Techniques
- Physical Models
- Statistical Models
- Evaluation Metrics

Chapter 3. Proposed Fog Simulation Algorithm
- Physical Model-Based Fog Density Estimation
- Statistical Model-Based Image Synthesis
- Machine Learning-Based Refinement
- Evaluation

Chapter 4. Experimental Results and Analysis
- Experimental Setup
- Evaluation Metrics
- Results and Analysis
- Limitations and Future Directions

Chapter 5. Applications and Future Work
- Applications
- Future Work

References

Chapter 1. Introduction

Background and Motivation

The phenomenon of fog is commonplace in many natural outdoor scenes, but it can significantly affect visibility and safety in transportation, navigation, and surveillance systems. Fog forms when the air temperature falls to the dew point, causing water vapor to condense into small droplets suspended in the atmosphere. These droplets scatter light and absorb specific wavelengths, which decreases the contrast and color saturation of the scene. Capturing foggy scenes and simulating them in computer graphics and vision systems has become an active research area in recent years due to the increasing demand for realistic and robust fog simulation algorithms.

Problem Statement

Existing fog models and generation algorithms have several limitations, such as being computationally expensive, requiring large datasets, and not accurately representing the complex dynamics of atmospheric conditions. Therefore, there is a need for a comprehensive and efficient fog simulation algorithm that performs well in different outdoor scenarios and can generate realistic foggy images.

Objectives

The primary objective of this study is to develop a novel algorithm to simulate fog in natural outdoor scenes. The algorithm should provide realistic and visually pleasing results, be computationally efficient, and adapt to different weather and lighting conditions. The secondary objectives are to compare the proposed algorithm with existing techniques and to evaluate its performance and robustness in various simulated scenarios.

Scope and Limitations

This study focuses on simulating fog in natural outdoor scenes, including forests, mountains, and cities, but not in indoor or laboratory environments. The proposed algorithm is designed to work with RGB images and does not consider other modalities, such as infrared or stereo data. The study aims to provide a proof of concept and does not optimize the algorithm for real-time applications.

Significance of the Study

The proposed fog simulation algorithm can have practical applications in several domains, such as autonomous driving, visual effects, and virtual reality. By synthesizing realistic foggy images, the algorithm can improve the performance and reliability of computer vision and machine learning systems operating in outdoor environments. Furthermore, the proposed algorithm can aid in understanding and studying the complex atmospheric phenomenon of fog and its impact on visual perception.

In conclusion, this chapter introduced the problem of fog simulation in natural outdoor scenes and the motivation for developing a novel fog simulation algorithm. The objectives, scope, and limitations of the study were defined, and the significance of the proposed algorithm was highlighted. The next chapter reviews the existing literature on fog simulation techniques in more detail.
The next chapter will review theexisting literature on fog simulation techniques in moredetail.Chapter 2. Literature ReviewIntroductionIn recent years, fog simulation has received significant attention from the computer graphics, vision, and machine learning research communities. Several techniques have been proposed to simulate fog and haze effects in outdoor scenes, based on various physical and statistical models. This chapter reviews the existing literature on fog simulation techniques and analyzes their strengths and weaknesses.Physical ModelsPhysical models aim to simulate the scattering and absorption of light in the atmosphere, based on the laws of physics and optics. Radiative transfer equations (RTE) are commonly used to describe the light transport in the atmosphere, but they are computationally expensive and require complex boundary conditions. Approximate methods, such as the Monte Carlo method and the discrete ordinates method, have been proposed to solve RTE efficiently. However, these methods still suffer from practical limitations, such as parameterization and calibration.Statistical ModelsStatistical models approximate the appearance of foggy scenes based on empirical observations and statistical analysis. One of the earliest and most widely used statistical models for fog simulation is the Koschmieder model, which assumes uniform fog density and exponential attenuation of light with distance. However, this model is simplistic and does not account for spatial and temporalvariations in fog density and atmospheric conditions.Recently, machine learning techniques, such as deep neural networks, have been employed to learn the mapping between clear and foggy images, bypassing the need for explicit models. These techniques have shown promising results in generating realistic foggy scenes, but they require large amounts of training data and may not generalize well to unseen environments or lighting conditions.Evaluation MetricsEvaluating the quality and realism of fog simulation algorithms is challenging, as there is no objective ground truth for comparing the generated foggy images with real-world data. Therefore, several metrics have been proposed to measure different aspects of fog simulation performance, such as color preservation, contrast enhancement, and visibility improvement. These metrics include the atmospheric scattering model, the color distribution distance, and the visibility index. However, these metrics have their own limitations and may not capture all aspects of fog simulation performance.ConclusionIn conclusion, this chapter reviewed the existing literature on fog simulation techniques, including physical and statistical models and machine learning approaches. The strengths and weaknesses of these techniques were discussed, and evaluation metrics for fog simulation were introduced. The next chapter will present the proposed fog simulation algorithm, which combines physical and statistical models and uses machine learning forrefinement.Chapter 3. Proposed Fog Simulation Algorithm IntroductionIn this chapter, we propose a novel fog simulation algorithm that combines physical and statistical models and uses machine learning for refinement. The algorithm consists of three stages: 1) physical model-based fog density estimation, 2) statistical model-based image synthesis, and 3) machine learning-based refinement. 
Each stage will be described in detail below.

Physical Model-Based Fog Density Estimation
The first stage of the proposed algorithm estimates the fog density in the scene, based on physical models of light scattering and absorption in the atmosphere. We use the radiative transfer equation (RTE) to model the light transport in the atmosphere, and solve it using the discrete ordinates method with predefined boundary conditions. The inputs to this stage are the clear image and the atmospheric parameters, such as the air temperature, pressure, and humidity. The output is the depth-dependent fog density, which is used as input to the next stage.

Statistical Model-Based Image Synthesis
The second stage of the proposed algorithm synthesizes a foggy image based on statistical models of fog appearance and empirical observations. We use a modified version of the Koschmieder model, which takes into account spatial and temporal variations in fog density and atmospheric conditions. The inputs to this stage are the clear image, the fog density estimated in the previous stage, and the atmospheric parameters. The outputs are the synthesized foggy image and a set of statistical parameters that describe its appearance, such as the color distribution and contrast.

Machine Learning-Based Refinement
The third stage of the proposed algorithm refines the synthesized foggy image and improves its visual quality, using machine learning techniques. We use a deep neural network to learn the mapping between clear and foggy images, and use it to refine the synthesized foggy image. The training data for the neural network consists of pairs of clear and foggy images, which are generated using the physical and statistical models described above. The inputs to this stage are the synthesized foggy image and the statistical parameters, and the output is the refined foggy image.

Evaluation
We evaluate the proposed algorithm using several metrics, including the atmospheric scattering model, the color distribution distance, and the visibility index. We compare the results of our algorithm with those of existing fog simulation techniques, including physical models, statistical models, and machine learning approaches. We also conduct a user study to assess the subjective quality of the generated foggy images.

Conclusion
In conclusion, this chapter presented the proposed fog simulation algorithm, which combines physical and statistical models and uses machine learning for refinement. The algorithm consists of three stages, namely physical model-based fog density estimation, statistical model-based image synthesis, and machine learning-based refinement. We also described the evaluation metrics and methods used to evaluate the algorithm's performance. The next chapter will present the experimental results and analysis of the proposed algorithm.
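Before turning to the experiments, here is a minimal sketch of a Koschmieder-style image formation step at the heart of the synthesis stage (a hedged illustration only, not the implementation evaluated in this work; the uniform attenuation coefficient, the constant airlight, and the synthetic depth map are all assumptions, whereas the proposed algorithm uses the depth-dependent fog density from stage one):

```python
import numpy as np

def synthesize_fog(clear, depth, beta=0.05, airlight=0.9):
    """Koschmieder-style synthesis: I = J*t + A*(1 - t), t = exp(-beta*depth).

    clear:  HxWx3 float array in [0, 1] (the fog-free image J)
    depth:  HxW float array of scene depth
    beta:   attenuation coefficient (uniform here; an assumption)
    """
    t = np.exp(-beta * depth)[..., np.newaxis]   # transmission map
    return clear * t + airlight * (1.0 - t)      # attenuated scene + airlight

# Toy usage with a synthetic image and a linear depth ramp.
clear = np.random.default_rng(0).uniform(0.0, 1.0, size=(4, 4, 3))
depth = np.linspace(1.0, 100.0, 16).reshape(4, 4)
foggy = synthesize_fog(clear, depth)
print(foggy.shape, float(foggy.min()), float(foggy.max()))
```

Distant pixels (large depth) converge to the airlight color while near pixels keep their original values, which is exactly the contrast and saturation loss described in Chapter 1.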
Chapter 4. Experimental Results and Analysis

Introduction
In this chapter, we present the experimental results and analysis of the proposed fog simulation algorithm. We evaluate the algorithm using a set of benchmarks and compare it with existing fog simulation techniques, including physical models, statistical models, and machine learning approaches. We also conduct a user study to assess the subjective quality of the generated foggy images. Finally, we discuss the limitations and future directions of the proposed algorithm.

Experimental Setup
We conducted our experiments on a desktop computer with an Intel Core i9-9900K CPU and an NVIDIA RTX 2080 Ti GPU. The algorithm was implemented using Python and TensorFlow. We used a set of clear images from the VOC dataset and a set of atmospheric parameters from the MERRA-2 dataset.

Evaluation Metrics
We used several metrics to evaluate the performance of the proposed algorithm, including the atmospheric scattering model (ASM), the color distribution distance (CDD), and the visibility index (VI). The ASM measures the accuracy of the physical model-based fog density estimation stage. The CDD measures the similarity of the color distributions between the synthesized foggy image and the ground truth. The VI measures the visibility and contrast of the synthesized foggy image.

Results and Analysis
We first evaluated the physical model-based fog density estimation stage using the ASM metric. The results show that our algorithm achieves a higher accuracy than existing physical models, such as the Rayleigh-Debye-Gans model and the Mie scattering model. We then evaluated the statistical model-based image synthesis stage using the CDD and VI metrics. The results show that our algorithm outperforms existing statistical models, such as the Koschmieder model and the Murakami model, in terms of color distribution and visibility. Finally, we evaluated the machine learning-based refinement stage using the CDD, VI, and subjective quality metrics. The results show that the refinement stage yields a significant improvement in visual quality over the unrefined synthesized foggy image, with high subjective ratings from the user study.

Limitations and Future Directions
The proposed algorithm has several limitations and future directions for improvement. Firstly, the algorithm currently only supports outdoor scenes, and further research is needed to extend it to indoor scenes. Secondly, the algorithm relies on predefined atmospheric parameters, and it may not perform well under extreme weather conditions. Thirdly, the algorithm may not generalize well to other datasets and domains. Finally, the computational cost of the algorithm is high, and further optimization is needed for real-time applications.

Conclusion
In conclusion, we presented the experimental results and analysis of the proposed fog simulation algorithm, which combines physical and statistical models and uses machine learning for refinement. The results show that our algorithm outperforms existing fog simulation techniques, including physical models, statistical models, and machine learning approaches, in terms of accuracy, color distribution, visibility, and visual quality. The future directions for improving the algorithm were discussed; they aim to address the limitations of the algorithm and extend its applicability to various domains.

Chapter 5. Applications and Future Work

Introduction
In this chapter, we present the potential applications of the proposed fog simulation algorithm in various fields, including computer graphics, autonomous driving, and remote sensing. We also discuss future work to extend the algorithm's functionality and improve its performance.

Applications

Computer Graphics
The proposed fog simulation algorithm can be used to generate realistic foggy images for computer graphics applications, such as video games, virtual reality, and augmented reality. The generated foggy images can add visual depth and atmosphere to the scene, making the virtual environment more immersive and realistic.
Autonomous Driving
Foggy weather conditions can significantly reduce the visibility of the road, which poses a safety risk for autonomous driving systems. The proposed fog simulation algorithm can be used to generate foggy images for training and testing autonomous driving algorithms, enabling them to handle adverse weather conditions and improving their robustness and safety.

Remote Sensing
Fog can also affect remote sensing applications, such as satellite imagery and aerial photography. The proposed fog simulation algorithm can be used to simulate the effect of fog and to remove fog from images, enhancing the quality and accuracy of remote sensing data.

Future Work
The proposed fog simulation algorithm has several directions for future work to extend its functionality and improve its performance.

Indoor Scenes
Currently, the algorithm only supports outdoor scenes. Future work can extend the algorithm to simulate foggy conditions in indoor scenes, such as a foggy room or warehouse.

Real-Time Performance
The current computational cost of the algorithm is high, which limits its real-time application. Future work can optimize the algorithm to improve its performance and reduce the computational cost for real-time applications.

Extreme Weather Conditions
The algorithm relies on predefined atmospheric parameters, and it may not perform well under extreme weather conditions, such as tornadoes or hurricanes. Future work can investigate the effect of extreme weather conditions on fog simulation and develop more robust algorithms to handle them.

Multi-Scale Simulation
The proposed fog simulation algorithm operates at a fixed scale, and it may not capture the multi-scale nature of fog. Future work can develop multi-scale simulation algorithms that can simulate fog at different scales, from the microscopic scale of water droplets to the macroscopic scale of fog banks.

Conclusion
In conclusion, the proposed fog simulation algorithm has a broad range of potential applications in various fields, such as computer graphics, autonomous driving, and remote sensing. Future work aims to extend the algorithm's functionality and improve its performance, enabling it to handle more complex foggy weather conditions and support real-time applications.
Application of Numerical Simulation in Pump-Treat-Recharge Remediation of Polluted Groundwater

DU Chuan 1,2, CHEN Suyun 1,2, LI Houen 1,2
(1. BGI Engineering Consultants Ltd., Beijing 100038, China; 2. The Environmental Geotechnical Engineering Technology Research Center of Beijing, Beijing 100038, China)

Abstract: Polluted groundwater has a serious impact on human health and the ecological environment, so adopting economical and effective remediation technology is critical. Pump-treat-recharge is a representative technology for remediating polluted groundwater, and numerical simulation of solute transport is an important tool for studying the migration and transformation of groundwater pollutants. Taking petroleum-contaminated groundwater at a site in Beijing as the study object, two schemes were examined: central pumping with peripheral reinjection for the heavily polluted zone, and row-by-row treatment of the entire polluted area. Numerical simulation was used to analyze the spatial and temporal migration of pollutants in the groundwater under the two modes and to determine the pollutant removal effect. The results show that when the heavily polluted zone is treated first, about 23 days are needed to bring pollutants down to the target level, whereas row-by-row treatment of the whole polluted area requires 6-7 treatment cycles (24 days per cycle). Treating the heavily polluted zone first sharply reduces pollutant concentrations within a short period and effectively cuts the high-concentration peak; combined with the row-by-row pump-treat-recharge mode, it brings pollutant concentrations across the whole area to the remediation target more effectively. Using the two modes together is technically feasible and efficient.

Keywords: groundwater pollution remediation; pump-treat-recharge technology; numerical simulation; pollutant transport prediction
CLC number: X523; Document code: A; DOI: 10.16803/j.cnki.issn.1004-6216.2022090004
Received: 2022-09-01; Accepted: 2022-10-18
First author: DU Chuan (b. 1989), male, M.Sc., engineer.
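As a hedged illustration of what the solute-transport modeling referenced in the abstract involves at its simplest, the sketch below integrates a 1-D advection-dispersion equation with an explicit upwind scheme; the velocity, dispersion coefficient, grid, and time step are hypothetical and are not the site model used in the study:

```python
import numpy as np

def advect_disperse(c, v, D, dx, dt, steps):
    """Explicit scheme for dc/dt = -v*dc/dx + D*d2c/dx2 (upwind + central)."""
    assert v * dt / dx <= 1.0 and D * dt / dx**2 <= 0.5, "stability limits"
    for _ in range(steps):
        adv = -v * (c[1:-1] - c[:-2]) / dx                 # upwind advection
        disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2  # central dispersion
        c[1:-1] += dt * (adv + disp)
    return c

c = np.zeros(201)                # 20 m domain, dx = 0.1 m
c[0] = 1.0                       # constant-concentration source at x = 0
c = advect_disperse(c, v=0.5, D=0.05, dx=0.1, dt=0.01, steps=2_000)
print(f"plume front near x = {0.1 * np.argmax(c < 0.05):.1f} m after 20 d")
```

In the same spirit, a pumping or reinjection well enters such a model as a source/sink term, which is how the two treatment schemes above would be compared numerically.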
1 Introduction

1.1 Challenges and motivation
A broad range of scientific and engineering problems involve multiple scales. Traditional approaches have been known to be valid for limited spatial and temporal scales. Multiple scales dominate simulation efforts wherever large disparities in spatial and temporal scales are encountered. Such disparities appear in virtually all areas of modern science and engineering, for example, composite materials, porous media, turbulent transport in high Reynolds number flows, and so on. A complete analysis of these problems is extremely difficult. For example, the difficulty in analyzing groundwater transport is mainly caused by the heterogeneity of subsurface formations spanning over many scales. This heterogeneity is often represented by the multiscale fluctuations in the permeability (hydraulic conductivity) of the media. For composite materials, the dispersed phases (particles or fibers), which may be randomly distributed in the matrix, give rise to fluctuations in the thermal or electrical conductivity or elastic properties; moreover, the conductivity is usually discontinuous across the phase boundaries. In turbulent transport problems, the convective velocity field fluctuates randomly and contains many scales depending on the Reynolds number of the flow. The direct numerical solution of multiple scale problems is difficult even with the advent of supercomputers. The major difficulty of direct solutions is the size of the computation. A tremendous amount of computer memory and CPU time are required, and this can easily exceed the limit of today's computing resources. The situation can be relieved to some degree by parallel computing; however, the size of the discrete problem is not reduced. Whenever one can afford to resolve all the small-scale features of a physical problem, direct solutions provide quantitative information of the physical processes at all scales. On the other hand, from an application perspective, it is often sufficient to predict the macroscopic properties of the multiscale systems. Therefore, it is desirable to develop a method that captures the small-scale effect on the large scales, but does not require resolving all the small-scale features.

Fig. 1.1. Schematic description of Representative Volume Element and macroscopic elements (labels: Representative Volume Element; macroscopic region boundaries).

The methods discussed in this book attempt to capture the multiscale structure of the solution via localized basis functions. These basis functions contain essential multiscale information embedded in the solution and are coupled through a global formulation to provide a faithful approximation of the solution. Typically, we distinguish between two types of multiscale processes in this book. The first type has scale separation. In this case, the small-scale information is captured via local multiscale basis functions computed based only on the information within local regions (coarse-scale grid blocks). The other type of multiscale processes does not have apparent scale separation. For these processes, the information at different scales (e.g., nonlocal information) is used for constructing effective properties, such as multiscale basis functions.
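To ground the cost argument made above, the following hedged sketch solves a 1-D model heterogeneous diffusion problem, -(k(x)p')' = f with p(0) = p(1) = 0, using a dense direct solver; the oscillatory coefficient is synthetic, and the point is simply that resolving a small scale already forces thousands of unknowns, with dense memory growing quadratically:

```python
import numpy as np

def solve_diffusion_1d(k_cells, f, h):
    """Finite-difference solve of -(k p')' = f with zero Dirichlet BCs.

    k_cells has one value per cell (n+1 cells for n interior nodes)."""
    main = (k_cells[:-1] + k_cells[1:]) / h**2
    off = -k_cells[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)  # dense: O(n^2) memory
    return np.linalg.solve(A, f)

n = 2_000                                   # fine interior nodes
h = 1.0 / (n + 1)
x_mid = (np.arange(n + 1) + 0.5) * h
k = 1.0 + 0.9 * np.sin(2 * np.pi * x_mid / 0.01) ** 2   # coefficient scale ~0.01
p = solve_diffusion_1d(k, np.ones(n), h)
print(f"max p = {p.max():.4f}")
```

In two or three dimensions with several decades of scale disparity, the analogous direct solve is exactly the computation the multiscale methods below are designed to avoid.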
Next, we present a more in-depth discussion of scale issues that arise in multiscale simulations. When dealing with multiscale processes, it is often the case that input information about processes or material properties is not available everywhere. For example, if one would like to study fluid flows in a subsurface, then the subsurface properties at the pore scale are not available everywhere in the reservoir. Similarly, material properties of fine-grained composites are not often available everywhere. In this case, one can use a Representative Volume Element (RVE) which contains essential information about the heterogeneities. For example, the pore scale distribution in an extracted rock core can be regarded as representative information about the pore scale distribution over some macroscale region. Assuming that such information is available over the entire domain in macroscopic regions (see Figure 1.1 for illustration), one can perform upscaling (or averaging) and simulate a process over the entire region. Multiscale methods discussed in this book can easily handle such cases with scale separation, and the proposed basis functions can be computed only in the RVE. We would like to note that there are many methods (see next subsection) that can solve macroscopic equations given the information in the RVE. These approaches can be divided into two groups: fine-to-coarse approaches and coarse-to-fine approaches. In fine-to-coarse approaches, the coarse-scale equations are not formulated explicitly and representative fine-scale information is carried throughout the simulations. On the other hand, coarse-to-fine approaches assume the form of the coarse-scale equations and the coarse-scale parameters are computed based on the calculations in the RVE. These approaches share similarities. The methods discussed in this book belong to the class of fine-to-coarse approaches. To illustrate the above discussion in a simple case, we consider a classical example of steady-state heterogeneous diffusion

div(k(x)∇p) = f(x).   (1.1)

Here k(x) is a spatial field varying over multiple scales. It is possible that the full description (details) of k(x) at the finest resolution is not available, and we can only access it in small portions of the domain. These small regions are RVEs (see Figure 1.1), and one can attempt to simulate the macroscopic behavior of the material or subsurface processes based on RVE information. However, the latter assumes that the material has some type of scale separation, because RVE information is then sufficient to determine the macroscopic properties of the material. In many other applications, the fine-scale description of the media is given or can be obtained everywhere based on prior information. This information is usually not precise and contains uncertainties. However, it often contains some important features of the media. For example, in porous media applications, the subsurface properties typically contain some large-scale (nonlocal) features such as connected high-conductivity regions. In the example (1.1), k(x) represents the permeability (or hydraulic conductivity). Modern geostatistical tools allow us to prescribe k(x) at every grid block, which is usually called the fine-scale grid block. Usually, the detailed subsurface model is built based on prior information. This information is a combination of fine-scale information coming from core samples and large-scale information coming from seismic data and macroscopic inversion techniques.
The large-scale features typically provide information about the connectivity of the porous media and can be quite complex, for example, tortuous long channels with small varying width or multiple connectivity structures embedded into each other. In Figure 1.2, we illustrate the multiscale nature of the conductivity in typical subsurface problems. Here, we illustrate that pore scale information is needed for understanding the conductivity of the core sample. However, it is also essential to understand the large-scale features of the media in order to build a comprehensive model of porous media.

Fig. 1.2. Schematic description of various scales in porous media (pore scale, core scale, geological scale).

More complicated situations in geomodeling can occur. In Figure 1.3, geological variation over multiple scales is shown. Here one can observe faults (red lines in Figure 1.3(e)) with complicated geometry, thin but laterally extensive compaction bands that represent low-conductivity regions (blue lines in Figure 1.3(e); see also 1.3(d)), as well as other features at different scales. A blowup of the fault zone is shown in Figure 1.3(c). The fault rock is of low conductivity and the slip band sets consist of fractures that are filled (fully or partially) with cement. Pore-scale views of portions of a slip band set are shown in Figures 1.3(a) and 1.3(b).

Fig. 1.3. Schematic description of hierarchy of heterogeneities in subsurface formations ((a) and (b) are from [20], (c) is from [19], (d) and (e) are courtesy of Kurt Sternlof). (Footnote: We are grateful to L. Durlofsky for providing us the figures and the explanations. We would like to thank the authors for allowing us to use the figures in the book.)

When simulating based only on RVE information as discussed before, the large-scale nonlocal information is disregarded, and this can lead to large errors. Thus, it is crucial to incorporate the multiscale structure of the solution at all scales that are important for simulations. Materials with multiscale properties occur in many other applications. For example, composite material properties, similar to subsurface properties, can vary over many length scales. In Figure 1.4, fiber materials are depicted. Materials such as papers, filtration materials, and other engineered materials can have fibers of various sizes and geometry. As we see from Figure 1.4, the fibers can have complicated geometry and connectivity patterns. As in subsurface processes, the multiscale features of these materials at different scales are needed to perform reliable simulations. Although small-scale features of the media are important, the large-scale connectivity can play a crucial role. When both fine- and coarse-scale information are combined, the resulting media properties have scale disparity and vary over many scales. In these problems, one cannot simply use an RVE because there is no apparent scale separation. Moreover, the solution of (1.1) can be prohibitively expensive or unaffordable to compute. This situation is further complicated by the fact that the flow equations (e.g., (1.1)) need to be solved many times for different source terms (f(x) in (1.1)), mobilities (λ(x) in (1.2)), and so on.
For example, in the simplest situation, two-phase immiscible flow in heterogeneous porous media is described by

div(λ(x)k(x)∇p) = f(x),   (1.2)

where λ(x), f(x) are coarse-scale functions that vary dynamically (in time). As the physical processes become complicated due to additional physics arising in multiphase flows, it becomes impossible to simulate these processes without coarsening the model equations. When performing simulations on a coarse grid, it is important to preserve important multiscale features of the physical processes. The multiscale methods considered in this book are intended for these purposes. Our multiscale methods compute effective properties of the media in the form of basis functions which are used, as in classical upscaling methods, to solve the processes on the coarse grid (see discussions in Section 1.2 and Figure 1.5 for an illustration). As we mentioned earlier, the media properties often contain uncertainties. These uncertainties are usually parameterized and one deals with a large set of permeability fields (realizations) with a multiscale nature. This brings an additional challenge to the fine-scale simulations and necessitates the use of coarse-scale models. The multiscale methods are important for such problems. For these problems, one can look for multiscale basis functions that contain both spatio-temporal scale information and the uncertainties. These basis functions allow us to reduce the dimension of the problem and simulate realistic stochastic processes. We show that the multiscale finite element methods studied in this book can easily be generalized to take into account both multiscale features of the solution and the associated uncertainties.

Fig. 1.4. Schematic description of fiber materials. (Footnote: Published with permission of Engineered Fibers Technology, LLC (www.EFTfi).)

1.2 Literature review
Many multiscale numerical methods have been developed and studied in the literature. In particular, many numerical methods have been developed with goals similar to ours. These include generalized finite element methods [33, 31, 30], wavelet-based numerical homogenization methods [56, 87, 84, 168], methods based on the homogenization theory (cf. [49, 95, 80]), equation-free computations (e.g., [166, 238, 224, 176, 242, 241]), variational multiscale methods [154, 59, 155, 209, 165], heterogeneous multiscale methods [97], matrix-dependent multigrid-based homogenization [168, 84], generalized p-FEM in homogenization [197, 198], mortar multiscale methods [228, 27, 226], upscaling methods (cf. [91, 199]), network methods [48, 44, 45, 46], and other methods [181, 180, 222, 223, 210, 81, 65, 63, 62, 124, 195]. The methods based on the homogenization theory have been successfully applied to determine the effective properties of heterogeneous materials. However, their range of applications is usually limited by restrictive assumptions on the media, such as scale separation and periodicity [43, 164]. Before we present a brief discussion of various multiscale methods, we would like to mention that multiscale finite element methods (MsFEMs) share similarities with upscaling methods. Upscaling procedures have been commonly applied and are effective in many cases. The main idea of upscaling techniques is to form coarse-scale equations with a prescribed analytical form that may differ from the underlying fine-scale equations. In multiscale methods, the fine-scale information is carried throughout the simulation and the coarse-scale equations are generally not expressed analytically but rather formed and solved numerically.
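A hedged one-dimensional aside on the upscaling idea just mentioned: for steady diffusion of the form (1.1) in 1-D, the exact effective coefficient over a block is the harmonic average of k(x), which can differ markedly from the naive arithmetic average (the coefficient field below is synthetic):

```python
import numpy as np

# Synthetic fine-scale conductivity with strong heterogeneity.
k = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.5, size=10_000)

k_arith = k.mean()                 # arithmetic average (wrong for 1-D flow)
k_harm = 1.0 / np.mean(1.0 / k)    # exact 1-D effective (homogenized) value
print(f"arithmetic: {k_arith:.2f}, harmonic: {k_harm:.2f}")
```

The gap between the two averages is one simple way to see why a prescribed analytical form for the coarse equations must be chosen carefully, and why low-conductivity cells dominate the effective behavior in series-like configurations.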
For problems with scale separation, one can establish the equivalence between upscaling and multiscale methods (see Section 2.8). The MsFEMs discussed in this book take their origin from the pioneering work of Babuška and Osborn [33, 31]. In this paper, the authors propose the use of multiscale basis functions for elliptic equations with a special multiscale coefficient which is the product of one-dimensional fields. This approach is extended in the work of Hou and Wu [145] to general heterogeneities. Hou and Wu [145] showed that boundary conditions for constructing basis functions are important for the accuracy of the method. They further proposed an oversampling technique to improve the subgrid capturing errors. Later on, the MsFEM of Hou and Wu was generalized to nonlinear problems in [104, 112]. In these papers, various global coupling approaches and subgrid capturing mechanisms are discussed. There are a number of approaches whose purpose is to form a general framework for multiscale simulations. Among them are the equation-free approach [166] and the heterogeneous multiscale method (HMM) [97]. These approaches are intended for solving macroscopic equations based on the information in the RVE and cover a wide range of applications. When applied to partial differential equations, MsFEMs are similar to these approaches. For such problems, the multiscale basis functions presented in the book are approximated using the solutions in the RVE. We note that for MsFEMs the local problems can be described by a set of equations different from the global equations. An important step in multiscale simulations is often to determine the form of the macroscopic equations and the variables upon which the basis functions depend. In many linear problems and problems with scale separation, these issues are well understood. Many general numerical approaches for multiscale simulations do not address the issues related to determining the variables on which the macroscopic quantities (e.g., multiscale basis functions) depend (see [176] where some of these issues are discussed).

Fig. 1.5. A schematic illustration of the upscaling concept: pre-computation of multiscale (or coarse-scale) quantities; external parameters (forcing, boundary conditions, mobilities, ...); coupling of multiscale parameters (coarse-scale problem); simulation results.

MsFEMs also share similarities with variational multiscale methods [59, 154, 155]. In this approach, the solution of the multiscale problem is divided into resolved (coarse) and unresolved parts. The objective is to compute the resolved part via the unresolved part of the solution and then approximate the unresolved part of the solution. This is shown in the framework of linear equations. Typically, the approximation of the unresolved part of the solution requires some type of localization (e.g., [154, 24]). The localization leads to methods similar to the MsFEM. This is discussed in detail in Section 2.8. There are also many other multiscale techniques with goals similar to ours. In particular, many methods share similarities in approximating the subgrid effects (the effects of the scales smaller than the coarse grid block size). There are a number of approaches that rely on techniques derived from homogenization theory (e.g., [198]). These methods are often restricted in terms of the structure of the multiscale coefficients. However, these approaches are more robust and accurate when the underlying multiscale structure satisfies the necessary constraints.
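As a minimal, hedged illustration of the local basis construction discussed above (1-D only, where the boundary-condition question is trivial; in higher dimensions the boundary conditions and oversampling mentioned above become essential):

```python
import numpy as np

def msfem_basis_1d(k_cells, x):
    """Multiscale basis on one coarse element [x[0], x[-1]].

    Solves (k(x) phi')' = 0 with phi = 1 on the left end and 0 on the right;
    in 1-D the flux k*phi' is constant, so phi follows the cumulative 1/k."""
    resistances = np.diff(x) / k_cells            # cellwise integral of 1/k
    cum = np.concatenate(([0.0], np.cumsum(resistances)))
    return 1.0 - cum / cum[-1]

x = np.linspace(0.0, 1.0, 201)                    # fine grid in a coarse element
k = 1.0 / (2.0 + 1.9 * np.sin(2 * np.pi * x[:-1] / 0.05))  # oscillatory, positive
phi = msfem_basis_1d(k, x)
print(phi[0], phi[-1])                            # 1.0 at x=0, 0.0 at x=1
```

Unlike a linear hat function, phi bends where k is small, and it is exactly this adaptation to the local microstructure that carries the fine-scale information into the coarse-scale formulation.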
The multiscale methods considered in this book pre-compute the effective parameters that are repeatedly used for different sources and boundary conditions. In this regard, these methods can be classified as upscaling methods where upscaled parameters are pre-computed. An illustration of the concept of upscaling is presented in Figure 1.5. In the multiscale approaches discussed in this book, one can re-use pre-computed quantities to form coarse-scale equations for different source terms, boundary conditions, and so on. Moreover, adaptive and parallel computations can be carried out with these methods, where one can downscale the computed coarse-scale solution in the regions of interest. These features of upscaling methods and MsFEMs are exploited in subsurface applications. The multiscale methods considered in this book differ from domain decomposition methods (e.g., [257]) where the local problems are solved many times. Domain decomposition methods are powerful techniques for solving multiphysics problems; however, the cost of iterations can be high, in particular, for multiscale problems. These iterations guarantee the convergence of domain decomposition methods under suitable assumptions. Multiscale methods with upscaling concepts in mind, on the other hand, attempt to find accurate subgrid capturing resolution and avoid the iterations. This may not always be possible, and for that reason, some type of hybrid method with accurate subgrid modeling can be considered in the future (e.g., [93]). One of the recent directions in multiscale simulations has been the use of some type of limited global information. The use of limited global information is not new in upscaling methods. The main idea of these approaches is to use some simplified surrogate models to extract important information about the nonlocal multiscale behavior of physical processes. The surrogate models are typically solved off-line in a pre-computation step and their computations can be expensive. However, they allow us to compute effective parameters that render a more accurate description of dynamic problems with varying source terms, boundary conditions, and so on. An example is two-phase immiscible flow in highly heterogeneous media. In [69, 103], single-phase flow information is used for accurate upscaling of two-phase flow and transport. In particular, the global single-phase flow equation is solved several times to compute upscaled permeabilities (conductivities) which are then used in the simulations of two-phase flow and transport on a coarse grid. These upscaling computations are performed off-line. Similar to upscaling methods using global information, multiscale finite element methods using limited global information are introduced in [1, 103, 218]. The work of [218] provides a theoretical foundation for upscaling using limited global information. These methods use limited global information to construct multiscale basis functions. Finally, we would like to mention that multiscale methods with limited global information share some similarities with reduced model techniques (e.g., [234]) where snapshots of the solution at previous times can be used to construct a reduced basis for approximating the solution. There are many other multiscale methods in the literature that discuss bridging scales in various applications. In this book, we mostly focus on the methods that are most relevant to MsFEMs.
We note that a main feature of MsFEMs is the use of a variational formulation at the coarse scale which allows us to couple multiscale basis functions. The fine-scale formulation of the problem that allows computing multiscale basis functions is not necessarily based on partial differential equations and can have a discrete formulation. In this regard, MsFEMs share conceptual similarities with some approaches that couple atomistic (discrete) and continuum effects. These approaches use a variational formulation at the coarse scale, but use a discrete atomistic description at the finest scales (e.g., the quasi-continuum method [251, 252]). This method has been widely used in material science applications.

1.3 Overview of the content of the book
The purpose of this book is to review some recent advances in multiscale finite element methods and their applications. Here, the notion "multiscale finite element methods" refers to a number of methods, such as the multiscale finite volume method, the mixed multiscale finite element method, and the like. The concept that unifies these methods is the coupling of oscillatory basis functions via various variational formulations. One of the main aspects of this coupling is the subgrid capturing errors that are extensively discussed in this book. The book is laid out in a way that is accessible to a broader audience. Each chapter is divided into the description of the numerical method and the computational results. At the end of each chapter, a section "Discussions" is presented. This section discusses extensions, existing methods, and other relevant research in this area. The analysis of the proposed methods is discussed in the last chapter. We have attempted to keep the book concise and therefore present convergence analysis only for a few representative cases. Some of the results are referred to earlier in the chapters to convey the convergence of the proposed methods. The book is organized in the following way. In the second chapter, we review the MsFEM for solving partial differential equations with multiscale solutions; see [145, 147, 146, 107, 71, 260, 14, 103]. The central goal of this approach is to obtain the large-scale solutions accurately and efficiently without resolving the small-scale details. The main idea is to construct finite element basis functions that capture the small-scale information within each element. The small-scale information is then brought to the large scales through the coupling of the global stiffness matrix. The basis functions are constructed from the leading-order homogeneous elliptic equation in each element. As a consequence, the basis functions are adapted to the local microstructure of the differential operator. We discuss various global coupling techniques and the computational issues associated with multiscale methods. Simple examples and pseudo-codes are presented. Issues such as performance and implementation of MsFEMs are discussed in Section 2.9. We present the comparison between the MsFEM and some other multiscale methods in Section 2.8. Some comments on generalizations of MsFEMs are presented in Section 2.4. In Chapter 3, we discuss the extension of MsFEMs to nonlinear problems. Our aim is to show that one can naturally extend the multiscale methods to nonlinear problems by replacing the multiscale basis functions with multiscale maps. Indeed, because the underlying equations are nonlinear, the small-scale features of the problem do not form a linear space.
We show that with this modification MsFEMs can be used for solving nonlinear partial differential equations. After presenting the methodology, some numerical examples are presented for solving nonlinear elliptic equations. The chapter also includes discussions on the extension of the method to nonlinear parabolic equations and multiphysics problems. Multiscale methods discussed in Chapter 2 and Chapter 3 apply local calculations to determine basis functions. Although effective in many cases, global effects can be important for some problems. The importance of global information has been illustrated within the context of upscaling procedures as well as multiscale computations in recent investigations (e.g., [69, 143, 1, 103, 218]). These studies have shown that the use of limited global information in the calculation of the coarse-scale parameters (such as basis functions) can significantly improve the accuracy of the resulting coarse model. In the fourth chapter of the book, we describe the use of limited global information in multiscale simulations. The chapter starts with a motivation and a motivating numerical example which show that the accuracy of multiscale methods deteriorates for problems with strong nonlocal effects. We introduce the basic idea of multiscale methods using limited global information. These approaches are used if the problem is solved repeatedly for varying parameters while keeping the source of heterogeneities fixed. Typical problems of this type arise, for example, in porous media applications. Numerical examples both for structured and unstructured grids for the mixed MsFEM and MsFVEM are presented in this chapter. In general, one can use simplified global information combined with local multiscale basis functions for accurate simulation purposes, which is discussed in Section 4.4. Chapter 5 is devoted to the applications of multiscale methods to multiphase flow and transport in highly heterogeneous porous media. We limit ourselves to a few applications. For two-phase flow and transport simulations, we consider the applications of multiscale methods to hyperbolic equations describing the dynamics of the phases and their coupling to multiscale methods for pressure equations in Section 5.2. In this section, the applications of nonlinear multiscale methods to hyperbolic equations are presented along with various subgrid treatment techniques for hyperbolic equations. We present an application of nonlinear MsFEMs to Richards' equation and fluid flows in highly deformable porous media in Sections 5.3 and 5.4. We include two short sections (contributed by J. E. Aarnes and S. H. Lee et al.) summarizing the applications of the mixed MsFEM and the multiscale finite volume (MsFV) method to reservoir modeling and simulation in Sections 5.5 and 5.6. These sections discuss the use of MsFEMs in the simulations of multiphase flow and transport which include various additional physics arising in more realistic petroleum applications. The extension of MsFEMs to stochastic differential equations is described in Section 5.7. To handle the uncertainties in heterogeneous coefficients, we propose to use a few realizations of the permeability to generate multiscale basis functions for the ensemble. Uncertainty quantification in inverse problems using multiscale methods is also discussed in this section. The aim is to speed up uncertainty quantification in inverse problems using fast multiscale finite element methods as surrogate models. We finish the chapter with a discussion on other applications of multiscale finite element
methods. In Chapter 6, we present an analysis of the MsFEMs discussed in Chapters 2, 3, and 4. Only some representative cases are studied in the book with the aim
Experimental Results on Non-Newtonian Fluids

Non-Newtonian fluids are a unique class of liquids that do not obey the classical laws of fluid mechanics established by Sir Isaac Newton. These fluids exhibit a complex relationship between stress and strain rate, making them behave differently from the more familiar Newtonian fluids like water or air. The behavior of non-Newtonian fluids can range from viscoelastic, where they resist deformation and exhibit a memory of past deformations, to dilatant, where their viscosity increases with shear rate, or pseudoplastic, where their viscosity decreases with shear rate.

To delve deeper into the fascinating properties of non-Newtonian fluids, we conducted a series of experiments designed to observe and understand their behavior under various conditions. In this article, we present the detailed experimental results from our investigation.

Experiment 1: Shear Thickening Behavior
In the first experiment, we aimed to observe the shear thickening behavior of a dilatant non-Newtonian fluid, such as a cornstarch suspension. We used a viscometer to measure the viscosity of the fluid at different shear rates. As the shear rate increased, we observed a significant increase in viscosity, indicating the shear thickening effect. This behavior is counterintuitive, as most fluids become less viscous with increased shear rate. The experiment revealed that the cornstarch suspension exhibited a dramatic increase in viscosity when subjected to rapid shear, making it behave like a solid under high-stress conditions.

Experiment 2: Flow Patterns
For our second experiment, we investigated the flow patterns of a non-Newtonian fluid in a closed loop system. We used a transparent tube filled with the fluid and observed its flow behavior as it was pumped through the loop. We found that the fluid exhibited complex flow patterns, with regions of high and low shear rate coexisting within the same flow stream and producing locally different apparent viscosities. This coupling between shear rate and viscosity is absent in Newtonian fluids, whose viscosity is uniform throughout the flow. The experiment highlighted the importance of considering both spatial and temporal variations in shear rate when studying non-Newtonian fluids.

Experiment 3: Rheological Properties
In our third experiment, we focused on characterizing the rheological properties of a non-Newtonian fluid using a rheometer. Rheometers allow for precise measurement of stress and strain rate relationships, providing insights into the fluid's viscoelastic behavior. We observed that the fluid exhibited both viscous and elastic components, with the elastic component becoming more dominant at lower frequencies. This finding is significant as it suggests that non-Newtonian fluids can store and release energy, behaving like viscoelastic solids under certain conditions.

Experiment 4: Impact Response
Finally, in our fourth experiment, we investigated the impact response of a non-Newtonian fluid. We dropped a weight into a container filled with the fluid and observed the resulting deformation and recovery behavior. We found that the fluid exhibited a unique ability to resist deformation upon impact but recovered its original shape quickly after the impact. This behavior is distinct from that of Newtonian fluids, which typically deform permanently upon impact. The experiment demonstrated the unique properties of non-Newtonian fluids under dynamic loading conditions.

In conclusion, our experiments have provided valuable insights into the complex behavior of non-Newtonian fluids.
These fluids exhibit a rich array of rheological properties that are not seen in Newtonian fluids, making them fascinating and challenging to study. The findings from our experiments have implications in various fields, including industrial processing, biomechanics, and material science, where non-Newtonian fluids play crucial roles. Future research in this area is likely to yield even more surprising discoveries and potential applications for these unique fluids.
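A hedged sketch of how such viscometer data are commonly summarized with the Ostwald-de Waele power-law model; the constants below are hypothetical and are not fitted to the experiments described above:

```python
import numpy as np

def apparent_viscosity(shear_rate, K, n):
    """Power-law model: eta = K * shear_rate**(n - 1).

    n > 1: shear-thickening (dilatant, e.g. a cornstarch suspension)
    n < 1: shear-thinning (pseudoplastic); n = 1 recovers a Newtonian fluid."""
    return K * shear_rate ** (n - 1.0)

shear_rates = np.logspace(-1, 3, 5)          # 0.1 to 1000 1/s (hypothetical)
for n, label in [(1.5, "shear-thickening"), (0.5, "shear-thinning")]:
    print(label, np.round(apparent_viscosity(shear_rates, K=2.0, n=n), 3))
```

Fitting K and n to measured viscosity-versus-shear-rate points (for example by linear regression in log-log space) gives a compact way to compare the fluids across the four experiments.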
Hangzhou Dianzi University: Declaration of Originality and Authorization for Use of the Dissertation

Declaration of Originality
I solemnly declare that the dissertation submitted here presents the results of research I carried out independently under the guidance of my supervisor. Except for the content explicitly cited in the text, this dissertation contains no material previously published or written by any other individual or group. All individuals and groups who made important contributions to this research are clearly acknowledged in the text. I accept full responsibility for any misrepresentation in the submitted dissertation and its supporting materials.

Author's signature: ____________  Date: ____________

Authorization for Use of the Dissertation
I fully understand the regulations of Hangzhou Dianzi University on the retention and use of dissertations, namely: the intellectual property rights of the dissertation work carried out during my graduate study belong to Hangzhou Dianzi University. I undertake that after leaving the university, Hangzhou Dianzi University will remain the named affiliation when papers are published or the results of this dissertation work are used. The university has the right to retain copies of the submitted dissertation and to make it available for reading and borrowing; the university may publish all or part of the dissertation and may preserve it by photocopying, reduced-size printing, or other means of reproduction. (For classified dissertations, these provisions apply after declassification.)

Author's signature: ____________  Date: ____________
Supervisor's signature: ____________  Date: ____________

Dissertation Submitted to Hangzhou Dianzi University for the Degree of Master

Research on Acoustic Correlation Log for Velocity Measurement

Candidate: Hu Yifeng
Supervisor: Prof. Liu Shunlan
February 2013

Abstract
The acoustic correlation log is an acoustic velocity-measurement instrument based on the principle of "waveform invariance", and it is widely used in both civilian and military applications. It has distinct advantages over the Doppler log. The acoustic correlation log transmits a wide beam vertically downward, which not only yields a stronger echo signal but also makes the measurement more tolerant of ship roll and pitch; moreover, with a smaller transducer, a lower operating frequency, and less power it can operate to greater depths. Light in weight and simple in structure, it is a promising velocity-measurement and navigation instrument.
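As a hedged toy illustration of the correlation principle described in the abstract (not the thesis's signal chain; the baseline, sampling rate, and noise level are assumptions): under waveform invariance, the echo received by the aft hydrophone is a delayed copy of the forward echo, and the ship speed follows from the lag of the cross-correlation peak via v = d/(2*tau).

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 50_000.0                 # sampling rate, Hz (assumed)
d = 0.2                       # fore-aft hydrophone baseline, m (assumed)
v_true = 5.0                  # true ship speed, m/s
tau = d / (2.0 * v_true)      # waveform-invariance lag, s
lag = int(round(tau * fs))    # lag in samples

n = 8192
fore = rng.standard_normal(n)                       # echo on forward hydrophone
aft = np.roll(fore, lag) + 0.2 * rng.standard_normal(n)
aft[:lag] = 0.2 * rng.standard_normal(lag)          # discard wrapped samples

xcorr = np.correlate(aft, fore, mode="full")        # peak marks the best lag
tau_hat = (int(np.argmax(xcorr)) - (n - 1)) / fs
print(f"estimated speed: {d / (2.0 * tau_hat):.2f} m/s")   # ~5 m/s
```

The wide downward beam mentioned in the abstract helps here: a stronger, more stable echo raises the correlation peak above the noise floor, which directly improves the lag (and hence speed) estimate.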
Stabilized high-power laser system for the gravitational wave detector Advanced LIGO

P. Kwee,1,* C. Bogan,2 K. Danzmann,1,2 M. Frede,4 H. Kim,1 P. King,5 J. Pöld,1 O. Puncken,3 R. L. Savage,5 F. Seifert,5 P. Wessels,3 L. Winkelmann,3 and B. Willke2
1 Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Hannover, Germany
2 Leibniz Universität Hannover, Hannover, Germany
3 Laser Zentrum Hannover e.V., Hannover, Germany
4 neoLASE GmbH, Hannover, Germany
5 LIGO Laboratory, California Institute of Technology, Pasadena, California, USA
*patrick.kwee@aei.mpg.de

Abstract: An ultra-stable, high-power cw Nd:YAG laser system, developed for the ground-based gravitational wave detector Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory), was comprehensively characterized. Laser power, frequency, beam pointing and beam quality were simultaneously stabilized using different active and passive schemes. The output beam, the performance of the stabilization, and the cross-coupling between different stabilization feedback control loops were characterized and found to fulfill most design requirements. The employed stabilization schemes and the achieved performance are of relevance to many high-precision optical experiments.

© 2012 Optical Society of America
OCIS codes: (140.3425) Laser stabilization; (120.3180) Interferometry.

References and links
1. S. Rowan and J. Hough, "Gravitational wave detection by interferometry (ground and space)," Living Rev. Relativity 3, 1–3 (2000).
2. P. R. Saulson, Fundamentals of Interferometric Gravitational Wave Detectors (World Scientific, 1994).
3. G. M. Harry, "Advanced LIGO: the next generation of gravitational wave detectors," Class. Quantum Grav. 27, 084006 (2010).
4. B. Willke, "Stabilized lasers for advanced gravitational wave detectors," Laser Photon. Rev. 4, 780–794 (2010).
5. P. Kwee, "Laser characterization and stabilization for precision interferometry," Ph.D. thesis, Universität Hannover (2010).
6. K. Somiya, Y. Chen, S. Kawamura, and N. Mio, "Frequency noise and intensity noise of next-generation gravitational-wave detectors with RF/DC readout schemes," Phys. Rev. D 73, 122005 (2006).
7. B. Willke, P. King, R. Savage, and P. Fritschel, "Pre-stabilized laser design requirements," internal technical report T050036-v4, LIGO Scientific Collaboration (2009).
8. L. Winkelmann, O. Puncken, R. Kluzik, C. Veltkamp, P. Kwee, J. Poeld, C. Bogan, B. Willke, M. Frede, J. Neumann, P. Wessels, and D. Kracht, "Injection-locked single-frequency laser with an output power of 220 W," Appl. Phys. B 102, 529–538 (2011).
9. T. J. Kane and R. L. Byer, "Monolithic, unidirectional single-mode Nd:YAG ring laser," Opt. Lett. 10, 65–67 (1985).
10. I. Freitag, A. Tünnermann, and H. Welling, "Power scaling of diode-pumped monolithic Nd:YAG lasers to output powers of several watts," Opt. Commun. 115, 511–515 (1995).
11. M. Frede, B. Schulz, R. Wilhelm, P. Kwee, F. Seifert, B. Willke, and D. Kracht, "Fundamental mode, single-frequency laser amplifier for gravitational wave detectors," Opt. Express 15, 459–465 (2007).
12. A. D. Farinas, E. K. Gustafson, and R. L. Byer, "Frequency and intensity noise in an injection-locked, solid-state laser," J. Opt. Soc. Am. B 12, 328–334 (1995).
13. R. Bork, M. Aronsson, D. Barker, J. Batch, J. Heefner, A. Ivanov, R. McCarthy, V. Sandberg, and K. Thorne, "New control and data acquisition system in the Advanced LIGO project," Proc. of Industrial Control And Large Experimental Physics Control System (ICALEPCS) conference (2011).
14. "Experimental physics and industrial control system," /epics/.
15. P. Kwee and B. Willke, "Automatic laser beam characterization of monolithic Nd:YAG nonplanar ring lasers," Appl. Opt. 47, 6022–6032 (2008).
16. P. Kwee, F. Seifert, B. Willke, and K. Danzmann, "Laser beam quality and pointing measurement with an optical resonator," Rev. Sci. Instrum. 78, 073103 (2007).
17. A. Rüdiger, R. Schilling, L. Schnupp, W. Winkler, H. Billing, and K. Maischberger, "A mode selector to suppress fluctuations in laser beam geometry," Opt. Acta 28, 641–658 (1981).
18. B. Willke, N. Uehara, E. K. Gustafson, R. L. Byer, P. J. King, S. U. Seel, and R. L. Savage, "Spatial and temporal filtering of a 10-W Nd:YAG laser with a Fabry-Perot ring-cavity premode cleaner," Opt. Lett. 23, 1704–1706 (1998).
19. J. H. Pöld, "Stabilization of the Advanced LIGO 200 W laser," Diploma thesis, Leibniz Universität Hannover (2009).
20. E. D. Black, "An introduction to Pound-Drever-Hall laser frequency stabilization," Am. J. Phys. 69, 79–87 (2001).
21. R. W. P. Drever, J. L. Hall, F. V. Kowalski, J. Hough, G. M. Ford, A. J. Munley, and H. Ward, "Laser phase and frequency stabilization using an optical resonator," Appl. Phys. B 31, 97–105 (1983).
22. A. Bullington, B. Lantz, M. Fejer, and R. Byer, "Modal frequency degeneracy in thermally loaded optical resonators," Appl. Opt. 47, 2840–2851 (2008).
23. G. Mueller, "Beam jitter coupling in Advanced LIGO," Opt. Express 13, 7118–7132 (2005).
24. V. Delaubert, N. Treps, M. Lassen, C. C. Harb, C. Fabre, P. K. Lam, and H.-A. Bachor, "TEM10 homodyne detection as an optimal small-displacement and tilt-measurement scheme," Phys. Rev. A 74, 053823 (2006).
25. P. Kwee, B. Willke, and K. Danzmann, "Laser power noise detection at the quantum-noise limit of 32 A photocurrent," Opt. Lett. 36, 3563–3565 (2011).
26. A. Araya, N. Mio, K. Tsubono, K. Suehiro, S. Telada, M. Ohashi, and M. Fujimoto, "Optical mode cleaner with suspended mirrors," Appl. Opt. 36, 1446–1453 (1997).
27. P. Kwee, B. Willke, and K. Danzmann, "Shot-noise-limited laser power stabilization with a high-power photodiode array," Opt. Lett. 34, 2912–2914 (2009).
28. B. Lantz, P. Fritschel, H. Rong, E. Daw, and G. González, "Quantum-limited optical phase detection at the 10^−10 rad level," J. Opt. Soc. Am. A 19, 91–100 (2002).

1. Introduction
Interferometric gravitational wave detectors [1, 2] perform one of the most precise differential length measurements ever. Their goal is to directly detect the faint signals of gravitational waves emitted by astrophysical sources. The Advanced LIGO (Laser Interferometer Gravitational-Wave Observatory) [3] project is currently installing three second-generation, ground-based detectors at two observatory sites in the USA. The 4 kilometer-long baseline Michelson interferometers have an anticipated tenfold better sensitivity than their first-generation counterparts (Initial LIGO) and will presumably reach a strain sensitivity between 10^−24 and 10^−23 Hz^−1/2. One key technology necessary to reach this extreme sensitivity is an ultra-stable high-power laser system [4, 5]. A high laser output power is required to reach a high signal-to-quantum-noise ratio, since the effect of quantum noise at high frequencies in the gravitational wave readout is reduced with increasing circulating laser power in the interferometer. In addition to quantum noise, technical laser noise coupling to the gravitational wave channel is a major noise source [6]. Thus it is important to reduce the coupling of laser noise, e.g. by optical design or by exploiting symmetries, and to reduce laser noise itself by various active and passive stabilization schemes. In this article, we report on the pre-stabilized laser (PSL) of the Advanced LIGO detector. The PSL is based on a high-power solid-state laser that is comprehensively stabilized. One laser system was set up at the Albert-Einstein-Institute (AEI) in Hannover, Germany, the so-called PSL reference system. Another identical PSL has already been installed at one Advanced LIGO site, the one near Livingston, LA, USA, and two more PSLs will be installed at the second
site at Hanford, WA, USA. We have characterized the reference PSL and the first observatory PSL. For this we measured various beam parameters and noise levels of the output beam in the gravitational wave detection frequency band from about 10 Hz to 10 kHz, measured the performance of the active and passive stabilization schemes, and determined upper bounds for the cross coupling between different control loops. At the time of writing the PSL reference system has been operated continuously for more than 18 months, and continues to operate reliably. The reference system delivered a continuous-wave, single-frequency laser beam at 1064 nm wavelength with a maximum power of 150 W with 99.5% in the TEM00 mode. The active and passive stabilization schemes efficiently reduced the technical laser noise by several orders of magnitude such that most design requirements [5, 7] were fulfilled. In the gravitational wave detection frequency band the relative power noise was as low as 2×10^−8 Hz^−1/2, relative beam pointing fluctuations were as low as 1×10^−7 Hz^−1/2, and an in-loop measurement of the frequency noise was consistent with the maximum acceptable frequency noise of about 0.1 Hz Hz^−1/2. The cross couplings between the control loops were, in general, rather small or at the expected levels. Thus we were able to optimize each loop individually and observed no instabilities due to cross couplings. This stabilized laser system is an indispensable part of Advanced LIGO and fulfilled nearly all design goals concerning the maximum acceptable noise levels of the different beam parameters right after installation. Furthermore all or a subset of the implemented stabilization schemes might be of interest for many other high-precision optical experiments that are limited by laser noise. Besides gravitational wave detectors, stabilized laser systems are used e.g. in the field of optical frequency standards, macroscopic quantum objects, precision spectroscopy and optical traps. In the following section the laser system, the stabilization scheme and the characterization methods are described (Section 2). Then, the results of the characterization (Section 3) and the conclusions (Section 4) are presented.

2. Laser system and stabilization
The PSL consists of the laser, developed and fabricated by Laser Zentrum Hannover e.V. (LZH) and neoLASE, and the stabilization, developed and integrated by AEI. The optical components of the PSL are on a commercial optical table, occupying a space of about 1.5×3.5 m², in a clean, dust-free environment. At the observatory sites the optical table is located in an acoustically isolated cleanroom. Most of the required electronics, the laser diodes for pumping the laser, and water chillers for cooling components on the optical table are placed outside of this cleanroom. The laser itself consists of three stages (Fig. 1). An almost final version of the laser, the so-called engineering prototype, is described in detail in [8]. The primary focus of this article is the stabilization and characterization of the PSL. Thus only a rough overview of the laser and the minor modifications implemented between engineering prototype and reference system are given in the following. The first stage, the master laser, is a commercial non-planar ring-oscillator [9, 10] (NPRO) manufactured by InnoLight GmbH in Hannover, Germany. This solid-state laser uses a Nd:YAG crystal as the laser medium and resonator at the same time. The NPRO is pumped by laser diodes at 808 nm and delivers an output power of 2 W. An internal power stabilization, called the noise eater, suppresses the
relaxation oscillation at around 1 MHz. Due to its monolithic resonator, the laser has exceptional intrinsic frequency stability. The two subsequent laser stages, used for power scaling, inherit the frequency stability of the master laser. The second stage (medium-power amplifier) is a single-pass amplifier [11] with an output power of 35 W. The seed laser beam from the NPRO stage passes through four Nd:YVO4 crystals which are longitudinally pumped by fiber-coupled laser diodes at 808 nm.

Fig. 1. Pre-stabilized laser system of Advanced LIGO. The three-staged laser (NPRO, medium-power amplifier, high-power oscillator) and the stabilization scheme (pre-mode-cleaner, power and frequency stabilization) are shown. The input-mode-cleaner is not part of the PSL but closely related. NPRO, non-planar ring oscillator; EOM, electro-optic modulator; FI, Faraday isolator; AOM, acousto-optic modulator.

The third stage is an injection-locked ring oscillator [8] with an output power of about 220 W, called the high-power oscillator (HPO). Four Nd:YAG crystals are used as the active media. Each is longitudinally pumped by seven fiber-coupled laser diodes at 808 nm. The oscillator is injection-locked [12] to the previous laser stage using a feedback control loop. A broadband EOM (electro-optic modulator) placed between the NPRO and the medium-power amplifier is used to generate the required phase modulation sidebands at 35.5 MHz. Thus the high output power and good beam quality of this last stage are combined with the good frequency stability of the previous stages. The reference system features some minor modifications compared to the engineering prototype [8] concerning the optics: the external halo aperture was integrated into the laser system, permanently improving the beam quality. Additionally, a few minor design flaws related to the mechanical structure and the optical layout were engineered out. This did not degrade the output performance, nor the characteristics of the locked laser. In general the PSL is designed to be operated in two different power modes. In high-power mode all three laser stages are engaged with a power of about 160 W at the PSL output. In low-power mode the high-power oscillator is turned off and a shutter inside the laser resonator is closed. The beam of the medium-power stage is reflected at the output coupler of the high-power stage, leaving a residual power of about 13 W at the PSL output. This low-power mode will be used in the early commissioning phase and in the low-frequency-optimized operation mode of Advanced LIGO and is not discussed further in this article. The stabilization has three sections (Fig. 1: PMC, PD2, reference cavity): a passive resonator, the so-called pre-mode-cleaner (PMC), is used to filter the laser beam spatially and temporally (see subsection 2.1). Two pick-off beams at the PMC are used for the active power stabilization (see subsection 2.2) and the active frequency pre-stabilization, respectively (see subsection 2.3). In general most stabilization feedback control loops of the PSL are implemented using analog electronics. A real-time computer system (Control and Data Acquisition Systems, CDS, [13]), which is common to many other subsystems of Advanced LIGO, is utilized to control and monitor important parameters of the analog electronics. The lock acquisition of various loops, a few
Mar 2012; published 24 Apr 2012 (C) 2012 OSA7 May 2012 / Vol. 20, No. 10 / OPTICS EXPRESS 10620slow digital control loops,and the data acquisition are implemented using this computer sys-tem.Many signals are recorded at different sampling rates ranging from16Hz to33kHz for diagnostics,monitoring and vetoing of gravitational wave signals.In total four real-time pro-cesses are used to control different aspects of the laser system.The Experimental Physics and Industrial Control System(EPICS)[14]and its associated user tools are used to communicate with the real-time software modules.The PSL contains a permanent,dedicated diagnostic instrument,the so called diagnostic breadboard(DBB,not shown in Fig.1)[15].This instrument is used to analyze two different beams,pick-off beams of the medium power stage and of the HPO.Two shutters are used to multiplex these to the DBB.We are able to measurefluctuations in power,frequency and beam pointing in an automated way with this instrument.In addition the beam quality quantified by the higher order mode content of the beam was measured using a modescan technique[16].The DBB is controlled by one real-time process of the CDS.In contrast to most of the other control loops in the PSL,all DBB control loops were implemented digitally.We used this instrument during the characterization of the laser system to measure the mentioned laser beam parameters of the HPO.In addition we temporarily placed an identical copy of the DBB downstream of the PMC to characterize the output beam of the PSL reference system.2.1.Pre-mode-cleanerA key component of the stabilization scheme is the passive ring resonator,called the pre-mode-cleaner(PMC)[17,18].It functions to suppress higher-order transverse modes,to improve the beam quality and the pointing stability of the laser beam,and tofilter powerfluctuations at radio frequencies.The beam transmitted through this resonator is the output beam of the PSL, and it is delivered to the subsequent subsystems of the gravitational wave detector.We developed and used a computer program[19]to model thefilter effects of the PMC as a function of various resonator parameters in order to aid its design.This led to a resonator with a bow-tie configuration consisting of four low-loss mirrors glued to an aluminum spacer. 
The optical round-trip length is2m with a free spectral range(FSR)of150MHz.The inci-dence angle of the horizontally polarized laser beam is6◦.Theflat input and output coupling mirrors have a power transmission of2.4%and the two concave high reflectivity mirrors(3m radius of curvature)have a transmission of68ppm.The measured bandwidth was,as expected, 560kHz which corresponds to afinesse of133and a power build-up factor of42.The Gaussian input/output beam had a waist radius of about568µm and the measured acquired round-trip Gouy phase was about1.7rad which is equivalent to0.27FSR.One TEM00resonance frequency of the PMC is stabilized to the laser frequency.The Pound-Drever-Hall(PDH)[20,21]sensing scheme is used to generate error signals,reusing the phase modulation sidebands at35.5MHz created between NPRO and medium power amplifier for the injection locking.The signal of the photodetector PD1,placed in reflection of the PMC, is demodulated at35.5MHz.This photodetector consists of a1mm InGaAs photodiode and a transimpedance amplifier.A piezo-electric element(PZT)between one of the curved mirrors and the spacer is used as a fast actuator to control the round-trip length and thereby the reso-nance frequencies of the PMC.With a maximum voltage of382V we were able to change the round-trip length by about2.4µm.An analog feedback control loop with a bandwidth of about 7kHz is used to stabilize the PMC resonance frequency to the laser frequency.In addition,the electronics is able to automatically bring the PMC into resonance with the laser(lock acquisition).For this process a125ms period ramp signal with an amplitude cor-responding to about one FSR is applied to the PZT of the PMC.The average power on pho-todetector PD1is monitored and as soon as the power drops below a given threshold the logic considers the PMC as resonant and closes the analog control loop.This lock acquisition proce-#161737 - $15.00 USD Received 18 Jan 2012; revised 27 Feb 2012; accepted 4 Mar 2012; published 24 Apr 2012 (C) 2012 OSA7 May 2012 / Vol. 20, No. 
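The resonator figures quoted above are mutually consistent and can be checked from the mirror transmissions alone. The following Python sketch is an illustration only, not the design program [19] mentioned above; it assumes the standard high-finesse approximations for a lossless four-mirror ring (coating absorption and scatter neglected) and reproduces the measured finesse, bandwidth, power build-up and the first-order-mode suppression quoted below to within a few percent:

```python
import math

# PMC parameters quoted in the text.
FSR = 150e6      # free spectral range [Hz] (2 m round-trip length)
T_IN = 0.024     # power transmission of the flat input coupler
T_OUT = 0.024    # power transmission of the flat output coupler
T_HR = 68e-6     # power transmission of each concave high-reflectivity mirror
GOUY = 1.7       # measured round-trip Gouy phase [rad]

# Total fractional power lost from the circulating field per round trip.
round_trip_loss = T_IN + T_OUT + 2 * T_HR

# High-finesse approximations for finesse, linewidth and resonant build-up.
finesse = 2 * math.pi / round_trip_loss        # ~131 (measured: 133)
fwhm = FSR / finesse                           # full linewidth [Hz]
hwhm = fwhm / 2                                # ~575 kHz (measured: 560 kHz)
buildup = 4 * T_IN / round_trip_loss ** 2      # ~41 (quoted: 42)

# Amplitude suppression of a first-order transverse mode that is detuned from
# the TEM00 resonance by the round-trip Gouy phase (cf. the pointing
# suppression factor of about 61 quoted below for the TEM10/TEM01 modes).
hom_suppression = math.sqrt(
    1 + (2 * finesse / math.pi) ** 2 * math.sin(GOUY / 2) ** 2)

print(f"finesse   ~ {finesse:.0f}")
print(f"HWHM      ~ {hwhm / 1e3:.0f} kHz")
print(f"build-up  ~ {buildup:.0f}")
print(f"HOM supp. ~ {hom_suppression:.0f}")
```

The small residual differences from the measured values are consistent with the neglected round-trip losses.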
One real-time process of the CDS is dedicated to controlling the PMC electronics. This includes parameters such as the proportional gain of the loop or the lock acquisition parameters.

In addition to the PZT actuator, two heating foils, delivering a maximum total heating power of 14 W, are attached to the aluminum spacer to control its temperature and thereby the round-trip length on timescales longer than 3 s. We measured a heating and cooling 1/e time constant of about 2 h with a range of 4.5 K, which corresponds to about 197 FSR. During maintenance periods we heat the spacer with 7 W to reach a spacer temperature of about 2.3 K above room temperature in order to optimize the dynamic range of this actuator. A digital control loop uses this heater as an actuator to off-load the PZT actuator, allowing compensation for slow room temperature and laser frequency drifts.

The PMC is placed inside a pressure-tight tank at atmospheric pressure for acoustic shielding, to avoid contamination of the resonator mirrors, and to minimize optical path length changes induced by atmospheric pressure variations. We used only low-outgassing materials and fabricated the PMC in a cleanroom in order to keep the initial mirror contamination to a minimum and to sustain a high long-term throughput.

The PMC filters the laser beam and improves the beam quality of the laser by suppressing higher-order transverse modes [17]. The acquired round-trip Gouy phase of the PMC was chosen in such a way that the resonance frequencies of higher-order TEM modes are clearly separated from the TEM00 resonance frequency. Thus these modes are not resonant and are mainly reflected by the PMC, whereas the TEM00 mode is transmitted. However, during the design phase we underestimated the thermal effects in the PMC, such that at nominal circulating power the round-trip Gouy phase is close to 0.25 FSR and the resonance of the TEM40 mode is close to that of the TEM00 mode. To characterize the mode-cleaning performance we measured the beam quality upstream and downstream of the PMC with the two independent DBBs.

At 150 W in the transmitted beam, the circulating power in the PMC is about 6.4 kW, and the intensity at the mirror surface can be as high as 1.8×10^10 W m^(-2). At these power levels even small absorptions in the mirror coatings cause thermal effects which slightly change the mirror curvature [22]. To estimate these thermal effects we analyzed the transmitted beam as a function of the circulating power using the DBB. In particular, we measured the mode content of the LG10 and TEM40 modes. Changes of the PMC eigenmode waist size showed up as variations of the LG10 mode content. A power dependence of the round-trip Gouy phase caused a variation of the power within the TEM40 mode, since its resonance frequency is close to a TEM00 mode resonance and thus the suppression of this mode depends strongly on the Gouy phase. We adjusted the input power to the PMC such that the transmitted power ranged from 100 W to 150 W, corresponding to a circulating power between 4.2 kW and 6.4 kW. We used our PMC computer simulation to deduce the power dependence of the eigenmode waist size and the round-trip Gouy phase. The results are given in Section 3.1.

At all circulating power levels, however, the TEM10 and TEM01 modes are strongly suppressed by the PMC and thus beam pointing fluctuations are reduced. Pointing fluctuations can be expressed to first order as power fluctuations of the TEM10 and TEM01 modes [23,24]. The PMC reduces the field amplitude of these modes, and thus the pointing fluctuations, by a factor of about 61, according to the measured finesse and round-trip Gouy phase. Keeping beam pointing fluctuations small is important since they couple to the gravitational wave channel via small differential misalignments of the interferometer optics. Thus stringent design requirements, at the 10^(-6) Hz^(-1/2) level for relative pointing, were set. To verify the pointing suppression effect of the PMC we used the DBBs to measure the beam pointing fluctuations upstream and downstream of the PMC.

Fig. 2. Detailed schematic of the power noise sensor setup for the first power stabilization loop. This setup corresponds to PD2 in the overview in Fig. 1. λ/2, waveplate; PBS, polarizing beam splitter; BD, glass filters used as beam dump; PD, single-element photodetector; QPD, quadrant photodetector.

The resonator design has an even number of nearly normal-incidence reflections. Thus the resonance frequencies of horizontally and vertically polarized light are almost identical, and the PMC does not act as a polarizer. Therefore we use a thin-film polarizer upstream of the PMC to reach the required purity of larger than 100:1 in horizontal polarization.

Finally, the PMC reduces technical power fluctuations at radio frequencies (RF). A good power stability between 9 MHz and 100 MHz is necessary, as the phase-modulated light injected into the interferometer is used to sense several degrees of freedom of the interferometer that need to be controlled. Power noise around these phase modulation sidebands would be a noise source for the respective stabilization loop. The PMC has a bandwidth (HWHM) of about 560 kHz and acts to first order as a low-pass filter for power fluctuations with a -3 dB corner frequency at this frequency. To verify that the suppression of RF power fluctuations is sufficient to fulfill the design requirements, we measured the relative power noise up to 100 MHz downstream of the PMC with a dedicated experiment involving the optical ac coupling technique [25].

In addition, the PMC serves the very important purpose of defining the spatial laser mode for the downstream subsystem, namely the input optics (IO) subsystem. The IO subsystem is responsible, among other things, for further stabilizing the laser beam with the suspended input mode cleaner [26] before the beam is injected into the interferometer. Modifications of the beam alignment or beam size of the laser system, which were and might be unavoidable, e.g., due to maintenance, do not propagate downstream of the PMC to first order due to its mode-cleaning effect. Furthermore, we benefit from a similar isolating effect for the active power and frequency stabilization by using the beams transmitted through the curved high-reflectivity mirrors of the PMC.

2.2. Power stabilization

The passive filtering effect of the PMC reduces power fluctuations significantly only above the PMC bandwidth. In the detection band from about 10 Hz to 10 kHz good power stability is required, since fluctuations couple via the radiation pressure imbalance and the dark-fringe offset to the gravitational wave channel. Thus two cascaded active control loops, the first and second power stabilization loops, are used to reduce power fluctuations, which are mainly caused by the HPO stage.
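The division of labor between passive and active filtering can be made concrete. The sketch below treats the PMC as the single-pole low-pass filter for power fluctuations described above, with the corner at the 560 kHz half-bandwidth (a simplification; the real cavity response also repeats at multiples of the FSR). It shows that the passive suppression is essentially unity across the 10 Hz to 10 kHz detection band, which is why the active loops must carry the burden there, while the 9-100 MHz modulation sidebands are already strongly attenuated:

```python
import math

F_CORNER = 560e3  # PMC half-bandwidth (HWHM) [Hz]; -3 dB corner for power noise

def passive_suppression(f_hz):
    """Attenuation factor of relative power fluctuations at Fourier frequency
    f_hz for a single-pole low-pass with corner F_CORNER (>1 means reduced)."""
    return math.sqrt(1.0 + (f_hz / F_CORNER) ** 2)

for f in (10.0, 1e3, 10e3, 560e3, 9e6, 100e6):
    print(f"{f:12.0f} Hz -> suppression x {passive_suppression(f):7.1f}")
```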
The first loop uses a low-noise photodetector (PD2, see Figs. 1 and 2) at one pick-off port of the PMC to measure the power fluctuations downstream of the PMC. An analog electronics feedback control loop and an AOM (acousto-optic modulator) as actuator, located upstream of the PMC, are used to stabilize the power.

Scattered light turned out to be a critical noise source for this first loop. Thus we placed all required optical and opto-electronic components into a box to shield them from scattered light (see Fig. 2). The beam transmitted by the curved PMC mirror has a power of about 360 mW. This beam is first attenuated in the box using a λ/2 waveplate and a thin-film polarizer, such that we are able to adjust the power on the photodetectors to the optimal operation point. Afterwards the beam is split by a 50:50 beam splitter. The beams are directed to two identical photodetectors, one for the control loop (PD2a, in-loop detector) and one for independent out-of-loop measurements to verify the achieved power stability (PD2b, out-of-loop detector). These photodetectors consist of a 2 mm InGaAs photodiode (PerkinElmer C30642GH), a transimpedance amplifier and an integrated signal-conditioning filter. At the chosen operation point a power of about 4 mW illuminates each photodetector, generating a photocurrent of about 3 mA. Thus the shot noise is at a relative power noise of 10^(-8) Hz^(-1/2). The signal-conditioning filter has a gain of 0.2 at very low frequencies (<70 mHz) and amplifies the photodetector signal in the important frequency range between 3.3 Hz and 120 Hz by about 52 dB. This signal-conditioning filter reduces the electronics noise requirements on all subsequent stages, but has the drawback that the range between 3.3 Hz and 120 Hz is limited to maximum peak-to-peak relative power fluctuations of 5×10^(-3). Thus the signal-conditioned channel is in its designed operation range only when the power stabilization loop is closed, and therefore it is not possible to measure the free-running power noise using this channel due to saturation.

The uncoated glass windows of the photodiodes were removed, and the laser beam hits the photodiodes at an incidence angle of 45°. The residual reflection from the photodiode surface is dumped into a glass filter (Schott BG39) at the Brewster angle.

Beam position fluctuations in combination with spatial inhomogeneities in the photodiode responsivity are another noise source for the power stabilization. We placed a silicon quadrant photodetector (QPD) in the box to measure the beam position fluctuations of a low-power beam picked off the main beam in the box. The beam parameters, in particular the Gouy phase, at the QPD are the same as on the power sensing detectors. Thus the beam position fluctuations measured with the QPD are the same as the ones on the power sensing photodetectors, assuming that the position fluctuations are caused upstream of the QPD pick-off point. We used the QPD to measure beam position fluctuations only for diagnostic and noise projection purposes.

In a slightly modified experiment, we replaced one turning mirror in the path to the power stabilization box by a mirror attached to a tip/tilt PZT element. We measured the typical coupling between beam position fluctuations generated by the PZT and the residual relative photocurrent fluctuations measured with the out-of-loop photodetector. This coupling was between 1 m^(-1) and 10 m^(-1), which is a typical value observed in different power stabilization experiments as well.
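Two of the numbers in this subsection can be reproduced in a few lines. The shot-noise figure follows directly from the 3 mA photocurrent, and the measured pointing-to-power coupling permits a simple noise projection. Note that the beam-position-noise value below is purely illustrative (the text does not quote one), so the projected number demonstrates the method rather than a measured result:

```python
import math

E_CHARGE = 1.602e-19  # elementary charge [C]

def shot_noise_rpn(photocurrent):
    """Shot-noise-limited relative power noise [1/sqrt(Hz)] of a detected
    photocurrent: sqrt(2e/I)."""
    return math.sqrt(2.0 * E_CHARGE / photocurrent)

print(f"shot noise RPN at 3 mA: {shot_noise_rpn(3e-3):.1e} /sqrt(Hz)")  # ~1e-8

# Noise projection: apparent relative power noise caused by beam position
# noise via the measured coupling of 1..10 per metre.
coupling = 10.0   # worst-case measured coupling [1/m]
x_noise = 1e-9    # ASSUMED beam position noise [m/sqrt(Hz)], illustration only
print(f"projected RPN from pointing: {coupling * x_noise:.1e} /sqrt(Hz)")
```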
We measured this coupling factor to be able to calculate the noise contribution in the out-of-loop photodetector signal due to beam position fluctuations (see Subsection 3.3). Since this tip/tilt actuator was only temporarily in the setup, we are not able to measure the coupling on a regular basis.

Both power sensing photodetectors are connected to analog feedback control electronics. A low-pass (100 mHz corner frequency) filtered reference value is subtracted from one signal, which is subsequently passed through several control loop filter stages. With the power stabilization activated, we are able to control the power on the photodetectors, and thereby the PSL output power, via the reference level on time scales longer than 10 s. The reference level and other important parameters of these electronics are controlled by one dedicated real-time process of the CDS. The actuation or control signal of the electronics is passed to an AOM driver
(1) To prospectively evaluate the effect of heart rate, heart rate variability, and calcification on dual-source computed tomography (CT) image quality, and to prospectively assess the diagnostic accuracy of dual-source CT for coronary artery stenosis, using invasive coronary angiography as the reference standard.
(2) Chest radiography plays an essential role in the diagnosis of thoracic disease and is the most frequently performed radiologic examination in the United States. Since the discovery of X rays more than a century ago, advances in technology have yielded numerous improvements in thoracic imaging. Evolutionary progress in film-based imaging has led to the development of excellent screen-film systems specifically designed for chest radiography.
Good journals in remote sensing and GIS

Domestic journals:

1) 遥感学报 (Journal of Remote Sensing). Known as 《环境遥感》 before 1998, this is one of the better domestic remote sensing journals. Its editor-in-chief is academician Xu Guanhua, former director of the CAS Institute of Remote Sensing Applications and now Minister of Science and Technology. It carries many remote sensing papers; leading figures in remote sensing theory, such as Professor Jin Yaqiu of Fudan University and the newly elected academician Professor Li Xiaowen of Beijing Normal University, publish here regularly. Application papers on resources and the environment based on remote sensing and GIS are also strong, mainly from the CAS Institutes of Geography and of Remote Sensing; in addition there is work on image-processing algorithms and newer remote sensing methods such as radar interferometry and hyperspectral imaging, mainly from the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing at Wuhan University and the CAS Institute of Remote Sensing.

(2) 测绘学报 (Acta Geodaetica et Cartographica Sinica). It emphasizes fundamental surveying and mapping theory, but often carries very good review articles, and its doctoral dissertation abstracts are excellent. Editor-in-chief academician Chen Junyong is a rigorous scholar, so substandard papers rarely find a market there; the journal would rather publish less than lower its standards. As of 2001 it was still a quarterly, with few but fine papers. Its coverage leans toward geodesy (GPS), with comparatively few GIS papers.

(3) 武测学报 (renamed 《武汉大学学报·信息科学版》, Geomatics and Information Science of Wuhan University, in 2001). This is the journal of the former Wuhan Technical University of Surveying and Mapping; its editor-in-chief is Professor Li Deren, an academician of both the Chinese Academy of Sciences and the Chinese Academy of Engineering. Many innovative and theoretical surveying and mapping research results are published here; it showcases the highest level of Chinese surveying and mapping research and guides the direction of theoretical work. In my view its doctoral dissertation abstracts are particularly good, genuinely reflecting the research trends and academic standards of 3S technology in China. The journal publishes reviews and prospects, academic papers and research reports, and major scientific and technical news in the field, covering the main areas of surveying and mapping research, especially photogrammetry and remote sensing, geodesy and the Earth's gravity field, engineering surveying, cartography, geodynamics, the Global Positioning System (GPS), geographic information systems (GIS), and graphics and image processing. It also has an English edition, GEO-SPATIAL INFORMATION SCIENCE, a digest of the Chinese edition; full texts can be downloaded from the Wanfang journal database.

(4) 中国图象图形学报 (Journal of Image and Graphics). Founded in 1996 and co-sponsored by the China Society of Image and Graphics, the CAS Institute of Remote Sensing and the CAS Institute of Computing Technology, with academician Xu Guanhua, Minister of Science and Technology, as editor-in-chief. From 2001 the journal has been split into A and B editions.
moorFLPI-2 – desktop setup.

moorFLPI-2 – Capturing high resolution blood flow images in real time

The moorFLPI-2 blood flow imager uses the laser speckle contrast technique to deliver real-time, high-resolution blood flow images, providing outstanding performance in a wide range of pre-clinical and clinical research applications.

User-friendly features promote smooth workflow and enable the high throughput required to scan cohorts quickly and accurately. Advanced analysis functions help you to draw sound conclusions from your blood flow images. Product highlights include:

• Non-contact imaging technique.
• Blood flow videos of any exposed tissue (skin or surgically exposed tissues).
• Best spatial resolution of 10 microns per pixel to reveal detailed morphology.
• Real-time video frame rates to capture dynamic changes in flow.
• Add multiple "regions of interest" to assess and quantify blood flow changes in real time and post measurement. Area of ROIs calculated automatically.
• Image areas range from 5.6 mm x 7.5 mm to 15 cm x 20 cm with motorised zoom and auto focus, offering flexible and convenient imaging.
• Colour photo image matches blood flow images precisely to aid identification of features.
• Compact design with flexible stand options for clinic or laboratory, for convenient use in various experimental and clinical research settings.

The laser speckle technique

The full-field laser perfusion technique, also known as laser speckle contrast imaging, exploits the random speckle pattern that is generated when tissue is illuminated by laser light and changes when blood cells move in the sampled tissue. When blood flow is high, the changing pattern becomes more blurred and the contrast in that region is reduced. Therefore high flow is related to low contrast and, conversely, low flow is associated with high contrast. The contrast image is processed to produce flux values that are colour-coded to correlate with blood flow in the tissue.

The strength of the technique is twofold: video frame rate blood flow imaging enables the tracking of fast transients, coupled with very high spatial resolution. It is possible to view pulsation in finger tips and spatial variations due to deep breath, occlusion, reactive hyperaemia and other stimuli.

Technical advantages of Moor laser speckle are clear and include the use of motorised zoom and auto focus, enabling the flexibility to image both small and large areas. Provision of spatial and temporal measurement modes allows optimum selection between image frame rate and spatial resolution. The colour photo provided by the measurement camera simplifies identification of key features. A triggering function enables control and synchronisation of the moorFLPI-2 with other systems.

Applications and software

Established pre-clinical and clinical research applications are wide-ranging; examples include:

• Neuroscience – spreading cortical depression, stroke model assessment.
• Dermatology – inflammation and irritancy research.
• Oncology – experimental tumour growth, angiogenesis.
• Pharmacology – local and systemic responses.
• Plastic surgery – research into flap perfusion during surgery and post operatively.
• Chemical toxicology – inflammation and irritancy (e.g. response to intradermal capsaicin).
• Intraoperative measurements – limb and visceral ischaemia and reperfusion.
• Cardiovascular research – e.g. endothelial function assessed with iontophoresis.
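The contrast-to-flux processing described above under "The laser speckle technique" can be illustrated with a short sketch. The following Python fragment is a generic illustration of spatial speckle contrast analysis, not Moor's proprietary processing; the 7-pixel window and the 1/K² flux index are common textbook choices, assumed here for demonstration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast_map(raw, window=7):
    """Spatial laser speckle contrast K = sigma/mean over a sliding window.
    High flow blurs the speckle pattern -> low K; low flow -> high K."""
    img = raw.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # avoid negative rounding
    return np.sqrt(var) / np.maximum(mean, 1e-12)

def flux_map(raw, window=7):
    """First-order flow index ~ 1/K^2, colour-coded for display in practice."""
    k = speckle_contrast_map(raw, window)
    return 1.0 / np.maximum(k * k, 1e-12)

# Example with a synthetic frame: higher "flow" regions show lower contrast.
frame = np.random.poisson(lam=100, size=(256, 256)).astype(np.float64)
print(flux_map(frame).mean())
```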
Dedicated software for measurement and analysis is provided to take advantage of the high acquisition speeds and spatial resolution provided by moorFLPI-2. moorFLPI-2 software has been refined over a number of years in response to our customer feedback and is fully featured for setup, measurement, analysis, reporting and exporting.

Setup offers full flexibility to choose scan size and temporal resolution, enabling you to collect data that is appropriate to your measurement, be it just a single image or a longer blood flow video. Zoom and auto focus are set easily using either the front panel buttons or via software control.

Measurement starts with a simple click. Mark events on the blood flow video and see changes in flow at pre-defined regions of interest (ROIs) that update graphically and histographically. Scan and ROI areas are calculated automatically.

Analysis post measurement includes the ability to replay and re-analyse data offline by re-positioning regions of interest. This allows maximum utilisation of the data set. Five different colour palettes, and optional functions such as smoothing, image XY shift (to counteract movement mid measurement) and variable speed playback of blood flow videos, enable you to present your data clearly. Report templates can be custom defined to produce the analysis and reporting that is needed for your studies, from standard statistical routines to FFT analysis.

Export data to AVI, Matlab and graphical forms to extend the use of data for further analysis or presentations.

Your research

Please contact Moor or your nearest approved distributor to discuss your specific application. Ask to see the new system in action and evaluate it at your own facility. Current publications using Moor laser speckle are wide-ranging and updated online at .

Forearm – wheal, flare and axon reflex due to capsaicin injection. Scan area approximately 15 x 20 cm.

Cerebral blood flow imaging – MCAO model showing baseline blood flow image. 10 micron resolution, intact skull. ROI analysis shows flow reduction.

Specifications

Quality control: Moor Instruments is certified to ISO 13485:2003. The moorFLPI-2™ is CE certified.
Measured parameter: Flux (tissue perfusion).
Range: 0 – 5000 PU.
Accuracy: ±10% compared to a standard LDI measured from a motility standard.
Precision: ±3% of measured value.
Image size measurement accuracy: ±0.5 mm or ±5%, whichever is greater.
Distance measurement accuracy: ±5%.
Software: Windows™-based control, image processing and analysis.
External trigger input/output: 1 x trigger input, 1 x trigger output, 0-5 V (TTL) level transition range. All outputs/inputs have independent user-selectable scaling.
Zoom and auto focus: controlled by motorised actuators via PC software or push buttons on the scan head.
Aiming beam: alignment of aiming beams at 25 cm.

About Moor Instruments

Moor Instruments, established in 1987, is a world leader in the design, manufacture and distribution of monitoring and imaging systems for micro-vascular assessments. We are proud now to include tissue oxygenation assessments within this portfolio. Firsthand experience of laser Doppler research and development within Moor dates back to 1978, and with this we have the breadth of knowledge to help with your application and the enthusiasm to try and find answers to any of your questions. By giving priority to performance, quality and service, we strive to ensure the highest levels of customer satisfaction. Our dedicated design team is involved with a number of development projects for other partners and manufacturers. Whatever your needs, as a researcher, clinician or manufacturer, Moor will work harder for you.

moorFLPI-2™ with optional panel PC and MS3b mobile stand.
A sample English essay on imaginative ideas

In the realm of imagination, where boundaries dissolve and creativity flourishes, the fusion of technology and art unveils a myriad of groundbreaking possibilities. Within this liminal space, where the tangible and intangible converge, extraordinary ideas emerge, transforming the mundane into the extraordinary.

One such concept that has captured the imagination of visionaries is the notion of interactive art installations. These mesmerizing creations blur the lines between spectator and participant, inviting individuals to engage not as mere observers but as active collaborators in the artistic experience. Imagine an interactive sculpture that responds to human touch, its form and colors morphing in real time based on the visitor's movements. Such an installation would not only showcase the artist's technical prowess but also foster a profound connection between the artwork and the audience, creating an unforgettable and deeply immersive experience.

Artificial intelligence (AI) also plays a pivotal role in unlocking the potential of imaginative art. AI algorithms can analyze vast datasets of images and sounds, extracting patterns and insights that would be inaccessible to human artists alone. This collaboration between human creativity and machine intelligence has given rise to mesmerizing works of generative art, where unique and visually stunning creations are produced through algorithms trained on existing datasets.

The convergence of technology and art has also led to the development of immersive virtual reality (VR) experiences that transport audiences into surreal and otherworldly realms. VR art challenges traditional notions of spatial and temporal boundaries, allowing artists to create breathtaking environments that defy the laws of physics and immerse viewers in a deeply sensory and interactive experience. In these virtual worlds, visitors can explore interactive sculptures, engage with virtual characters, and even contribute to the artwork itself, shaping its evolution through their actions.

Furthermore, the intersection of art and science has sparked a movement known as "bioart," which explores the fascinating interplay between living organisms and artistic expression. Artists working in this field create thought-provoking installations that delve into the depths of biology, genetics, and ecology. Bioart encourages us to reconsider our relationship with the natural world, fostering an appreciation for the intricate tapestry of life and the delicate balance of ecosystems.
Spatial and temporal distribution of polycyclic aromatic hydrocarbons (PAHs) in sediments from Daya Bay, South China

Wen Yan a,*, Jisong Chi b, Zhiyuan Wang a, Weixia Huang a, Gan Zhang c

a CAS Key Laboratory of Marginal Sea Geology, South China Sea Institute of Oceanology, Chinese Academy of Sciences, 164 West Xingang Road, Haizhu, Guangzhou 510301, China
b Guiyang Environmental Monitoring Central Station, Guiyang 550002, China
c State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, CAS, Guangzhou 510640, China

A survey of sediments from Daya Bay serves as a baseline study for levels, distributions and possible sources of PAHs in surface sediments and in two sediment cores.

Article history: Received 14 October 2008; received in revised form 22 January 2009; accepted 25 January 2009.
Keywords: PAHs; Sediment; Daya Bay
doi:10.1016/j.envpol.2009.01.023

Abstract: The spatial and temporal distribution of polycyclic aromatic hydrocarbons (PAHs) has been investigated in Daya Bay, China. The total concentration of the 16 USEPA priority PAHs in surface sediments ranged from 42.5 to 158.2 ng/g dry weight with a mean concentration of 126.2 ng/g. The spatial distribution of PAHs was site-specific, and combustion processes were the main source of PAHs in the surface sediments. Total 16 priority PAH concentrations in cores 8 and 10 ranged from 77.4 to 305.7 ng/g and from 118.1 to 319.9 ng/g, respectively. The variation of the 16 PAH concentrations in both cores followed the economic development in China very well and was also influenced by input pathways. Some of the PAHs were petrogenic in core 8, while a pyrolytic source was dominant in core 10. In addition, pyrolytic PAHs in both cores were mainly from coal and/or grass and wood combustion.

1. Introduction

Polycyclic aromatic hydrocarbons (PAHs) are an important class of persistent organic pollutants (POPs). They are primarily derived from incomplete combustion of fossil fuels and burning of vegetation and other organic materials (Yunker et al., 2002). Derivatives from crude oil seepage and diagenesis of organic matter in anoxic sediments are also important PAH sources (Lima et al., 2005). PAHs are introduced into the environment via various routes and are ubiquitous environmental pollutants. They have been detected widely in various environmental media, such as organisms (Liang et al., 2007), the atmosphere (Qi et al., 2001), water (Zhou and Maskaoui, 2003), soils (Mielke et al., 2001), and sediments (McCready et al., 2000). Because of their potentially hazardous properties, persistence and prevalence in the environment, efforts have been made to reduce PAH emissions in many countries; for example, 16 PAHs have been listed as priority control pollutants by the Environmental Protection Agency of the USA (Manoli et al., 2000).

Marine sediment is one of the most important reservoirs of environmental pollutants (Voorspoels et al., 2004; Yang et al., 2005). Contaminated sediments can directly affect bottom-dwelling organisms. Moreover, once disturbed, the sediment can be resuspended, and the contaminants would reenter the marine aquatic environment and circulate in ecosystems, resulting in secondary contamination (Zeng and Venkatesan, 1999). Thus, contaminated sediments represent a continuing source of toxic substances in aquatic environments that may affect wildlife and humans via the food chain (Kannan et al., 2005). Therefore, the distribution and fate of contaminants such as heavy metals, PAHs, OCPs, and PCBs in coastal sediments have provoked considerable concern and have been largely documented.

In recent decades, the Pearl River Delta, located in Southern China, has become one of the most rapidly developing regions in China. The rapid economic development, however, has caused serious pollution problems, which have adversely affected the air (Qi et al., 2001) and water quality (Yang et al., 1997) in the region. The persistent organic pollutants in the environment of the Pearl River Delta have been well documented (Fu et al., 2003). As one of the largest bays in the South China Sea, Daya Bay is located in this region and is one of the main aquacultural areas of Guangdong Province thanks to its rich biological resources. In order to understand and assess the impact of contaminants on the aquatic ecosystem of Daya Bay, constant efforts are needed to determine the distribution and fate of possible pollutants in the Bay. Several studies have analyzed levels of POPs in the water, surface sediment, and aquatic organisms of Daya Bay (Zhou et al., 2001; Zhou and Maskaoui, 2003). A previous survey indicated that the mean concentration of total PAHs in surface sediment was 481 ng/g (Zhou and Maskaoui, 2003). However, that survey collected samples only from surface sediment, so the information available for wholly assessing the PAH contamination of Daya Bay sediments was limited. The present study carried out a survey of sediments of Daya Bay to determine the concentration levels and distribution of selected PAHs. Fourteen surface sediment samples were collected to demonstrate the spatial distribution of PAHs in Daya Bay, and two sedimentary cores were collected to examine the temporal distribution of PAHs and to evaluate and reconstruct historical records of PAHs in recent decades.

* Corresponding author. Tel.: +86 20 89023150; fax: +86 20 89023121. E-mail address: wyan@ (W. Yan).

2. Materials and methods

2.1. Chemicals and reagents

A standard solution of the 16 USEPA priority PAHs [naphthalene (Nap), acenaphthylene (Acy), acenaphthene (Ace), fluorene (Fl), phenanthrene (P), anthracene (Ant), fluoranthene (Flu), pyrene (Pyr), benzo[a]anthracene (BaA), chrysene (Chr), benzo[b]fluoranthene (BbF), benzo[k]fluoranthene (BkF), benzo[a]pyrene (BaP), indeno[1,2,3-c,d]pyrene (InP), dibenzo[a,h]anthracene (DBA), and benzo[g,h,i]perylene (BgP)], and a mixture solution of the surrogate standards, perdeuterated PAHs (naphthalene-d8, acenaphthene-d10, phenanthrene-d10, chrysene-d12, and perylene-d12), were purchased from Ultra Scientific, Inc. (North Kingstown, RI, USA). Neat (99%) hexamethylbenzene was obtained from Aldrich Chemical Company (Milwaukee, WI, USA). A standard reference material (SRM 1941) was purchased from the National Institute of Standards and Technology (NIST, Gaithersburg, MD, USA). All solvents used for sample processing and analyses (dichloromethane, acetone, hexane and methanol) were analytical grade and redistilled twice before use. The silica gel (80-100 mesh) and alumina (120-200 mesh) were extracted for 72 h in a Soxhlet apparatus, activated in an oven at 150 °C and 180 °C for 12 h, respectively, and then deactivated with distilled water at a ratio of 3% (m/m). Deionised water was taken from a Milli-Q system.

2.2. Environmental sample collection

Surface sediment samples were taken with a grab sampler in November of
2003, and the locations of the sampling stations are shown in Fig. 1. The top 1-cm layers were carefully removed with a stainless steel spoon for subsequent analysis. Two sediment cores of about 40 cm were also collected, at sites 8 and 10, at the same time, and then sliced at 1-cm intervals. A stainless steel static gravity corer (8 cm i.d.) was employed to minimize the disturbance of the surface sediment layer. All the samples were packed into aluminum boxes and immediately stored at -20 °C until required.

2.3. Measurement of TOC of the sediments

Freeze-dried samples were ground, and then carbonate was removed by treatment of the sample with 10% (v/v) HCl. After the samples were dried at 60 °C in an oven, the total organic carbon (TOC) content of the sediment was measured by an Elementar Vario EL III elemental analyzer (Hanau, Germany).

2.4. Dating of the sedimentary cores

The procedure of sediment dating has been described in detail elsewhere (Zhang et al., 2002). In summary, the 210Pb activities in sediment subsamples were determined by analysis of the α-radioactivity of its decay product 210Po, on the assumption that the two are in equilibrium. The Po was extracted, purified, and self-plated onto silver disks at 75-80 °C in 0.5 M HCl, with 209Po used as yield monitor and tracer in quantification. Counting was conducted by computerized multi-channel spectrometry with gold-silicon surface barrier detectors. Supported 210Po was obtained by indirectly determining the α-activity of the supporting parent 226Ra, which was carried by coprecipitated BaSO4. A constant activity model of the 210Pb-dating method was applied to give average sedimentation rates for the sedimentary cores (Allen et al., 1993).

Fig. 1. Map of Daya Bay showing the locations where samples were taken.

Table 1. Total 16 EPA priority PAHs.
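The constant-activity 210Pb model itself is not spelled out in the text, so the following is only a generic sketch of how an average sedimentation rate is commonly extracted from an excess-210Pb profile, assuming exponential decay of unsupported activity with depth under steady accumulation. The profile numbers are invented for illustration and are not the Daya Bay data:

```python
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3   # 210Pb decay constant [1/yr] (22.3 yr half-life)

def mean_sedimentation_rate(depth_cm, excess_activity):
    """Fit ln(excess 210Pb activity) vs depth; under steady accumulation the
    slope is -lambda/s, so the mean sedimentation rate is s = -lambda/slope."""
    slope, _ = np.polyfit(np.asarray(depth_cm, dtype=float),
                          np.log(np.asarray(excess_activity, dtype=float)), 1)
    return -LAMBDA_PB210 / slope   # [cm/yr]

# Illustrative (made-up) excess-activity profile:
depths = [1, 5, 10, 15, 20]          # depth below surface [cm]
activity = [50, 35, 22, 14, 9]       # excess 210Pb activity [arbitrary units]
print(f"~{mean_sedimentation_rate(depths, activity):.2f} cm/yr")
```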
2.5. Extraction procedure

Sediment samples were homogenized and freeze-dried before extraction. About 5 g of dried and homogenized sediment was extracted for 72 h in a Soxhlet apparatus with 150 ml dichloromethane. A mixture of deuterated PAH compounds (naphthalene-d8, acenaphthene-d10, phenanthrene-d10, chrysene-d12, and perylene-d12) was added to all the samples prior to extraction as recovery surrogate standards. Activated copper granules were added to the collection flask to remove elemental sulfur. After extraction, the extract was concentrated to a volume of about 2-3 ml and solvent-exchanged into 10 ml n-hexane, which was further reduced to approximately 1-2 ml with a rotary vacuum evaporator. A 1:2 alumina/silica gel column was used to clean up and fractionate the extract. The first fraction, containing aliphatic hydrocarbons, was eluted with 15 ml of hexane. The second fraction, containing PAHs, was collected by eluting 60 ml of hexane/dichloromethane (1:1). The PAH fraction was then concentrated to 1 ml by rotary vacuum evaporator and further to 0.2 ml under a gentle stream of purified nitrogen. A known quantity of hexamethylbenzene was added as an internal standard prior to gas chromatography-mass spectrometry (GC-MS) analysis.

2.6. GC-MS analysis

GC-MS analysis was carried out on a Hewlett-Packard 5890 series gas chromatograph/5972 mass spectrometer in the selective ion monitoring (SIM) mode or in scanning mode. An HP-5 fused silica capillary column (50 m, 0.32 mm, 0.17 µm) was used for separation. Helium was used as carrier gas at a flow rate of 2 ml/min, with a head pressure of 12.5 psi and a linear velocity of 39.2 cm/s at 290 °C. The injection and interface temperatures were maintained at 290 °C. The oven temperature was initially isothermal at 80 °C for 5 min, then ramped from 80 to 290 °C at a rate of 3 °C/min, and then kept isothermal at 290 °C for 30 min. A 1 µl sample was manually injected into the splitless injector with a 1 min solvent delay. Mass spectra were acquired in electron impact (EI) mode at 70 eV. The mass scanning ranged between m/z 50 and m/z 500.

2.7. Quality control and quality assurance

All analytical data were subject to strict quality control. Method blanks (solvent), spiked blanks (standards spiked into solvent), sample duplicates, and a National Institute of Standards and Technology (NIST) standard reference material (SRM 1941) sample were processed. PAHs were quantified using the internal calibration method based on five-point calibration curves for individual compounds. The reported results were corrected with the recoveries of the surrogate standards. The surrogate recoveries were 53.26 ± 7.47% for naphthalene-d8, 75.9 ± 10.66% for acenaphthene-d10, 89.42 ± 8.78% for phenanthrene-d10, 96.75 ± 9.61% for chrysene-d12, and 89.56 ± 12.97% for perylene-d12 with the surface sediment samples; 64.12 ± 15.6%, 73.1 ± 16.8%, 90.5 ± 16.9%, 87.7 ± 21.56%, and 95.76 ± 15.48%, respectively, with the sediment core 8 samples; and 43.08 ± 18.56%, 67.89 ± 19.25%, 102.5 ± 6.19%, 94.58 ± 10.87%, and 85.95 ± 13.78%, respectively, with the sediment core 10 samples. Recoveries of all the PAHs in the NIST 1941 sample were between 80 and 120% of the certified values. Nominal detection limits ranged from 0.2 to 2.0 ng/g.

3. Results and discussion

The results of TOC levels and sedimentation rates are presented in the Supplementary information. The following interpretation and discussion focus on the distribution and sources of PAHs in the surface sediments and sediment cores.

Table 2. Comparison of PAH concentrations in surface sediments from different estuaries and bays.
Fig. 2. Distribution of 2-, 3-, 4-, 5- and 6-ring PAHs in the surficial sediments from Daya Bay.
Fig. 3. Plot of the isomeric ratios LWM/HWM vs MP/P.

3.1. Total concentrations and distribution of PAHs in the surface sediments in Daya Bay

Surface sediments can reflect the current sediment contaminant status. The concentrations of the polycyclic aromatic hydrocarbons (PAHs) in surface sediments are summarized in Table 1. As shown in Table 1, the total concentration of the 16 USEPA priority PAHs in surface sediments ranged from 42.5 to 158.2 ng/g dry weight, with a mean concentration of 126.2 ng/g. The total concentrations of PAHs at most stations are of the same order of magnitude, except at stations 7 and 14, where the lowest PAH concentrations were found. The spatial distribution of PAHs in the surface sediment was site-specific. Station 7 is located in the middle east of Daya Bay and far away from the coast, whilst the stations with relatively high concentrations of PAHs are located in the aquaculture area and near densely polluted areas (stations 3, 8 and 13), or close to the Daya nuclear power station (stations 5 and 6) or to Yihe harbour (station 10). This indicated that the amount of PAHs detected is possibly related to urban runoff and sewage discharges. In addition, the TOC is also one important factor that controls the levels of PAHs in the sediments. The relatively low concentrations of PAHs at stations 7 and 14 could also be related to the low TOC there. A linear regression analysis showed that the total concentrations of PAHs in the surface sediments were correlated with the sediment organic carbon contents with p = 0.56. If stations 1 and 9 are excluded, a significantly positive correlation (p = 0.87) between PAH concentrations and TOC is obtained. The relatively high concentrations of PAHs at stations 1 and 9 despite lower TOC might suggest that other factors, such as non-point sources, affected the levels of PAHs there. Compared with the previous study in Daya Bay, when the total concentrations of the 16 PAHs in sediment ranged from 115 ng/g to 1134 ng/g with a mean concentration of 481 ng/g (Zhou and Maskaoui, 2003), the decreased levels suggest possibly decreased inputs of PAHs from sources such as urban runoff and sewage discharges.

A comparison of PAH concentrations in surface sediments collected from different estuaries and bays is given in Table 2. The PAH concentrations in surface sediment from Daya Bay in this study are similar to those detected in Kyeonggi Bay, Korea (Kim et al., 1999), the Northwestern Black Sea (Maldonado et al., 1999), Shenzhen Bay (Connell et al., 1998), the South China Sea (Yang, 2000), and Todos Santos Bay, Mexico (Macias-Zamora et al., 2002), but lower than others. To place the current concentrations of PAHs into an ecological perspective, we compared threshold effect concentrations (Long et al., 1995) with the surface sediment concentrations determined for the Bay. Concentrations of total PAHs in surface sediments of Daya Bay were far below the threshold concentrations, suggesting that the probability of negative toxic effects caused by PAHs alone would be low.

3.2. PAH composition and sources in the surface sediments

The composition pattern of PAHs by ring size in the surface sediments is shown in Fig. 2. As shown in Fig. 2, 4-ring PAHs are the most abundant, which is consistent with previous observations (Zhou and Maskaoui, 2003). In addition, 5-ring PAHs take second place. Usually, high-molecular-weight PAHs predominate in sediment samples. A higher concentration of high-molecular-weight PAHs than of low-molecular-weight PAHs has been commonly observed in sediments from river and marine environments (Magi et al., 2002; Guo et al., 2007a).

Fig. 4. Plot of isomeric ratios BaA/228, InP/(InP+BgP), and Ant/178 vs Flu/(Flu+Pyr).
Table 3. Total 16 EPA priority PAHs.

Based on the characteristics of PAH composition and distribution patterns, the sources of anthropogenic PAHs, which are formed mainly via combustion processes and release of uncombusted petroleum products, can be distinguished by ratios of individual PAH compounds. Of the anthropogenic PAHs, the lower-molecular-weight parent PAHs and alkylated PAHs have both petrogenic and combustion (low-temperature pyrolysis) sources, whereas the high-molecular-weight parent PAHs have a predominantly pyrolytic source (Mai et al., 2002). Therefore, lower LWM/HWM (low-molecular-weight parent PAHs (2- and 3-ring PAHs) / high-molecular-weight parent PAHs (4-, 5- and 6-ring PAHs except perylene)) and MP/P (methylphenanthrene/phenanthrene) ratios are observed for a pyrolytic source. In general, a ratio of LWM/HWM <1 suggests pollution of pyrolytic origin (Magi et al., 2002; Soclo et al., 2000). An MP/P ratio less than 1 is generally found in combustion mixtures, and a ratio between 2 and 6 is present in unburned fossil fuel mixtures (Zakaria et al., 2002; Youngblood and Blumer, 1975). Besides the ratios of LWM/HWM and MP/P, PAH isomer pair ratios, such as Ant/178, Flu/(Flu+Pyr), BaA/228, and InP/BgP, have been developed for interpreting PAH composition and inferring possible sources (Katsoyiannis et al., 2007; Brändli et al., 2007; Yunker et al., 2002). An Ant/178 ratio <0.1 is usually taken as an indication of petroleum, while a ratio >0.1 indicates a dominance of combustion; a Flu/(Flu+Pyr) ratio <0.4 is attributed to a petrogenic source, a ratio >0.5 suggests wood and coal combustion, while a value between 0.4 and 0.5 is characteristic of petroleum combustion; a BaA/228 ratio <0.2 implies petroleum, from 0.2 to 0.35 indicates either petroleum or combustion, and >0.35 means pyrolytic origin; an InP/BgP ratio less than 0.2 corresponds to petroleum pollution, higher than 0.5 to grass, wood or coal combustion, and between 0.2 and 0.5 to petroleum combustion (Brändli et al., 2007; Yunker et al., 2002).

In order to survey the sources of PAHs in the surface sediments from Daya Bay, LWM/HWM against MP/P (Fig. 3), and BaA/228, InP/BgP, and Ant/178 against Flu/(Flu+Pyr) were plotted (Fig. 4). As shown in Fig. 3, the ratios of LWM/HWM and MP/P were below 1, suggesting a pyrolytic origin. This kind of source is confirmed by three other parameters, BaA/228 (BaA/(BaA+Chr)), Flu/(Flu+Pyr) and InP/(InP+BgP), which ranged from 0.39 to 0.46, from 0.56 to 0.69, and from 0.56 to 0.66, respectively (Fig. 4). However, some Ant/178 ratios were less than 0.1, suggesting that the surface sediments were also contaminated by petrogenic PAHs. Normally, pyrolytic PAHs are mainly from coal, grass and wood combustion and/or petroleum combustion. As shown in Fig. 4, the ratios of Flu/(Flu+Pyr) and InP/(InP+BgP) were all higher than 0.5, indicating biomass and coal combustion sources of the pyrolytic PAHs.

Fig. 6. LWM/HWM, MP/P, BaA/228, InP/(InP+BgP), Ant/178 and Flu/(Flu+Pyr) profiles for source identification in both cores 8 and 10. I: combustion; II: petroleum; III: petroleum combustion; IV: grass, wood & coal combustion; V: mixed.
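The isomer-ratio criteria summarized above lend themselves to a direct implementation. The following Python helper encodes exactly those thresholds (Yunker et al., 2002; Brändli et al., 2007) and returns a per-ratio verdict; the handling of values falling exactly on a threshold is a choice made here for illustration:

```python
def classify_pah_source(ant_178=None, flu_flupyr=None,
                        baa_228=None, inp_inpbgp=None):
    """Classify PAH source per diagnostic ratio, using the thresholds given
    in the text. Any ratio left as None is skipped."""
    out = {}
    if ant_178 is not None:
        out["Ant/178"] = "petroleum" if ant_178 < 0.1 else "combustion"
    if flu_flupyr is not None:
        out["Flu/(Flu+Pyr)"] = ("petrogenic" if flu_flupyr < 0.4 else
                                "petroleum combustion" if flu_flupyr <= 0.5 else
                                "grass/wood/coal combustion")
    if baa_228 is not None:
        out["BaA/228"] = ("petroleum" if baa_228 < 0.2 else
                          "petroleum or combustion" if baa_228 <= 0.35 else
                          "pyrolytic")
    if inp_inpbgp is not None:
        out["InP/(InP+BgP)"] = ("petroleum" if inp_inpbgp < 0.2 else
                                "petroleum combustion" if inp_inpbgp <= 0.5 else
                                "grass/wood/coal combustion")
    return out

# Typical Daya Bay surface-sediment values from the text:
print(classify_pah_source(flu_flupyr=0.6, baa_228=0.42, inp_inpbgp=0.6))
```

Applied to the mid-range surface-sediment ratios quoted above, the helper returns the same verdict as the paper: a pyrolytic origin dominated by biomass and coal combustion.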
Besides anthropogenic PAHs, the natural PAH perylene has also been found widely in a variety of marine, lacustrine and riverine sediments (Luo et al., 2006; Chen et al., 2006; Liu et al., 2008). Perylene is a diagenetic product derived from its natural precursors during early diagenesis, while only small amounts of perylene are produced during combustion (Silliman et al., 1998; Luo et al., 2006). Relative concentrations of perylene higher than 10% of the total penta-aromatic isomers suggest a probable diagenetic input; otherwise a probable pyrolytic origin of the compound is indicated (Baumard et al., 1998b). In the present study, perylene occurred at elevated levels (10.14-82.68 ng/g) and was the most predominant component of PAHs in our study area. The percentage of perylene over the penta-aromatic isomers was from 41% to 79%, indicating a diagenetic input of perylene in the sediments.

3.3. Concentrations and time trends of PAHs in the sediment profiles

Analytical results of PAH concentrations for the cores 8 and 10 sediments are summarized in Table 3. Total 16 priority PAH concentrations in the core 8 sediments ranged from 77.4 to 305.7 ng/g with a mean value of 92.1 ng/g, while those in the core 10 sediments ranged from 118.1 to 319.9 ng/g with a mean value of 210.2 ng/g. In terms of individual PAH composition, the compound Phe is the most abundant in both cores. Other compounds, such as Nap, Flu and BbF, are the next most abundant. Perylene, a natural PAH, is also very abundant in cores 8 and 10: its concentrations ranged from 27.7 to 51.0 ng/g with a mean value of 41.2 ng/g in the core 8 sediments, and from 29.5 to 92.2 ng/g with a mean value of 67.4 ng/g in the core 10 sediments.

Fig. 5. Down-core concentration variations of total PAHs, 2,3-ring PAHs, 4-ring PAHs, 5,6-ring PAHs, and perylene in both cores 8 and 10.

Fig. 5 shows the down-core concentration variations of total PAHs, 2,3-ring, 4-ring, 5,6-ring (excluding perylene), and perylene in both sediment cores collected from Daya Bay. In core 8, the total PAH concentration experienced two obvious peak periods, in the 1950s and 1990s, respectively. From the early 1960s to the mid-1980s, the total PAH concentrations fluctuated widely. In core 10, the concentrations of PAHs were generally relatively constant, with a sharp rebound in the surficial slice. It is noticeable that an obvious peak period was observed in the late 1980s. In the meantime, two high concentrations of total PAHs were identified in the 1950s and 1960s.

The variation of the 16 PAH concentrations in cores 8 and 10 since the founding of the People's Republic of China in 1949 followed the economic development in China very well. PAHs are good indicators of anthropogenic activities. Researchers have reported that the concentration of PAHs is proportional to the socioeconomic status of the country or region from which the samples were taken (Liu et al., 2005; Guo et al., 2007b). The first peak period in the 1950s may correspond to the rapid economic development during the first Five-Year Plan (1951-1955) after the founding of the People's Republic of China, and the second peak period in the late 1980s or 1990s may reflect the rapid economic growth and urbanization since the economic reform in the country in the late 1970s (Liu et al., 2005). The fluctuation of the total PAH concentrations in the 1960s and 1970s could be attributed to the social turbulence and confusion, which led to fluctuations in the country's agricultural and industrial production. It is also noticeable that the sedimentary record of PAHs in sediment core 8 fluctuates more than that in sediment core 10, which indicates that the localization of the input sources played a very important role in PAH contamination. As stated above, station 8 is close to the coast and near a densely populated area, while station 10 is located far away from the coast. So it is considered that much more sewage effluent and surface runoff were input into the area around station 8 than around station 10. Meanwhile, it is observed that the variation of the TOC content in sediment core 10 corresponded to the vertical distribution of the PAH contamination level. This implies that the properties of the sediment, such as organic carbon, would also influence the vertical distribution and concentration of PAHs in sediment core 10.

3.4. Source of PAHs in sediment cores

Fig. 6 illustrates profiles of some PAH indicators, including LWM/HWM, MP/P, Ant/178, Flu/(Flu+Pyr), BaA/228 and InP/BgP, in the sediment cores collected from Daya Bay. In core 8, the ratios of LWM/HWM and Ant/178 were 0.66-1.96 and 0.06-0.20, respectively, indicating that some of the PAHs were petrogenic (Magi et al., 2002; Soclo et al., 2000; Yunker et al., 2002). However, most values of MP/P in the core were below 1, suggesting that a pyrolytic source was dominant (Zakaria et al., 2002; Youngblood and Blumer, 1975). In core 10, the ratios of LWM/HWM and MP/P were less than 1, and most of the Ant/178 values were less than 0.1, suggesting that a pyrolytic source was dominant at this site (Magi et al., 2002; Soclo et al., 2000; Zakaria et al., 2002; Youngblood and Blumer, 1975). In addition, as shown in Fig. 6, most of the values of Flu/(Flu+Pyr), BaA/228 and InP/(InP+BgP) were higher than 0.5, 0.35 and 0.5, respectively, implying that the pyrolytic PAHs in both cores were mainly from coal and/or grass and wood combustion (Katsoyiannis et al., 2007; Brändli et al., 2007; Yunker et al., 2002).

Acknowledgements

The research work was financially supported by the Science Foundation of Guangdong Province (Grant No. 06023662) and Planning Projects of Guangdong Province (Grant No. 2003C32804 and No. 2006B36601005). The authors wish to thank Linli Guo, Guoqing Liu, Xiang Liu, Shichun Zhou and Kechang Li for their kind help in sample collection, treatment and GC-MS analysis.

Appendix. Supplementary data

Supplementary data associated with this article can be found in the online version, at doi:10.1016/j.envpol.2009.01.023.

References

Allen, J.R.L., Rae, J.R., Longworth, G., Hasler, S.E., Ivanovich, H., 1993. A comparison of the 210Pb dating technique with three other independent dating methods in an oxic estuarine salt-marsh sequence. Estuaries 16, 670-677.
Baumard, P., Budzinski, H., Garrigues, P., 1998a. PAHs in Arcachon Bay, France: origin and biomonitoring with caged organisms. Marine Pollution Bulletin 36, 577-586.
Baumard, P., Budzinski, H., Mchin, Q., Garrigues, P., Burgeot, T., Bellocq, J., 1998b. Origin and bioavailability of PAHs in the Mediterranean Sea from mussel and sediment records. Estuarine, Coastal and Shelf Science 47, 77-90.
Brändli, M., Bucheli, T.D., Kupper, T., Mayer, J., Stadelman, F.X., Taradellas, J., 2007. Fate of PCBs, PAHs and their source characteristic ratios during composting and digestion of source-separated organic waste in full-scale plants. Environmental Pollution 148, 520-528.
Chen, S., Luo, X., Mai, B., Sheng, G., Fu, J., Zeng, E.Y., 2006. Distribution and mass inventories of polycyclic aromatic hydrocarbons and organochlorine pesticides in sediments of the Pearl River Estuary and the Northern South China Sea. Environmental Science and Technology 40, 709-714.
Connell, D.W., Wu, R.S.S., Richardson, B.J., 1998. Occurrence of persistent organic contaminants and related substances in Hong Kong marine areas: an overview. Marine Pollution Bulletin 36, 376-384.
Fu, J., Mai, B., Sheng, G., Zhang, G., Wang, X., Peng, P., Xiao, X., Ran, Y., Cheng, F., Peng, X., Wang, Z., Tang, U.W., 2003. Persistent organic pollutants in environment of the Pearl River Delta, China: an overview. Chemosphere 52, 1411-1422.
Fu, J., Wang, Z., Mai, B., Kang, Y., 2001. Field monitoring of toxic organic pollution in the sediments of Pearl River Estuary and its tributaries. Water Science and Technology 43, 83-89.
Guo, W., He, M., Yang, Z., Lin, C., Quan, X., Wang, H., 2007a. Distribution of polycyclic aromatic hydrocarbons in water, suspended particulate matter and sediment from Daliao River watershed, China. Chemosphere 68, 93-104.
Guo, Z., Lin, T., Zhang, G., Zheng, M., Zhang, Z., Hao, Y., Fang, M., 2007b. The sedimentary fluxes of polycyclic aromatic hydrocarbons in the Yangtze River
Journal of Shenyang Jianzhu University (Natural Science), Sep. 2023, Vol. 39, No. 5

Received: 2022-11-23. Supported by the National Natural Science Foundation of China (51974189).
About the first author: JIA Shilong (1976-), male, associate professor, mainly engaged in research on structural safety and construction management.
Article ID: 2095-1922(2023)05-0907-08. doi: 10.11717/j.issn:2095-1922.2023.05.16. CLC number: TU352.5; document code: A.

Simulation of Fire Smoke Movement Law in Senior Apartment Based on PyroSim

JIA Shilong 1, QI Wei 1,2, LI Chang 1
(1. School of Civil Engineering, Shenyang Jianzhu University, Shenyang, China, 110168; 2. School of Architecture, Liaoning Vocational University of Technology, Jinzhou, China, 121007)

Abstract: In order to develop evacuation routes scientifically, improve evacuation efficiency, and reduce casualties caused by fire in elderly apartments as far as possible, the fire smoke distribution characteristics and evacuation rules of a high-rise elderly apartment were studied. Taking a high-rise elderly apartment in Suzhou as the research object, PyroSim software was used to conduct a fire simulation and to analyze the changes of temperature, visibility, O2 concentration, CO concentration and smoke layer height after fire. Considering the critical values of human tolerance of five factors, including temperature, visibility, concentrations of O2 and CO, and height of the smoke layer, the available safe egress time is 172 s. Elderly people located on the 2nd floor and above can be evacuated through any of the 3 stairwells before 134 s, through stairwells 2 and 3 between 134 s and 142 s, and only through stairwell 3 after 142 s. The temporal and spatial distribution of fire products in the elderly apartment was obtained, and the evacuation time was calculated, providing a theoretical basis for the fire evacuation of elderly apartments.

Keywords: senior apartment; fire smoke; PyroSim; numerical simulation; evacuation time

Disaster prevention and mitigation is an important aspect of fire research. In recent years, fires in apartments for the elderly have occurred frequently around the world, and the elderly have become the group suffering the most casualties in fires. Understanding the movement of fire smoke in senior apartments helps shorten evacuation times. G. Hadjisophocleous et al. [1] used FDS to simulate a fire in a 10-storey tower and compared the results with test data, concluding that computer simulation can reproduce fire development and smoke spread in high-rise buildings fairly accurately and can provide data to guide smoke extraction and evacuation; this work advanced computer-based numerical fire simulation. T. Tanaka et al. [2] studied smoke characteristics in small-scale shafts and, drawing on extensive experimental data, obtained the law of smoke spread in shafts, providing a theoretical basis for understanding smoke spread in the stairwells of high-rise senior apartments. Wang Weigang et al. [3] used FDS to simulate a fire in a residential building, analyzed the resulting changes in CO concentration, visibility and smoke layer height, and gave the limiting values affecting occupant safety for practical reference. Wang Weiping [4] built a model of a hospital building, used fire simulation software to model the changes in CO, CO2 and O2 concentrations, visibility and temperature under several scenarios, combined this with evacuation simulation software, and proposed that fire products reduce evacuation speed. Liu Haijian [5] took a senior apartment as the research object, used the FDS simulation software for fire modelling, and concluded that temperature is not the key factor to be emphasized for fire safety and rational evacuation: the smoke produced by a fire has a greater impact on the safety of building occupants.

The above studies mostly concern public buildings used by the general population and do not consider the effect of a water sprinkler system; research on fire spread in care facilities for the elderly is lacking [6-8]. Building on this work, this paper takes a high-rise senior apartment in Suzhou as the research object and uses PyroSim to simulate a fire with the sprinkler system in operation. By analyzing the spatio-temporal distribution characteristics of temperature, visibility, O2 and CO concentrations and smoke layer height, a maximum available safe egress time of 172 s is obtained for the elderly occupants. Occupants on the 2nd floor and above can evacuate through any of the three stairwells before 134 s, through stairwells 2 and 3 between 134 s and 142 s, and only through stairwell 3 after 142 s.

1. Senior apartment fire model

1.1 Model construction

The senior apartment has an L-shaped plan with a floor area of 9331.2 m², a storey height of 3.3 m and 8 storeys; the building is 102 m long and 33 m wide. The east-west corridor is 94.2 m long and the north-south corridor 25.3 m long, with a clear evacuation corridor width of 1.9 m. The ground floor has three exits leading outdoors; on each of floors 2-8 there are three stairs and four lifts leading down to the ground floor. To make the fire simulation more realistic, tables and chairs were placed in the dining room, wardrobes and bedside cabinets in the bedrooms, and sofas and tea tables in the lounges. A three-dimensional Revit model was built from the CAD plans of the apartment: first, levels and grids were drawn to fix the vertical and horizontal positioning; then the inner and outer walls, doors, windows, stairs, floor slabs and the other above-ground parts of the main structure were modelled according to the horizontal and vertical axes. The component list is given in Table 1.

Table 1. Model component list
Architecture: infill walls (geometry, structural-layer and finish materials); doors and windows (openings, materials, styles); stairs (stair well, railings, treads, landing dimensions); roof (slope); other (canopy and apron geometry, materials).
Structure: columns (cross-section, structural-layer and finish materials); walls (geometry, structural-layer and finish materials); beams (geometry, structural-layer and finish materials); slabs (thickness, structural-layer and finish materials).
Services and furnishing: fire-protection equipment (positions and numbers of smoke detectors and sprinklers); tables, chairs and other obstacles (geometry, materials).

After modelling, the model was checked and corrected for errors, omissions, clashes, missing elements and offset components. Finally it was exported as an IFC intermediate file and imported into PyroSim to establish the fire model, as shown in Fig. 1.

Fig. 1. The fire model of the senior apartment

The mesh size in PyroSim has a very important influence on the simulation results. As a rule of thumb, the cell size δ is taken between 1/16 and 1/4 of the characteristic fire diameter D*, which is calculated as [9]

D^* = \left( \frac{Q}{\rho_\infty c_p T_\infty \sqrt{g}} \right)^{2/5}    (1)

where Q is the fire heat release rate, kW; ρ∞ is the air density, generally taken as 1.2 kg/m³; c_p is the specific heat of air, taken as 1 kJ/(kg·K); T∞ is the ambient air temperature, taken as 293 K; and g is the gravitational acceleration, 9.81 m/s². The calculation gives D* = 1.388 m. To increase simulation speed while maintaining accuracy, the cell size was taken as D*/5.5, i.e. 0.25 m × 0.25 m × 0.25 m.
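Equation (1) and the mesh-size choice can be verified in a few lines of Python; the script below simply evaluates the published formula with the parameter values given above (kW and kJ units cancel, as in the paper's own calculation) and reports where the chosen 0.25 m cell falls within the recommended D*/16 to D*/4 band:

```python
# Characteristic fire diameter D* (Eq. (1)) and the FDS/PyroSim mesh-size check.
Q = 2500.0      # peak heat release rate [kW]
RHO = 1.2       # ambient air density [kg/m^3]
CP = 1.0        # specific heat of air [kJ/(kg*K)]
T_INF = 293.0   # ambient temperature [K]
G = 9.81        # gravitational acceleration [m/s^2]

d_star = (Q / (RHO * CP * T_INF * G ** 0.5)) ** (2.0 / 5.0)
print(f"D* = {d_star:.3f} m")                               # 1.388 m, as in the text
print(f"recommended cell size: {d_star/16:.3f} .. {d_star/4:.3f} m")
print(f"chosen cell 0.25 m -> D*/dx = {d_star/0.25:.1f}")   # ~5.5, inside the band
```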
1㊀Thefiremodelofseniorapartment㊀㊀PyroSim中网格划分的大小对于模拟结果具有十分重要的影响ꎮ一般来说ꎬ网格大小的经验值δ取为火焰特征直径D∗的1/4到1/16ꎬD∗通过计算可得[9]:D∗=Qρ¥cpT¥gæèçöø÷25.(1)式中:Q为火源热释放速率ꎬkWꎻρ¥为空气密度ꎬ一般取为1 2kg/m3ꎻcp为空气比热ꎬ取为1kJ/(kg K)ꎻT¥为环境空气温度ꎬ取293Kꎻg为重力加速度ꎬ取9 81m/s2ꎮ经过计算ꎬD∗为1 388mꎬ为了提高仿真速度且保持仿真精度ꎬ网格大小取为D∗的1/5 5ꎬ设为0 25mˑ0 25mˑ0 25mꎬ且由于此老年公寓模型为L型ꎬ建筑物东西长度较长ꎬ若使用一个矩形网格模型ꎬ会造成无建筑物区域网格较多ꎬ降低仿真速度ꎬ因此使用多重网格ꎬ建筑物南北向网格为Mesh01ꎬ东西向网格命名为Mesh02ꎬ参数如表2所示ꎬ最终网格个数共2408320个ꎮ表2㊀网格参数Table2㊀Meshparameters网格名称轴最小值/m最大值/m网格数X-1.0103.0416Mesh01Y-1.010.044Z0.026.5106X91.0103.048Mesh02Y-24.0-1.092Z0.026.51061 2㊀火灾场景设置依据最不利原则ꎬ起火位置设在火灾发生几率比较高或产生危害最大的区域[10]ꎮ火灾发生时烟气等产物先在房间和走廊内横向传播ꎬ进入楼梯间后受到 烟囱效应 影响ꎬ火灾产物不断向上蔓延[11]ꎬ据此判定ꎬ首层起火危险性更大ꎮ因此ꎬ将起火位置设在一层起居室内ꎮ由于该老年公寓设有喷淋系统ꎬ根据«建筑防烟排烟系统技术标准»(GB51251 2017)[12]的规定ꎬ设定火源最大单位热释放速率(HRR)为2500kW/m2ꎬ火源面积为1m2ꎻ老年公寓中存放大量衣物㊁被褥等ꎬ根据«消防安全工程»[13]的规定ꎬ设定火灾增长类型为快速火ꎬ火灾发展系数α取为0 0469kW/s2ꎮ为便于分析ꎬ将3个楼梯间分别命名为1号㊁2号㊁3号楼梯间ꎬ按建筑平面将模型划分为4个区域:A㊁B㊁C㊁Dꎬ起火点位于B区域内ꎬ如图2所示ꎮ图2㊀1层平面图和火源位置Fig 2㊀Layoutofthefirstfloorplanandfirelocation㊀㊀该老年公寓地处江苏省ꎬ属于温带季风向亚热带季风过渡的气候ꎬ查阅物理参数设定其环境温度为20ħꎬ模拟时间为600sꎮ着火房间上方装有光电感烟探测器ꎮ火灾报警10s后着火房间门开启ꎬ67 4s后其他房间及楼梯间的门开启ꎬ火灾产生的烟气等迅速向外扩散ꎮ1 3㊀测点和切片设置杨胜州等[14-15]将火灾中人员的死亡原因归纳为化学窒息死亡㊁单纯窒息死亡㊁吸入黑烟㊁吸入热气ꎮ因此ꎬ笔者综合考虑各个因素ꎬ将温度㊁能见度㊁O2和CO浓度㊁烟气层高度5个因素的人体耐受临界值作为判断火灾达到危险状态的标准ꎬ如表3所示ꎮ910㊀沈阳建筑大学学报(自然科学版)第39卷表3㊀各因素的危险临界值Table3㊀Thecriticalriskvaluesforeachfactor火灾产物温度/ħ能见度/mCO体积分数/%O2体积分数/%烟气层高度/m危险临界值ȡ70ɤ10ȡ0 2ɤ15ɤ2㊀㊀火灾发生时ꎬ建筑内人员的疏散路线为房间 走廊 楼梯间 安全出口ꎬ因此ꎬ在走廊及楼梯间关键位置布置5种相应的监测设备ꎮ图3㊀走廊监测设备布置Fig 3㊀Thelayoutofmonitoringequipmentatcorridor图4㊀楼梯间监测设备布置Fig 4㊀Thelayoutofmonitoringequipmentinstairwells2㊀模拟结果分析PyroSim火灾动态仿真软件能够对火灾发生过程中建筑内的烟气㊁温度和有害气体浓度等进行仿真模拟ꎮ图5为火灾热释放速率模拟结果ꎮ热释放速率最终稳定在设定值2500kW附近ꎬ证明模拟网格大小设置适当ꎮ图5㊀火灾热释放速率变化曲线Fig 5㊀Thereleaserateoffireheat2 1㊀温度分布规律图6为1层温度监测结果图ꎮ监测结果显示ꎬ2~8层温度均在危险临界值以下ꎬ起火位置所在的1层受火灾影响最为严重ꎮ在109s时B区域走廊温度迅速达到70ħꎬ到394s时整个B区域的温度达到危险值ꎬ火灾发展到456s时B区域走廊大部分喷淋设备已启动ꎬ直到模拟结束ꎬ整个B区域不可用于人员疏散ꎮA㊁C㊁D区域走廊在模拟时间内温度波动很小ꎬ低于临界值ꎮ图6㊀1层走廊温度变化曲线Fig 6㊀Temperaturecurveonthe1stfloorcorridor第5期贾世龙等:基于PyroSim的老年公寓火灾烟气运动规律模拟911㊀㊀㊀3个楼梯间的温度均在安全限值以内ꎬ纵向温度切片如图7所示ꎬ其中1号和2号楼梯间受火灾影响相对较大ꎮ绘制1㊁2号楼梯间温度曲线如图8所示ꎬ由于5层及以上温度未受火灾影响ꎬ维持在环境温度20ħꎬ因此5~8层楼梯间温度曲线没有绘出ꎮ从图8中可以看出ꎬ2个楼梯间最高温度出现在1楼楼梯间入口处ꎬ楼层数越高温度所受影响越小ꎻ各层温度始终保持在70ħ以下ꎬ因此ꎬ3个楼梯间均是有效的逃生通道ꎮ图7㊀三个楼梯间的温度切片Fig 7㊀Temperatureslicesofthreestairwells图8㊀1号和2号楼梯间温度曲线Fig 8㊀Temperaturecurvesofstairwell1and22 2㊀能见度分布规律图9为起火层的能见度云图ꎬ黑色区域的能见度小于或等于危险临界值10mꎬ30s后1层的能见度开始降低ꎬ172s时整个起火层能见度均下降到10m以下ꎮ图9㊀起火层能见度分布云图Fig 9㊀Thecloudmapofvisibilitydistributionattheflooronfire㊀㊀图10为起火层各区域走廊的能见度变化曲线ꎮB区域走廊下降最快ꎬ在36s时即达到危险值ꎬA㊁C㊁D区域走廊分别在98s㊁130s㊁172s时能见度降到10mꎮ912㊀沈阳建筑大学学报(自然科学版)第39卷图10㊀起火层能见度情况Fig 10㊀Visibilityofthefirefloor㊀㊀图11为一层3个楼梯间的能见度变化曲线ꎮ1号楼梯间在390s时能见度达到10mꎬ在489s时降到最低ꎬ接近最低值ꎻ2号楼梯间从一层开始迅速向上蔓延ꎬ在246s后能见度达到临界值ꎬ340s后4层及以下能见度均达到临界值ꎻ3号楼梯间由于距离较远ꎬ烟气进入较慢ꎬ在460s达到临界值ꎮ11㊀楼梯间能见度曲线Fig 11㊀Thevisibilitycurveofstairwell2 3㊀CO体积分数分布特征起火层B区域CO体积分数在350s时达到0 2%ꎬ但后期波动较大ꎬ其他区域走廊的CO体积分数均在0 004%以下ꎬ二层及以上楼层基本不受CO体积分数影响(见图12)ꎮ2 4㊀O2体积分数分布特征起火层O2体积分数分布曲线如图13所示ꎮ在325s时起火层B区域走廊的O2体积分数降到危险临界值15%ꎬ其余区域走廊的O2体积分数略有下降ꎬ对人员通行影响不大ꎻ其他各层O2体积分数没有明显下降ꎮ图12㊀起火层走廊CO体积分数Fig 12㊀COconcentrationatthefirefloorcorridor图13㊀起火层的O2体积分数曲线Fig 13㊀O2concentrationcurveatthefirefloor㊀㊀图14为3个楼梯间的O2体积分数曲线ꎮ起火层楼梯间入口处O2体积分数最低ꎬ3个楼梯间的O2体积分数波动很小ꎬ与正常值21%相差不大ꎮ图14㊀各楼梯间的O2体积分数曲线Fig 14㊀O2concentrationcurveatthestairwelloffirefloor第5期贾世龙等:基于PyroSim的老年公寓火灾烟气运动规律模拟913㊀2 5㊀烟气层高度分布特征图15为老年公寓内的烟气蔓延情况ꎮ起火层A区域走廊烟气层高度在123s时迅速降低到2m以下ꎬB区域走廊烟气层稳定在2m以下的时间为108sꎬC㊁D区域分别在130s㊁306s时降到2m以下ꎻ其余楼层走廊烟气层高度均未达到临界值ꎬ对人员疏散几乎不产生影响ꎮ图15㊀烟气分布情况Fig 15㊀Thesmokedistribution㊀㊀图16为起火层各楼梯间的烟气层高度变化曲线ꎮ3个楼梯间的烟气层分别在134s㊁142s㊁172s时降到临界值以下ꎬ最终均维持在1 8m左右ꎮ其余楼层烟气层高度均未达到临界值ꎮ图16㊀各楼梯间的烟气层高度Fig 16㊀Thesmokelevelofthestairwells2 
6㊀模拟结果分析对模拟结果进行分析ꎬ以得到可用疏散时间ꎬ表4为起火层各区域中各因素下的可用疏散时间ꎮ表4㊀起火层各区域可用疏散时间Table4㊀Theavailablesafetyegresstimeineacharea区域各因素下可用疏散时间/s温度能见度CO体积分数O2体积分数烟气层高度əA区域走廊 98 123B区域走廊39437350325108C区域走廊 130 130D区域走廊 172 3061号楼梯间 390 1342号楼梯间 246 1423号楼梯间460172㊀㊀当某一区域的某一项因素达到危险临界值后ꎬ该区域即不可用于人员疏散ꎮ尽管能见度会影响使老年人疏散速度ꎬ但没有致命伤害ꎬ故A区域人员应能在123s内疏散到安全区域ꎬB㊁C㊁D区域人员的可用疏散时间分别为108s㊁130s㊁306sꎮ烟气最先蔓延到起火层的楼梯间内ꎬ而其他楼层人员疏散又必须要经过楼梯间的一层ꎬ因此以3个楼梯间的一层可用疏散时间的最大值作为整个建筑物可用疏散时间[16]ꎮ根据表4中数据ꎬ1号㊁2号㊁3号楼梯间的可用疏散时间分别为134s㊁142s和172sꎬ当1号和2号楼梯间达到危险状态后ꎬ人员还可通过3号楼梯间进行疏散ꎬ位于2层及以上楼层的老年人在134s之前可通过3个楼梯间中的任意一个进行疏散ꎬ134~142s可通过2㊁3号楼梯间进行疏散ꎬ142s后只能通过3号楼梯间进行疏散ꎮ故全楼可用安全疏散时间取为最大值172sꎮ3㊀结㊀论(1)温度对于起火房间附近以及3号楼梯间影响较大ꎻ能见度对于起火楼层㊁以上楼层以及2㊁3号楼梯间影响较大ꎻ起火房间受烟气层高度的影响最严重ꎬ对其他区域影响不足以对人员疏散构成危害ꎻ火灾产物对于着火房间附近以及1㊁2号楼梯间影响较大ꎬ由于东西向走廊较长ꎬ且 L 形走廊在拐角处对火灾烟气产生阻碍作用ꎬ只有少部分烟914㊀沈阳建筑大学学报(自然科学版)第39卷气流入3号楼梯间ꎮ(2)计算得到得到老年公寓在火灾时的可用疏散时间最大值为172sꎬ且3号楼梯间可使用时间最长ꎮ参考文献[1]㊀HADJISOPHOCLEOUSGꎬKOYJ.UsingaCFDsimulationindesigningasmokemanagementsysteminabuilding[C].IEEE:Simulationconferenceꎬ2006.[2]㊀TANAKATꎬFUJITATꎬYAMAGUCHIJ.Investigationintorisetimeofbuoyantfireplumefronts[J].Internationaljournalonengineeringperformance ̄basedfirecodesꎬ2000ꎬ2(1):14-25.[3]㊀王炜罡ꎬ文虎ꎬ贾勇锋.基于FDS的高层居民楼火灾模拟[J].西安科技大学学报ꎬ2020ꎬ40(2):314-320.㊀(WANGWeigangꎬWENHuꎬJIAYongfeng.High ̄riseresidentialbuildingfiresimulationbasedonFDS[J].JournalofXiᶄanuniversityofscienceandtechnologyꎬ2020ꎬ40(2):314-320.)[4]㊀王维平.医院类高层建筑人员疏散数值模拟研究[D].广州:华南理工大学ꎬ2016.㊀(WANGWeiping.Numericalsimulationofevacuationinhospitalclassofhighrisebuildings[D].Guangzhou:SouthChinaUniversityofTechnologyꎬ2016.) [5]㊀刘海舰ꎬ张文修.养老院建筑火灾案例分析及数值模拟[J].门窗ꎬ2019(23):35-38.㊀(LIUHaijianꎬZHANGWenxiu.Nursinghomebuildingfirecaseanalysisandnumericalsimulation[J].Doors&windowsꎬ2019(23):35-38.)[6]㊀PROULXG.Occupantbehaviourandevacuation[C]//Proceedingsofthe9thinternationalfireprotectionsymposium.Iceland:IcelandFireAuthorityꎬ2001:219-232.[7]㊀刘翔.不同重量行李对楼梯中人员疏散速度影响的研究[D].成都:西南交通大学ꎬ2016.㊀(LIUXiang.Anexperimentalstudyonluggageofdifferentweightonthespeedonthestair[D].Chengdu:SouthwestJiaotongUniversityꎬ2016.) [8]㊀李胜利ꎬ李孝斌.FDS火灾数值模拟[M].北京:化学工业出版社ꎬ2019.㊀(LIShengliꎬLIXiaobin.FDSnumericalfiresimulation[M].Beijing:ChemicalIndustryPressꎬ2019.)[9]㊀黄丽蒂ꎬ罗开洲ꎬ刘莹ꎬ等.老年公寓火灾场景人群疏散模拟[J].中国安全科学学报ꎬ2020ꎬ30(3):137-142.㊀(HUANGLidiꎬLUOKaizhouꎬLIUYingꎬetal.Simulationofcrowdevacuationinfiresceneofelderlyapartment[J].Chinasafetysciencejournalꎬ2020ꎬ30(3):137-142.) [10]SUNQꎬTURKANY.ABIM ̄basedsimulationframeworkforfiresafetymanagementandinvestigationofthecriticalfactorsaffectinghumanevacuationperformance[J].Advancedengineeringinformaticsꎬ2020ꎬ8(44):13-28. 
[11]中华人民共和国住房和城乡建设部.建筑防烟排烟系统技术标准:GB51251 2017[S].北京:中国计划出版社ꎬ2017.㊀(HousingandUrban ̄RuralDevelopmentofthePeopleᶄsRepublicofChina.Technicalstandardforsmokemanagementsystemsinbuildings:GB51251 2017[S].Beijing:ChinaPlanningPressꎬ2017.)[12]中华人民共和国国家质量监督检验检疫总局.消防安全工程:第4部分设定火灾场景和设定火灾的选择:GB/T31593.4 2015[S].北京:中国计划出版社ꎬ2015.㊀(GeneralAdministrationofQualitySupervisionꎬInspectionandQuarantineofthePeopleᶄsRepublicofChina.Firesafetyengineering ̄part4:selectionofdesignfirescenariosanddesignfires:GB/T31593.4 2015[S].Beijing:ChinaPlanningPressꎬ2015.)[13]杨胜州ꎬ莫善军ꎬ潘迁宏.地下交通枢纽站火灾烟气控制数值模拟研究[J].中国安全生产科学技术ꎬ2012ꎬ8(12):48-52.㊀(YANGShengzhouꎬMOShanjunꎬPANQianhong.Numericalsimulationoffiresmokecontrolinundergroundtransportationhubstation[J].Journalofsafetyscienceandtechnologyꎬ2012ꎬ8(12):48-52.)[14]张江涛ꎬ王景刚ꎬ龚凯.基于FDS的办公大楼火灾数值模拟分析[J].建筑安全ꎬ2015ꎬ30(2):14-17.㊀(ZHANGJiangtaoꎬWANGJinggangꎬGONGKai.NumericalsimulationanalysisofofficebuildingfirebasedonFDS[J].Constructionsafetyꎬ2015ꎬ30(2):14-17.)[15]刘朝峰ꎬ许强ꎬ齐钦ꎬ等.高层住宅建筑火灾应急疏散模拟与策略研究[J].灾害学ꎬ2022ꎬ37(2):174-181.㊀(LIUChaofengꎬXUQiangꎬQIQinꎬetal.Studyonsimulationandstrategyoffireemergencyevacuationinhigh ̄riseresidentialbuildings[J].Journalofcatastrophologyꎬ2022ꎬ37(2):174-181.)[16]田鑫ꎬ苏燕辰ꎬ李冬等.地铁车站火灾疏散仿真分析[J].科学技术与工程ꎬ2017ꎬ17(16):333-337.㊀(TIANXinꎬSUYanchenꎬLIDongꎬetal.Firesafetyevacuationsimulationanalysisofsubwaystation[J].Sciencetechnologyandengineeringꎬ2017ꎬ17(16):333-337.)(责任编辑:王国业㊀英文审校:范丽婷)。
Simulation of Spatial and Temporal Radiation Exposures for ISS in the South Atlantic Anomaly

Brooke M. Anderson
NASA Langley Research Center, Hampton, VA, 23681

John E. Nealy
Old Dominion University, Norfolk, VA, 23508

Nathan J. Luetke
Swales Aerospace, Newport News, VA, 23606

Christopher A. Sandridge and Garry D. Qualls
NASA Langley Research Center, Hampton, VA, 23681

The International Space Station (ISS) living areas receive the preponderance of ionizing radiation exposure from Galactic Cosmic Rays (GCR) and geomagnetically trapped protons. Practically all trapped proton exposure occurs when the ISS passes through the South Atlantic Anomaly (SAA) region. The fact that this region is in proximity to a trapping "mirror point" indicates that the proton flux is highly directional. The inherent shielding provided by the ISS structure is represented by a recently developed CAD model of the current 11-A configuration. Using the modeled environment and configuration, trapped proton exposures have been analytically estimated at selected target points within the Service and Lab Modules. The results indicate that the directional flux may lead to substantially different exposure characteristics than the more common analyses that assume an isotropic environment. Additionally, the predictive capability of the computational procedure should allow sensitive validation with corresponding on-board directional dosimeters.

Nomenclature
B     = magnetic field intensity vector
E     = proton kinetic energy
F_N   = distribution function normalization factor
H     = altitude
h_s   = upper atmosphere scale height
I     = magnetic field dip angle
J     = directional proton flux
J_4π  = omni-directional (integrated) proton flux
K     = parameter defined by Equation (4)
R_⊕   = Earth radius
r_g   = proton gyroradius
x     = parameter defined by Equation (6)
θ     = pitch angle with respect to magnetic field
λ     = azimuth angle with respect to magnetic field
σ_θ   = standard deviation of pitch angle

I. Introduction

The ISS at the present time has evolved into a near-Earth space habitat suitable for continuous human occupation. Further evolution of ISS should render it a facility forming a vital part of an expanding space exploration infrastructure. This study examines the radiation exposure aspect of astronaut health and safety by utilizing analytical procedures for determining ionizing radiation dose, with a view toward implementation as a means of shield augmentation for the habitation modules. A CAD model of the ISS 11-A configuration specifically dedicated to exposure analysis has been developed for this study.

The first step in the analytical process is the establishment of an appropriate environment model. For the Low Earth Orbit (LEO) environment, the most important contributors to deposition of ionizing radiation energy are the trapped protons and the GCR. The present study addresses only the highly directional (vectorial) proton flux, which very roughly constitutes about half the total cumulative exposure for long-duration missions. However, instantaneous dose rates are very much higher during the approximately 10-15 minute SAA transits, in which most of the trapped proton exposure for a 24-hour day occurs. During the transits, both the omni-directional and the vector proton flux vary from near zero to maximum values, and directionality is controlled by the vehicle orientation with respect to the magnetic field vector components.
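To make the transit picture concrete, the short sketch below (illustrative code, not part of the original analysis) propagates an idealized circular orbit at the nominal 400-km, 51.6-degree conditions adopted in the next section and flags the minutes spent inside a crude rectangular SAA footprint. The footprint bounds, constants, and function names are assumptions introduced here for illustration only.

```python
# Minimal sketch, assuming an idealized circular orbit with no J2 precession:
# sub-satellite ground track sampled once per minute, with SAA passes flagged
# by a rough rectangular footprint (an assumption, not a model product).
import math

MU = 398600.4418        # Earth gravitational parameter, km^3/s^2
R_EARTH = 6371.0        # mean Earth radius, km
OMEGA_E = 7.2921159e-5  # Earth rotation rate, rad/s

def ground_track(alt_km=400.0, incl_deg=51.6, raan_deg=0.0,
                 t_end_s=86400.0, dt_s=60.0):
    """Spherical-Earth latitude/longitude of the sub-satellite point."""
    a = R_EARTH + alt_km
    n = math.sqrt(MU / a**3)                 # mean motion, rad/s
    i = math.radians(incl_deg)
    raan = math.radians(raan_deg)
    points, t = [], 0.0
    while t <= t_end_s:
        u = n * t                            # argument of latitude from ascending node
        lat = math.asin(math.sin(i) * math.sin(u))
        # inertial longitude from the node, corrected for Earth rotation
        dlon = math.atan2(math.cos(i) * math.sin(u), math.cos(u)) - OMEGA_E * t
        lon = math.degrees((raan + dlon + math.pi) % (2.0 * math.pi) - math.pi)
        points.append((t / 60.0, math.degrees(lat), lon))
        t += dt_s
    return points

def in_saa(lat, lon):
    """Crude rectangular SAA footprint (assumed bounds, illustration only)."""
    return -50.0 <= lat <= 0.0 and -90.0 <= lon <= 40.0

if __name__ == "__main__":
    for t_min, lat, lon in ground_track():
        if in_saa(lat, lon):
            print(f"t = {t_min:6.1f} min  lat = {lat:6.1f}  lon = {lon:7.1f}  (in SAA box)")
```

The time steps flagged by such a pass-finder are the ones for which flux spectra of the kind shown in Fig. 2 below would be extracted and transported.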
Consequently, an added degree of complexity is introduced by the time variation of the proton flux spectra along the orbit, for which individual transport properties through the shield medium must be taken into account. The deterministic high-energy heavy ion transport code HZETRN [1], developed at NASA Langley, is used to describe the attenuation and interaction of the LEO environment particles along with the dosimetric quantities of interest. The ISS geometry defined by the CAD model is finally used to calculate exposures at selected target points within the modules, some of which represent locations of thermo-luminescent detectors (TLDs).

II. LEO Environment and Proton Transport

This section describes the radiation environment selected for the present study and its spatial variation in the SAA region. Nominal ISS orbital conditions are prescribed as 400 km altitude at 51.6-degree inclination. Simple circular orbit equations have been used to tailor the SAA transits for passage through peak flux regions. Time variation of the exposure is defined by these transits.

A. SAA Protons for ISS Transit

The standard NASA trapped proton model AP8MIN [2] has been chosen to define a near-worst-case scenario for the fluxes. Fig. 1 depicts the orbital tracks in ascent and descent passing through the high flux regions.

Figure 1. Ascent and descent orbital tracks for ISS through the South Atlantic Anomaly. Symbol spacing represents 1-min intervals; flux contours are in units of protons (>100 MeV)/cm^2-sec.

The differential flux spectra obtained from the environment model are plotted in Figs. 2a and 2b for selected points near the region of peak flux. The chosen points are identified by time values in minutes elapsed after the ascending node point.

Figure 2. Omni-directional differential flux spectra obtained from the AP8MIN model in the central region of the SAA for (a) descending track and (b) ascending track. (Ordinate: flux, protons/(cm^2-MeV-min).)

The complex low-energy behavior in the proton spectra is not readily explained and is most likely due to several influences. Since only higher-energy protons (above roughly 50 MeV) penetrate the ISS structure, the low-energy fluctuations are unimportant. In order to introduce directionality into the flux spectra, the local magnetic field properties must be brought into the environment description.

Near a mirror point, the spiraling particle paths are nearly normal to the field lines (i.e., the pitch angle approaches 90 degrees). A good account of the theoretical basis for the vector flux of protons in the SAA may be found in Heckman and Nakano [3], and computational models have been developed for analyzing the effects of directionality [4,5]. Using critical assumptions and approximations, an expression for the directional flux has been found [3] in terms of the local magnetic field vector, B; altitude, H; ionospheric scale height, h_s; and the pitch and azimuth angles (θ and λ). This formula, in the nomenclature of Kern [5], is expressed as a ratio of the vector flux to the omni-directional (integrated) value:

    J / J_{4\pi} = F_N \exp\!\left[-\frac{(\theta - \pi/2)^2}{2\sigma_\theta^2}\right] \exp\!\left[-\frac{r_g \cos I \cos\lambda}{h_s}\right]        (1)

where I is the magnetic dip angle, and r_g is the proton gyroradius given by

    r_g = \frac{\sin\theta \,\sqrt{E^2 + 1876\,E}}{30\,B}        (2)

with the proton kinetic energy, E, in MeV and the magnetic field strength, B, in gauss (r_g is then in kilometers, consistent with h_s). The standard deviation of the pitch angle is given by

    \sigma_\theta = \sqrt{h_s\, K \sin I}        (3)

where

    K = \frac{(4/3)\,(2 + \cos^2 I)}{(R_\oplus + H)\,\sin I}        (4)

with R_⊕ representing the Earth radius.
F_N is a normalization factor, parameterized by Kern [5] as

    F_N = \left(\frac{0.075}{\sigma_\theta}\right)(x + 0.8533)\,\exp(-x)        (5)

where

    x = \frac{r_g \cos I}{h_s \sin\theta}        (6)

Note that the sin θ in Equation (2) cancels the one in Equation (6), so x depends only on the proton energy, field strength, dip angle, and scale height. When the omni-directional flux is redistributed according to the distribution function of Equation (1), a pattern emerges in which most particles are directed in a very pronounced band of zenith and azimuth angles.

B. Energetic Proton Transport in Shield Medium

The spectra of Figure 2 have been used as input to the HZETRN code to compute transport through thickness ranges of shield material (Al). Subsequent exposures in simulated tissue (H2O) are evaluated as dose equivalents using ICRP [6] quality factors for normally incident flux on semi-infinite slab geometry. The NASA Langley HZETRN code is a well-established deterministic procedure allowing rapid and accurate solution of the Boltzmann transport equation. Details concerning the interaction and attenuation methodology are described at length elsewhere [1,7]. Figures 3a and 3b show the resultant dose vs. depth functions obtained from the transport calculations; these are used to evaluate ultimate exposures at target points within complex shield configurations defined by the CAD solid model of the full-scale geometric structure.

Figure 3. Dose vs. depth functions calculated for aluminum slab geometry at selected times during SAA transit: (a) descending track and (b) ascending track. (Ordinate: dose equivalent rate, mrem/min; abscissa: scaled thickness in Al, g/cm^2.)

III. CAD Solid Model of ISS 11-A Configuration

The primary components of the ISS 11-A configuration are the U.S. Destiny Lab Module, the U.S. Unity Connections Module (Node 1), the U.S. Airlock, and the three U.S. Pressurized Mating Adaptors (PMAs). Also included are the Russian Functional Cargo Block (FGB, or Zarya), the Russian Service Module (SM, or Zvezda), the Russian Soyuz spacecraft, the Russian Progress re-supply vehicle, the Russian Docking Compartment, and the truss structures. A simplified model of this configuration has been constructed for dedicated shield analysis using the commercially available CAD software I-DEAS®. This model consists of 460 separate components, each having its own dimensions, orientation, and density distribution defined in near conformity with the actual hardware. A large part of the inherent shielding for the astronauts results from the distributed micrometeoroid shield and the pressure vessel itself. The cargo in the primary modules also provides additional shielding. In this analysis it is assumed that these components are primarily made up of aluminum. A description of a predecessor (configuration 7-A) of the present model may be found in Hugger et al. [8]. Figures 4, 5, and 6 show an external view of the 11-A CAD model as it appears on a computer screen and split-view illustrations of the 6 target points chosen for this analysis.

Figure 4. External perspective view of the CAD-modeled ISS 11-A configuration.

Figure 5. Split view of U.S. Lab Module showing selected target points.

Figure 6. Depiction of selected target points in Russian Service Module.

The distributions of thickness for 970 directions have been evaluated in terms of the scaled thickness in g/cm^2 for each of the 6 chosen target points, using a spherical coordinate system with origin at the point. The ray directions are determined for 22 polar angles and 44 azimuth angles, plus 2 separate polar angles at top and bottom.
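The paper does not spell out the exact binning behind these 970 rays, but a minimal equal-solid-angle construction consistent with the counts quoted above (22 polar bands by 44 azimuth sectors, plus the two polar rays) can be sketched as follows; the function name and the treatment of the polar caps are assumptions.

```python
# A minimal sketch of a 970-ray direction set with equal solid angle per ray:
# 22 polar bands x 44 azimuth sectors plus single rays through the two polar
# caps. Equal solid angle follows from uniform spacing in cos(theta) within
# the bands and uniform azimuth spacing.
import math

def equal_solid_angle_rays(n_polar=22, n_azimuth=44):
    """Return unit vectors, one per equal-solid-angle cell, plus 2 cap rays."""
    n_cells = n_polar * n_azimuth + 2
    d_omega = 4.0 * math.pi / n_cells           # sr represented by each ray
    mu_top = 1.0 - d_omega / (2.0 * math.pi)    # cos(theta) at cap boundary
    rays = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]  # the two polar-cap rays
    d_mu = 2.0 * mu_top / n_polar               # uniform cos(theta) band width
    for j in range(n_polar):
        mu = mu_top - (j + 0.5) * d_mu          # band-center cos(theta)
        s = math.sqrt(1.0 - mu * mu)
        for k in range(n_azimuth):
            phi = 2.0 * math.pi * (k + 0.5) / n_azimuth
            rays.append((s * math.cos(phi), s * math.sin(phi), mu))
    return rays, d_omega

rays, d_omega = equal_solid_angle_rays()
print(len(rays), "rays,", f"{d_omega:.5f} sr each")   # 970 rays, 0.01296 sr each
```

With 970 equal cells, each ray represents 4π/970 ≈ 0.0130 sr, so a per-ray dose contribution is simply the slab dose for that ray's thickness times this constant solid-angle weight.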
The spherical coordinate grid is defined so that each directional ray subtends a constant solid angle. The cumulative thickness distributions for the 6 points are given in Figure 7.

Figure 7. Cumulative thickness distribution for selected target points in the ISS 11-A configuration. (Ordinate: fraction of thickness < t; abscissa: scaled thickness, t, g/cm^2.)

IV. Results of Calculations

Table I. Calculated Dose Equivalent Rates (mrem/min) for Selected Target Points for Isotropic and Directional SAA Proton Environments

DESCENDING TRACK

                 RACK01              LAB1                LAB4
Time Step   Directional  Omni   Directional  Omni   Directional  Omni
    53         0.72      0.61      0.45      0.45      1.09      0.97
    54         1.29      1.11      0.67      0.79      1.99      1.80
    55         1.44      1.29      0.70      0.92      2.41      2.21
    56         1.87      1.78      0.92      1.27      3.63      3.36
    57         1.91      1.95      1.00      1.37      4.01      3.77
    58         1.43      1.58      0.84      1.10      3.40      3.27
    59         0.88      1.07      0.58      0.72      2.36      2.41

                 NODE1_1             SM5                 SM6
Time Step   Directional  Omni   Directional  Omni   Directional  Omni
    53         1.44      0.88      0.63      0.63      1.26      1.18
    54         2.47      1.59      1.11      1.16      2.41      2.10
    55         3.07      1.98      1.25      1.33      2.89      2.55
    56         4.66      3.03      1.66      1.80      4.16      3.74
    57         4.79      3.35      1.71      1.94      4.66      4.55
    58         4.05      2.93      1.28      1.54      3.91      3.67
    59         2.90      2.15      0.77      1.01      2.83      2.98

ASCENDING TRACK

                 RACK01              LAB1                LAB4
Time Step   Directional  Omni   Directional  Omni   Directional  Omni
    77         0.67      0.72      0.41      0.49      1.17      1.64
    78         1.10      1.21      0.71      0.82      1.90      2.66
    79         1.41      1.57      0.95      1.08      2.39      3.32
    80         1.39      1.56      0.96      1.07      2.35      3.27
    81         1.10      1.24      0.77      0.86      1.86      2.58
    82         0.63      0.72      0.44      0.49      1.10      1.54

                 NODE1_1             SM5                 SM6
Time Step   Directional  Omni   Directional  Omni   Directional  Omni
    77         1.88      1.47      0.72      0.68      1.82      2.15
    78         3.01      2.37      1.19      1.16      2.98      3.20
    79         3.73      2.96      1.55      1.52      3.74      4.13
    80         3.66      2.92      1.51      1.51      3.61      3.81
    81         2.89      2.30      1.20      1.21      2.91      3.01
    82         1.73      1.36      0.68      0.69      1.68      1.97

Each of the entries in the preceding table represents the solid-angle integration of the dose equivalent rate resulting from protons incident on the target point from all directions. Even though the total doses are of the same magnitude for both the isotropic and the vectorial external environments, the directional properties of the radiation field may be vastly different for the two cases. This is illustrated in Fig. 8 for the target point designated RACK01 as spherical-coordinate-angle contour plots of the directional dose.

Figure 8. Contour plots of directional dose equivalent as functions of spherical coordinate angles about target point RACK01 for the isotropic environment (top) and the directional environment (bottom). Units are in mrem/(min-sr).

V. Analysis of Results

The contour maps of Fig. 8 portray the differences in directional dose distribution and illustrate quantitatively the angular variation of exposure intensity. However, such renditions are difficult to interpret and diagnose analytically. Present 3-D computer graphic visualization techniques may be implemented to provide displays that lend themselves to much more convenient and rapid interpretation.

Figure 9. Computer-generated distributions of dose equivalent on spherical surfaces centered on a target point within the ISS CAD model for the isotropic environment (top) and the directional environment (bottom).

The illustrations shown in Fig. 9 represent the application of visualization software exhibiting color-coded patterns of directional dose mapped onto a spherical surface. The example chosen is a point near that designated as SM6, in a relatively lightly shielded region of the Service Module. The mapping is for a time step on the ascent path and demonstrates a case for which the isotropic and directional doses contrast markedly.
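For readers wishing to reproduce the flavor of these results, the sketch below assembles one Table I-style entry under stated assumptions: it implements the directional weight of Equations (1)-(6) as reconstructed above, interpolates a Fig. 3-style dose-versus-depth curve in log-log space, and sums over an equal-solid-angle ray grid. The depth curve, thickness map, magnetic field values, and the use of a single representative proton energy are all illustrative simplifications of the actual per-energy, per-time-step procedure; the local field is also assumed aligned with the grid z-axis, whereas the real calculation uses the vehicle attitude and field vector at each time step.

```python
# Sketch (hypothetical inputs, not flight data) of the solid-angle integration
# behind a Table I entry: each ray's shield thickness selects a slab dose rate
# from a Fig. 3-style curve, weighted isotropically (1/4pi) or by Eqs. (1)-(6).
import math

R_EARTH = 6371.0  # km

def dose_vs_depth(t, depths, rates):
    """Log-log interpolation of a dose-rate vs. depth table, clamped at ends."""
    if t <= depths[0]:
        return rates[0]
    if t >= depths[-1]:
        return rates[-1]
    for i in range(len(depths) - 1):
        if depths[i] <= t <= depths[i + 1]:
            f = math.log(t / depths[i]) / math.log(depths[i + 1] / depths[i])
            return rates[i] * (rates[i + 1] / rates[i]) ** f

def kern_weight(theta, lam, E_mev, B_gauss, dip, h_s, H):
    """Directional weight J/J_4pi per Eqs. (1)-(6) as reconstructed above.
    The sin(theta) in r_g (Eq. 2) cancels the one in x (Eq. 6), so x is
    computed directly from the proton momentum. dip is a positive magnitude."""
    pc = math.sqrt(E_mev * (E_mev + 1876.0))               # proton pc, MeV
    x = pc * math.cos(dip) / (30.0 * B_gauss * h_s)        # Eq. (6), theta-free form
    K = (4.0 / 3.0) * (2.0 + math.cos(dip) ** 2) / ((R_EARTH + H) * math.sin(dip))  # Eq. (4)
    sigma = math.sqrt(h_s * K * math.sin(dip))             # Eq. (3)
    f_n = (0.075 / sigma) * (x + 0.8533) * math.exp(-x)    # Eq. (5)
    gauss = math.exp(-((theta - math.pi / 2.0) ** 2) / (2.0 * sigma ** 2))
    east_west = math.exp(-x * math.sin(theta) * math.cos(lam))  # azimuthal factor, Eq. (1)
    return f_n * gauss * east_west

def target_point_dose(rays, d_omega, thickness_of, depth_curve, directional, env):
    """Solid-angle integration of dose-equivalent rate at one target point."""
    depths, rates = depth_curve
    total = 0.0
    for u in rays:
        t = thickness_of(u)                        # g/cm^2 along this ray (from CAD)
        d = dose_vs_depth(t, depths, rates)        # slab dose rate at that depth
        if directional:
            theta = math.acos(max(-1.0, min(1.0, u[2])))  # pitch angle; B assumed along +z
            lam = math.atan2(u[1], u[0])
            w = kern_weight(theta, lam, **env)
        else:
            w = 1.0 / (4.0 * math.pi)              # isotropic weighting
        total += d * w * d_omega
    return total

if __name__ == "__main__":
    # Coarse equal-solid-angle demo grid (same construction as the earlier sketch)
    n_p, n_a = 20, 40
    d_omega = 4.0 * math.pi / (n_p * n_a)
    rays = []
    for j in range(n_p):
        mu = 1.0 - (j + 0.5) * (2.0 / n_p)
        s = math.sqrt(max(0.0, 1.0 - mu * mu))
        for k in range(n_a):
            phi = 2.0 * math.pi * (k + 0.5) / n_a
            rays.append((s * math.cos(phi), s * math.sin(phi), mu))
    curve = ([0.1, 1.0, 10.0, 100.0], [10.0, 5.0, 1.5, 0.2])  # illustrative Fig. 3-like shape
    env = dict(E_mev=100.0, B_gauss=0.22, dip=math.radians(30.0),  # assumed SAA-like values
               h_s=65.0, H=400.0)
    thick = lambda u: 20.0 + 15.0 * u[2]                     # toy thickness map, g/cm^2
    for directional in (False, True):
        label = "directional" if directional else "isotropic  "
        print(label, round(target_point_dose(rays, d_omega, thick, curve, directional, env), 3))
```

Comparing the isotropic and directional outputs of such a sketch reproduces the qualitative behavior of Table I, in which the two weightings agree in overall magnitude but differ point by point according to how the heavily weighted directions line up with the thin and thick parts of the shield.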
The images of Fig. 9 clearly show the impact of the normalized distribution function, which results in a re-direction, or "focusing," of the isotropic flux. Consequently, in some cases the integrated dose for the directional environment may be substantially less than that for the isotropic environment. In other cases the reverse may occur, as may be seen in the tabular results. Such variations arise because of the complex interactions of the charged-particle environment with the local magnetic field and the changing orientation of the vehicle structure.

VI. Summary and Conclusion

The primary purpose of this study is to demonstrate by realistic simulation a procedure for accurately analyzing and predicting radiation exposures in the confines of a shielded spacecraft. The procedure described can readily be implemented in comprehensive mission-specific analyses and shield design efforts. In the present study, we have attempted only to portray results pertaining to the exposures encountered by ISS in transit through the higher-flux regions of the SAA. A more detailed analysis along these lines would necessarily address the more realistic 2 or 3 SAA transits per day of ISS over an extended time period. Near-term plans are to progress from spatial/temporal simulation to real-time analyses as directional dosimeter data become available from ISS. Such validations will provide a stringent test of the adequacy of the theoretical developments and serve to quantify the predictive capabilities as they may apply to future human missions as well as to remote sensing platforms.

References

1. Wilson, J. W., et al., "HZETRN: Description of a Free-Space Ion and Nucleon Transport and Shielding Computer Program," NASA TP-3495, May 1995.
2. Sawyer, D. M., and Vette, J. I., "AP-8 Trapped Proton Environments for Solar Maximum and Solar Minimum," NSSDC WDC-A-R&S 76-06, 1976.
3. Heckman, H. H., and Nakano, G. H., "Low-Altitude Trapped Protons during Solar Minimum Period," Journal of Geophysical Research, Space Physics, Vol. 74, No. 14, July 1969, pp. 3575-3590.
4. Watts, J., Parnell, T., and Heckman, H. H., "Approximate Angular Distribution and Spectra for Geomagnetically Trapped Protons in Low Earth Orbit," AIP Conference Proceedings on High Energy Radiation in Space, edited by Rester, A. C., and Trombka, J. I., Sanibel Island, FL, 1989, pp. 75-85.
5. Kern, J. W., "A Note on Vector Flux Models for Radiation Dose Calculations," Radiation Measurements, Vol. 23, No. 1, 1994, pp. 87-93.
6. 1990 Recommendations of the International Commission on Radiological Protection, ICRP Publication 60, Annals of the ICRP, Vol. 21, Elsevier Science, New York, 1991.
7. Wilson, J. W., et al., "Transport Methods and Interactions for Space Radiations," NASA RP-1257, Dec. 1991.
8. Hugger, C. P., et al., "Preliminary Validation of an ISS Radiation Shielding Model," Proceedings of the AIAA Space 2003 Conference, AIAA Paper 2003-6220, Long Beach, CA, Sept. 23-25, 2003.