A proposed DAQ system for a calorimeter at the International Linear Collider
National Self-Study Examination, April 2018: Selected Readings in English for Science and Technology
Course code: 00836

PART A: VOCABULARY

I. Directions: Add the affix to each word according to the given Chinese, making changes when necessary. (8%)

1. artificial 人工制品 (man-made object) 1. __________________
2. fiction 虚构的 (fictional) 2. __________________
3. coincide 巧合 (coincidence) 3. __________________
4. organic 无机的 (inorganic) 4. __________________
5. sphere 半球 (hemisphere) 5. __________________
6. technology 生物技术 (biotechnology) 6. __________________
7. formid 可怕的 (formidable) 7. __________________
8. harmony 和谐的 (harmonious) 8. __________________

II. Directions: Fill in the blanks, each using one of the given words or phrases below in its proper form. (12%)

stand for / exposure to / at work / on the edge of / short of / end up / focus on / a host of / give off / a sense of / in memory of / comply with

9. We were on a hill, right _________ the town.
10. UNESCO _________ United Nations Educational, Scientific and Cultural Organization.
11. I am a bit _________ cash right now, so I can't lend you anything.
12. The milk must be bad; it's _________ a nasty smell.
13. The traveler took the wrong train and _________ at a country village.
14. The material will corrode after prolonged _________ acidic gases.
15. _________ problems may delay the opening of the conference.
16. The congress opened with a minute's silence _________ those who died in the struggle for the independence of their country.
17. Tonight's TV program _________ homelessness.
18. He promised to _________ my request.
19. Farmers are _________ in the fields planting.
20. She doesn't sleep enough, so she always has _________ of fatigue.

III. Directions: Fill in each blank with a suitable word given below. (10%)

birth / to / unmarried / had / premature / among / were / between / such / past

The more miscarriages or abortions a woman has, the greater are her chances of giving birth to a child that is underweight or premature in the future, the research shows. Low birthweight (under 2,500 g) and premature birth (less than 37 weeks) are two of the major contributors to deaths (21)_____ newborn babies and infants. Rates of low birthweight and (22)_____ birth were highest among mothers who (23)_____ black, young or old, poorly educated, and (24)_____. But there was a strong association (25)_____ miscarriage and abortion and an early or underweight (26)_____, even after adjusting for other influential factors, (27)_____ as smoking, high blood pressure and heavy drinking. Women who had (28)_____ one, two, or three or more miscarriages or abortions in the (29)_____ were almost three, five, and nine times as likely to give birth (30)_____ an underweight child as those without previous miscarriages or abortions.

21. _________ 22. _________ 23. _________ 24. _________ 25. _________
26. _________ 27. _________ 28. _________ 29. _________ 30. _________

PART B: TRANSLATION

IV. Directions: Translate the following sentences into English, each using one of the given words or phrases below. (10%)

precede / replete with / specialize in / incompatible with / suffice for

31. 上甜食前，每个用餐者都已吃得很饱了。(Before the dessert was served, every diner had already eaten his fill.)
Senior High School Entrance Examination English: In-Depth Reading Comprehension on Classic Scientific Experiments and Theories, 20 Questions, Passage 1

[Background passage]

Isaac Newton is one of the most famous scientists in history. He is known for his discovery of the law of universal gravitation. Newton was sitting under an apple tree when an apple fell on his head. This event led him to think about why objects fall to the ground. He began to wonder if there was a force that acted on all objects.

Newton spent many years studying and thinking about this problem. He realized that the force that causes apples to fall to the ground is the same force that keeps the moon in orbit around the earth. He called this force gravity.

The discovery of the law of universal gravitation had a huge impact on science. It helped explain many phenomena that had previously been mysteries. For example, it explained why planets orbit the sun and why objects fall to the ground.

1. Newton was sitting under a(n) ___ tree when he had the idea of gravity.
A. orange
B. apple
C. pear
D. banana

Answer: B.
English Vocabulary Related to the Digestive System

The digestive system is a complex network of organs and tissues responsible for breaking down food into nutrients that the body can absorb and use. It involves a series of physical and chemical processes that occur in the mouth, esophagus, stomach, small intestine, large intestine, and accessory organs such as the liver, pancreas, and gallbladder. Here are some key English vocabulary items related to the digestive system:

**1. Mouth and Teeth:**

* **Mouth:** The entry point for food intake.
* **Teeth:** Hard, calcified structures used for chewing and grinding food.
* **Tongue:** A muscular organ that moves food around in the mouth and helps in swallowing.
* **Saliva:** A watery secretion that moistens food, begins the chemical breakdown of carbohydrates, and helps in swallowing.

**2. Esophagus:**

* **Esophagus:** A muscular tube that carries food from the mouth to the stomach.
* **Peristalsis:** The rhythmic muscular contractions that propel food through the esophagus.

**3. Stomach:**

* **Stomach:** A hollow, muscular organ that stores food, secretes gastric juices, and mixes food with these juices to form a semisolid mass called chyme.
* **Gastric Juice:** A mixture of hydrochloric acid, enzymes, and mucus secreted by the stomach.
* **Hydrochloric Acid:** A strong acid that helps in the digestion of protein and creates an acidic environment that kills bacteria.
* **Enzyme:** A biological catalyst that speeds up chemical reactions in the body, including the breakdown of food into nutrients.
* **Mucus:** A slippery, viscous substance that coats the lining of the stomach, protecting it from the corrosive effects of gastric juice.

**4. Small Intestine:**

* **Small Intestine:** A long, coiled tube that continues from the stomach and is the primary site of digestion and absorption of nutrients.
* **Duodenum:** The first part of the small intestine, closest to the stomach.
* **Jejunum:** The middle part of the small intestine.
* **Ileum:** The final part of the small intestine, leading to the large intestine.
* **Villi:** Tiny, finger-like projections on the inner lining of the small intestine that increase its surface area for absorption.
* **Microvilli:** Minute projections on the surface of the villi that further enhance the absorption capacity of the small intestine.

**5. Large Intestine:**

* **Large Intestine:** A wider, shorter tube that absorbs water and forms feces.
* **Colon:** The major part of the large intestine.
* **Rectum:** The final, straight section of the large intestine, leading to the anus.
* **Feces:** Solid waste product formed in the large intestine and expelled from the body through the anus.

**6. Accessory Organs:**

* **Liver:** A large organ that produces bile, metabolizes fats, stores vitamins and minerals, and detoxifies the blood.
* **Bile:** A yellowish fluid produced by the liver and stored in the gallbladder. It helps in the digestion of fats.
* **Gallbladder:** A small, pear-shaped sac that stores bile until it is needed for digestion.
* **Pancreas:** A gland that produces enzymes that break down carbohydrates, fats, and proteins, as well as hormones that regulate blood sugar levels.

**7. Digestive Processes and Functions:**

* **Digestion:** The process of breaking down food into smaller molecules that can be absorbed by the body.
* **Absorption:** The process of nutrients passing through the walls of the small intestine into the bloodstream.
* **Metabolism:** The set of chemical reactions that occur in the body to convert food into energy and building blocks for cells and tissues.

These are just a few of the many terms related to the digestive system. The digestive system is a highly complex and interconnected network of organs and processes, and its efficient functioning is crucial for maintaining overall health and well-being. Disorders of the digestive system can lead to a range of symptoms and health issues, making it important to maintain a healthy diet and lifestyle to promote optimal digestive health.
AN INTRODUCTION TO THE AMERICAN LEGAL SYSTEM

The American legal system is a complex and multifaceted system that governs a wide range of laws and regulations. It is crucial to understand the different aspects of the legal system in order to navigate it effectively, whether you are a legal professional or an ordinary citizen. In this guide, we will provide an overview of the American legal system and some practice questions to help you understand the key concepts.

Overview of the American Legal System

The American legal system is based on the principle of federalism, which means that the federal government shares power with individual state governments. This means that laws can vary from state to state, which can sometimes lead to confusion and inconsistency.

The Constitution

The Constitution is the supreme law of the land in the United States. It outlines the structure of the federal government and provides rights to individual citizens. The Constitution is made up of seven articles and 27 amendments.

The Legislative Branch

The legislative branch is responsible for creating the laws that govern the country. It is made up of two parts: the Senate and the House of Representatives. The Senate has 100 members, two from each state, and the House of Representatives has 435 members, with the number of representatives from each state determined by its population.

The Executive Branch

The executive branch is responsible for enforcing the laws that the legislative branch creates. It is headed by the President of the United States and also includes the Vice President, the Cabinet, and various government agencies.

The Judicial Branch

The judicial branch is responsible for interpreting the laws and deciding cases that arise from them. It is made up of a system of federal and state courts. At the federal level, the Supreme Court is the highest court in the land and has the final say in all legal matters.

Practice Questions

1. What is federalism and how does it impact the American legal system?
2. Name the three branches of government and briefly describe their roles.
3. What is the Supreme Court and what is its role in the American legal system?
4. What is the Constitution and why is it important to the legal system?
5. How are laws created in the American legal system?
Final Concept Paper
Q11: Q&As on Selection and Justification of Starting Materials for the Manufacture of Drug Substances
Focus on chemical entity drug substances
Dated 22 October 2014
Endorsed by the ICH Steering Committee on 10 November 2014

Type of Harmonisation Action Proposed

An Implementation Working Group (IWG) is proposed to prepare a Questions and Answers (Q&A) document for ICH's Development and Manufacture of Drug Substances (Q11) Guideline to provide clarification on what information about the selection and justification of starting materials should be provided in marketing authorisation applications and/or Master Files. The IWG will provide clarification of the existing principles and will not re-open ICH Q11. As appropriate, references will be made to existing ICH Guidelines, e.g., ICH Q7, ICH Q9, ICH Q10, ICH Q11 and ICH M7, to ensure continuity across all ICH Quality Guidelines. The focus of the Q&A document will be on chemical entity drug substances, as that is where most of the differences of opinion have been experienced.

Statement of the Perceived Problem

Evaluation of information related to the manufacturing process and controls for drug substances is an important part of marketing authorisations. Decisions made about the proposed starting material(s) determine what expectations apply to the Quality-related information for both pre-market assessment and post-market changes. The acceptability of the applicant's proposed starting material also has implications for Good Manufacturing Practices (GMPs), process validation requirements, and inspection-related activities (as outlined in ICH Q7). While it is recognised that ICH Q11 provided good scientific guidance when published in 2012, differences in the interpretation of that guidance are causing problems for industry and regulators.

Issues to be Resolved

Examples of issues that a Q&A document might help resolve include, but are not limited to, the following:

- Significant regional differences between regulatory authorities in terms of:
  o Which aspects contribute to the potential unsuitability of starting materials (e.g., number of distinct chemical steps separating starting material(s) from final drug substance, potentially mutagenic impurities, stereochemistry);
  o The amount of regulatory attention given to steps prior to the proposed starting material (e.g., how much of the synthesis of the proposed starting material should be disclosed as part of the justification for the starting material);
  o What information is necessary to support the justification of the starting material.
- Significant resources are frequently used to resolve differences of opinion (regulatory and industry);
- The information provided by industry can be inadequate for regulators to evaluate whether the proposed starting material, manufacturing process, and control strategy provide sufficient assurance of the quality of the drug substance (especially if the proposed starting material occurs late in the manufacturing process);
- Additional burden on industry associated with conservative approaches to defining starting material can include, for example:
  o Validating early steps before the proposed starting material;
  o Evaluating every step of the process for known and potential impurities with the same intensity as the final few steps;
  o Expecting steps prior to the proposed starting material to be manufactured under GMP conditions.

Background to the Proposal and Issues

Q11 provided guiding principles to be considered in the selection and justification of starting materials for the manufacture of drug substances. It has become apparent, based on public workshops, symposia, and industry experience with global submissions, that differences of opinion can arise between regulators and industry about how those principles should be applied in specific situations. While it is recognised that each dossier needs to be judged on its own merit, further clarification of the principles of Q11 through a Q&A document (including, perhaps, case studies) could help address differences in understanding and interpretation.

The Q&A should provide several benefits for industry, regulators, and patients:

- Improvement in global harmonisation regarding the selection and justification of starting materials used in the manufacture of drug substances for new and generic applications;
- Clarification regarding the relationship between the selection of appropriate starting material and GMP considerations, control strategy, length of synthetic process, and impact of manufacturing steps on drug substance quality. Clarification: ICH Q7 / GMP is not in scope and not for this IWG;
- Clarification on the type of information that industry should provide in submissions to justify starting material selection;
- Clarification of expectations for lifecycle management of starting materials.

Type of Expert Working Group and Resources

The proposed Q&A document will provide clarification to complement ICH Quality Guidelines for chemical entity drug substances. In general, biotechnological/biological drug substances will not be within scope; however, the Q&A may clarify special cases. The working group should include representatives from the ICH official members (EU, EFPIA, FDA, PhRMA, MHLW, JPMA, Health Canada and Swissmedic). One member can also be nominated by the WHO Observer, EDQM, WSMI, IGPA, and the API industry, as well as RHIs and DRAs/DoH (if requested).

The primary mechanism for advancing the work of the IWG will be through teleconferences. However, one face-to-face meeting of the IWG may be requested to meet the tight timeline proposed. Given the time zone challenges for scheduling within business hours, the complex nature of this topic, and the anticipated challenges in reaching harmonisation, it will be difficult to complete the Q&A document within the compressed timeline using only teleconferencing and email. A single face-to-face meeting at an ICH meeting would approximately double the amount of time available for discussions between the full IWG. Additionally, face-to-face discussions are more effective than teleconferences, especially for members who must participate using a second language.

Timing

- Approval of Topic/Rapporteur & IWG Defined: November 10, 2014
- First IWG Meeting (teleconference): November 2014
- Step 2a/b document to present to SC: November 2015
- Step 4 document sign-off: TBD
The problem is the heart of mathematics. (Halmos)

He who seeks for methods without having a definite problem in mind seeks for the most part in vain. (Hilbert)

The problem solver may do creative work even if he does not succeed in solving his own problem; his effort may lead him to means applicable to other problems. Then the problem solver may be creative indirectly, by leaving a good unsolved problem which eventually leads others to discovering fertile means. (Pólya)

One of the virtues of a good problem is that it generates other good problems. (Pólya)

Each problem that I solved became a rule which served afterwards to solve other problems. (Descartes)
ORIGINAL PAPER - PRODUCTION ENGINEERING

Effect of viscosity and interfacial tension of surfactant–polymer flooding on oil recovery in high-temperature and high-salinity reservoirs

Zhiwei Wu, Xiang'an Yue, Tao Cheng, Jie Yu, Heng Yang

Received: 22 April 2013 / Accepted: 28 August 2013 / Published online: 8 September 2013
© The Author(s) 2013. This article is published with open access.
J Petrol Explor Prod Technol (2014) 4:9-16, DOI 10.1007/s13202-013-0078-6

Abstract: This study aims to analyze the influence of viscosity and interfacial tension (IFT) on oil displacement efficiency in heterogeneous reservoirs. Measurement of changes in polymer viscosity and IFT indicates that viscosity is influenced by brine salinity and shearing of pore media and that IFT is influenced by salinity and the interaction between the polymer and surfactant. High concentrations (2,500 and 3,000 mg/L) of polymer GLP-85 are utilized to reduce the effect of salinity and maintain high viscosity (24 mPa·s) in formation water. After shearing of pore media, polymer viscosity is still high (17 mPa·s). The same polymer viscosity (17 mPa·s) is utilized to displace oil, whose viscosity is 68 mPa·s, at high temperature and high pressure. The IFTs between surfactant DWS of 0.2% in the reservoir water of different salinities and a crude oil droplet are all below 10⁻² mN/m, with only a slight difference. Surfactant DWS exhibits good salt tolerance. In the surfactant–polymer (SP) system, the polymer solution prolongs the time to reach ultra-low IFT. However, the surfactant has only a slight effect on the viscosity of the SP system. SP slugs are injected after waterflooding in the heterogeneous core flooding experiments. Recovery is improved by 4.93-21.02% of the original oil in place. Furthermore, the core flooding experiments show that the role of lowering the mobility ratio is more significant than decreasing the IFT of the displacing agent; both of them must be optimized by considering the injectivity of the polymer molecules, emulsification of oil, and the economic cost. This study provides technical support in selecting and optimizing SP systems for chemical flooding.

Keywords: Chemical flooding; Viscosity; Interfacial tension; Oil displacement efficiency; Salinity

Abbreviations
IFT: Interfacial tension, mN/m
GLP-85: The polymer, a modified polyacrylamide, whose relative molecular mass is 1.75×10⁷
OOIP: Original oil in place
EOR: Enhanced oil recovery
SP: Surfactant–polymer
ASP: Alkali–surfactant–polymer
DWS: The surfactant, an anionic sulfonate, whose average relative molecular weight is 560 to 600
PV: Injection pore volume
CMC: Critical micelle concentration

This project (2011ZX05009-004) was supported by the National Natural Science Foundation of China.
Z. Wu (corresponding author), X. Yue, J. Yu, H. Yang: MOE Key Laboratory of Petroleum Engineering and Petroleum Engineering Faculty, China University of Petroleum, Beijing 102249, China. e-mail: wuzhiwei1987@
T. Cheng: Shell China Exploration and Production Co. Ltd, Beijing 100000, China.

Introduction

Polymer flooding has been employed successfully in Daqing Oilfield in China for decades; it contributed to the oil recovery of more than 10% of original oil in place (OOIP) after water flooding (Wang et al. 2009). Alkaline–surfactant–polymer (ASP) flooding can effectively reduce residual oil saturation by reducing interfacial tension (IFT) and the mobility ratio between the water phase and oil phase (Clark et al. 1988; Meyers et al. 1992; Vargo et al. 1999). Alkali is added in ASP flooding to decrease the quantity of the surfactant through competitive adsorption with the surfactant and reaction with petroleum acids in crude oil to generate a new surfactant (Pope 2007; Rivas et al. 1997). However, the use of alkali has introduced problems in the injection of the ASP solution. These problems include the deposition of alkali scales in the reservoir and bottom hole (Hou et al. 2005; Bataweel and Nasr-El-Din 2011; Jing et al. 2013), difficulty of treating the produced water (Deng et al. 2002), and reduction of the viscosity of the combined ASP slug (Wang et al. 2006; Nasr-El-Din et al. 1992). Many methods were introduced to solve these problems. Elraies (2012) proposed a new polymeric surfactant and conducted a series of experiments to evaluate this surfactant in the absence and presence of alkali. Some studies (Maolei and Yunhong 2012; Flaaten et al. 2008; Berger and Lee 2006) replaced strong alkalis with weak alkalis, such as sodium carbonate, sodium metaborate, and organic alkaline, to reduce their effect on the viscosity of the ASP slug. Alkali-free SP flooding avoids the drawbacks associated with alkali. Surfactants with concentrations higher than the critical micelle concentration (CMC) can achieve ultra-low IFT. However, such surfactants are expensive. The use of a hydrophilic surfactant mixed with a relatively lipophilic surfactant or a new surfactant was also investigated (Rosen et al. 2005; Aoudia et al. 2006; Cui et al. 2012). However, studies on SP flooding have focused only on the screening and evaluation of the polymer and surfactant and their interaction. Reduction in mobility ratio and IFT is influenced by reservoir brine salinity, reservoir temperature, concentration of chemical ingredients and oil components, and other factors (Gaonkar 1992; Ferdous et al. 2012; Liu et al. 2008; Gong et al. 2009; Cao et al. 2012; Zhang et al. 2012). Displacement performance is affected by the interaction of the physical properties of the reservoir and those of the fluid. The primary influencing factors must be identified. SP flooding can enhance recovery because of its capability to control viscous fingering and reduce IFT. In formulas involving the capillary number, ultra-low IFT between the binary system and the oil drop in a homogeneous core yields the lowest residual oil saturation and the highest oil recovery. In a heterogeneous core with high permeability, sweep efficiency has a larger influence on oil recovery than displacement efficiency. The highest oil recovery can be achieved under optimum IFT and not under the lowest IFT of the binary system. However, this concept (Wang et al. 2010) is based on a light oil reservoir with high permeability and low temperature. Dagang Oilfield is a reservoir with medium-low permeability characterized by high temperature, significant heterogeneity, and high brine salinity. These rough conditions bring about a significant challenge in SP flooding and demand different IFTs and viscosities of the SP system.

Based on the reservoir condition of Dagang Oilfield, static experiments were conducted to study the influence of loss parameters of viscosity and IFT on the SP system. Combined with core flooding, the respective effects of viscosity and IFT in the binary system on displacement efficiency were investigated. The results of this study provide insights into chemical screening, slug optimization, and injection methods in the field.
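The argument above leans on the capillary number, which the paper invokes but never writes out. For reference, a standard definition (our addition; the symbols are our own choice, not the paper's) is

    N_c = \frac{\mu \, v}{\sigma}

where μ is the viscosity of the displacing phase, v its Darcy velocity, and σ the oil-water IFT. Raising the displacing-phase viscosity or lowering the IFT both increase N_c, and residual oil saturation falls as N_c grows, which is why the paper treats the two levers as competing contributions to the same displacement effect.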
Equipment and materials

Equipment

The main equipment for the experimental flow is shown in Fig. 1. The heterogeneous core holder is 30 cm long. The core flooding model is 30 cm long, 4.5 cm wide, and 4.5 cm thick. Each layer of the model is 1.5 cm thick. Other equipment includes a RheoStress 6000 rheometer from HAAKE, a Brookfield DV-II+ viscosimeter, several high-pressure intermediate containers, an automatic measuring cylinder, a thermostat oven, a pressure collection system, and a constant flow pump. Water was pumped into high-pressure intermediate containers at a certain speed, and formation water and crude oil were forced into the core with a certain difference in pressure. A 30 cm long core holder was utilized to hold the core with an external pressure 1-2 MPa above the inlet pressure. The pressure was determined by a pressure collection system. An oven was utilized to maintain a stable experimental temperature. The product was gathered and measured by a product acquisition system.

Fig. 1 Main experimental setup

Materials

The brine (experimental water) was composed of simulated pure water, formation water, and simulated formation water. The ion concentrations of these components are listed in Table 1. A three-layer artificial heterogeneous sandstone core was created. The core has an average permeability ranging from 55.38×10⁻³ to 106.00×10⁻³ μm² and a porosity of 24.2%. All other parameters of the core are shown in Table 2.

Table 1 Ion concentration of simulated injection water and formation water (mg/L)

Water type | Na⁺+K⁺ | Ca²⁺ | Mg²⁺ | Cl⁻ | HCO₃⁻ | Total salinity
Simulated injection water | 38 | 18 | 55 | 53 | 285 | 452
Formation water | 9,423 | 404 | 30 | 9,485 | 623 | 20,001
Simulated formation water | 10,993 | 525 | 63 | 17,739 | 605 | 29,952

Table 2 Core parameters and oil displacement efficiency of chemical flooding

Core number | Porosity/% | Permeability/10⁻³ μm² | 0.3 PV chemical system | Reduction in water cut/% | Water drive/% OOIP | Increase in recovery/% OOIP | Total recovery/% OOIP
DG-F4 | 18.51 | 55.38 | 0.2% DWS | 4.41 | 47.76 | 4.93 | 52.69
DG-F15 | 27.42 | 67.77 | 2,000 mg/L GLP-85 + 0.08% DWS | 29.58 | 49.63 | 10.48 | 60.12
DG-F13 | 27.26 | 65.24 | 2,000 mg/L GLP-85 + 0.2% DWS | 19.67 | 48.04 | 18.69 | 66.72
DG-F14 | 26.78 | 81.07 | 2,000 mg/L GLP-85 + 0.3% DWS | 39.18 | 54.31 | 14.71 | 69.01
DG-F11 | 25.86 | 77.25 | 2,500 mg/L GLP-85 + 0.2% DWS | 58 | 49.81 | 21.02 | 70.83
DG-F16 | 27.82 | 96.11 | 2,500 mg/L GLP-85 + 0.2% DWS after shearing | 4.78 | 50.21 | 3.57 | 53.78

When the water cut was 98%, water flooding was ceased and the SP system was injected. The increase in recovery was observed in the stage of injecting the SP system and subsequent water flooding. Total recovery includes the recovery of water flooding and the increase in recovery.

Modified polyacrylamide GLP-85 was utilized as the polymer. This polymer, whose relative molecular mass is 1.75×10⁷, has a high tolerance for salinity. The viscosity of the polymer was measured with the HAAKE Rotational Rheometer-6000 at 78 °C. The main active material of surfactant DWS is an anionic sulfonate component, of which 50 wt% is active content, 16.8 wt% is unsulfonated oil, 31.2 wt% is volatile content, and 2.0 wt% is inorganic salt. The average relative molecular weight ranges from 560 to 600. The polymer (2,000 and 2,500 mg/L) and the surfactant (0.08-0.3 wt%) were mixed with formation water to form the SP system (binary system). Ground dehydrated, degassed oil and kerosene were mixed at a volume ratio of 5:1 to maintain consistent viscosity between the simulated oil and the crude oil in the reservoir. The viscosity of the oil is 68 mPa·s at 78 °C. A constant reservoir temperature of 78 °C was maintained throughout the experiment. Table 3 shows the reservoir condition and the basic characteristics of the pore fluid.

Table 3 Reservoir condition and crude oil properties

Permeability/10⁻³ μm² | Porosity/% | Variation coefficient of permeability | Reservoir temperature/°C
55.38-106.00 | 24.2 | 0.6 | 78

Reservoir depth (m) | Formation water type | Crude oil viscosity/(mPa·s) | Crude oil density/(g/cm³)
2,100-2,300 | MgCl₂ | 68 (at 78 °C) | 0.922-0.968 (on the ground)

Viscosity and IFT measurement

The viscosities of the SP solutions were determined at a shear rate of 7.34 s⁻¹ with the HAAKE Rotational Rheometer-6000 at 78 °C. The IFTs between the surfactant solutions and oil were measured at 78 °C with a spinning drop tension meter (Model Texas-500). The spinning oil droplet was stretched in the chemical agent solution until the oil/water phase reached equilibrium at a rotation speed of 6,000 r/min. The images were stored at regular intervals. In the images, the height of the spinning oil drop was measured to calculate the IFT when the ratio of the length to the height of the oil drop was more than 4. However, both length and height should be measured when the ratio of the length to the height of the oil drop is between 1 and 4. The IFTs of the different concentrations of the surfactant were obtained with the abovementioned surfactants or their mixtures with a polymer.
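The spinning-drop readings described above are conventionally converted to IFT with Vonnegut's approximation, which applies exactly in the regime the authors use (drop length at least four times its height). The paper does not give the formula, so the following is a minimal sketch from the standard relation; the density difference and drop size in the example call are assumed values, not data from the paper.

    import math

    def spinning_drop_ift(drop_height_m, delta_rho_kg_m3, rpm):
        """Vonnegut's approximation for spinning-drop tensiometry.

        sigma = delta_rho * omega^2 * r^3 / 4, valid when the drop length
        is at least ~4x its height (the criterion quoted in the text).
        Returns the IFT in mN/m.
        """
        omega = 2.0 * math.pi * rpm / 60.0   # angular velocity, rad/s
        radius = drop_height_m / 2.0         # radius of the cylindrical drop, m
        sigma = delta_rho_kg_m3 * omega ** 2 * radius ** 3 / 4.0  # N/m
        return sigma * 1000.0                # convert N/m to mN/m

    # At the paper's rotation speed of 6,000 r/min, an assumed density
    # difference of 80 kg/m^3 and a 0.2 mm drop height give ~8e-3 mN/m,
    # i.e. the ultra-low range reported for 0.2% DWS.
    print(spinning_drop_ift(2e-4, 80.0, 6000.0))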
Core flooding experiments

1. The heterogeneous core was vacuumized and saturated with formation water. Pore volume was then measured.
2. The model was saturated with crude oil at an injection rate of 0.2 mL/min. Original oil saturation and irreducible water saturation were then calculated.
3. Formation water was injected at a rate of 1.2 mL/min until the water cut reached 98%. The produced oil and water and the pressure change at the inlet were monitored.
4. An SP system solution of 0.3 PV was injected at a rate of 1.2 mL/min. Waterflooding was performed at the same rate until the water cut reached 98%. Ultimate recovery was then calculated.

Results and discussion

Influencing factors of the binary system's performance

Polymer

The polymer solutions were generally fabricated with pure water in chemical flooding, thereby reducing the influence of salinity on the polymer mother solution. The solution was diluted with reservoir water to guarantee that the chemical system matched the formation water with high salinity. Polymer viscosity was measured in high salinity under constant temperature because salinity affects the viscosity and IFT of the binary system. The viscosity of the polymer solution must be sufficient to displace the crude oil with high viscosity. Therefore, a polymer solution with a high concentration was utilized. Polymer solutions of different concentrations were fabricated with formation brines of different salinities at 78 °C. The resulting viscosity changes are shown in Table 4.

Table 4 Viscosity change in polymer GLP-85 with different salinities of water (viscosity in mPa·s)

Polymer concentration (mg/L) | Simulated pure water (salinity 452 mg/L) | Formation water (salinity 20,000 mg/L) | Simulated formation water (salinity 29,952 mg/L)
4,000 | - | - | 73
3,500 | - | - | 50
3,000 | 172 | 48 | 25
2,500 | 79 | 24 | 15
2,000 | 50 | 17 | 13
1,500 | 29 | 9 | 8

As shown in Table 4, the viscosity of the polymer solution decreased sharply with the increase in salinity at a fixed polymer concentration. When the polymer concentration was 1,500 mg/L, the viscosity of the polymer solution decreased from 29 to 8 mPa·s, and the viscosity retention rate was 27.59%. However, when the polymer concentration was 3,000 mg/L, the viscosity of the polymer solution decreased from 172 to 25 mPa·s, and the viscosity retention rate was 14.53%. The viscosity retention rate decreased and the viscosity loss of the polymer solution increased with the increase in polymer concentration. With the increase in salinity, the polymer molecular chain became compressed so that it could not interweave with other polymer molecular chains. In addition, small molecular groups were formed. The viscous force among the polymer molecules was reduced after the groups formed, resulting in the loss of viscosity of the polymer solution. However, viscosity increased in each type of formation water with the increase in polymer concentration. A high concentration of the polymer solution was necessary to maintain high viscosity. Thus, 2,500 mg/L was selected based on the polymer's injectivity, economic cost, and the demand for viscosity.

The polymer solution had to flow through pumps, pipes, valves, perforated holes, and so on at a high speed before it was injected. To simulate the effect of mechanical shearing on viscosity, 2,500 and 3,000 mg/L polymer solutions were dissolved with formation water and simulated formation water and sheared in a Waring device at a speed of 3,000 r/min for 20 s. The viscosities were measured before and after shearing at 78 °C. The results are shown in Table 5.

Table 5 Viscosity change in polymer GLP-85 before and after shearing (viscosity in mPa·s)

Polymer concentration/(mg/L) | Formation water (20,000 mg/L): before shearing | after shearing | Simulated formation water (29,952 mg/L): before shearing | after shearing
2,500 | 24 | 17 | 15 | 13
3,000 | 48 | 32 | 25 | 19

As shown in Table 5, the viscosity retention rates at 2,500 and 3,000 mg/L of the polymer solution were 70.83 and 66.67% in formation water, respectively, and 86.67 and 76% in simulated formation water, respectively, after shearing. Therefore, this type of polymer solution dissolved in brine of high salinity has a strong ability to resist shearing. This finding indicates that the solution can be applied in the reservoir.

Surfactant

The mixture of surfactant and polymer solution injected into the formation is affected by many factors, such as temperature, salinity, shearing, retention, adsorption, and dilution by formation brine. Therefore, surfactant DWS was utilized to create solutions of different concentrations at reservoir temperature. The IFTs were measured, and the results are shown in Fig. 2.

As shown in Fig. 2, the IFT between the oil droplet and the solution decreased gradually as the surfactant concentration increased from 0.05 to 0.4%. IFT reached an ultra-low level when the concentration was 0.3%. With the increase in surfactant concentration, the surfactant molecules were constantly adsorbed onto the oil/water interface, with the hydrophilic group in the water phase and the lipophilic group in the oil phase. When the concentration was more than 0.3%, the adsorption on the oil/water interface reached saturation, and the IFT remained stable. Thus, the concentration of 0.3% was the CMC. A surfactant concentration of 0.2-0.3% should be selected because of its economic cost and the loss in the pore media.

The process of dissolving the surfactant with pure water and diluting it with formation water would seriously influence the activity of the surfactant. Given that the salinity of the injected water was lower than that of the original formation water, the salinity of the areas washed by long-term waterflooding was reduced, whereas that of the unwashed areas remained high. In SP system flooding, the mobility control of the polymer solution causes the chemical system to flow toward the area unwashed by water. As a result, the chemicals are placed in contact with the original formation water and are affected by salinity. Therefore, studying the influence of salinity on IFT is essential. Figure 3 shows the influence of different salinities on the IFT between the DWS of 0.2% and the crude oil droplet.

With increasing salinity, the IFTs of all types of brine can reach an ultra-low level. However, the prolonged time of reaching ultra-low IFT would affect the timing of chemical flooding in the marine oilfield. At constant time, the increase in salinity can increase IFT. The reason is that the surfactant molecules adsorbed on the oil/water interface desorbed constantly and entered into the oil phase with the increase in salinity, especially from several hundred to 30,000 mg/L. However, ultra-low IFTs were reached at all salinities, indicating that 0.2% surfactant can adapt to the reservoir with different salinities.

The compatibility between the polymer and surfactant in the SP system posed an interaction problem. We analyzed the interaction by studying how the addition of surfactant DWS influences the viscosity of the polymer and how the addition of a polymer solution affects the IFT of the surfactant. Table 6 shows the effect of the addition of surfactant on polymer viscosity. Table 7 shows the effect of the addition of polymer solution on the IFT of the surfactant.

Table 6 Changes in SP system viscosity with surfactant concentration (GLP-85 + DWS, viscosity in mPa·s)

Polymer concentration/(mg/L) | 0% DWS | 0.1% DWS | 0.2% DWS | 0.3% DWS
2,000 | 17 | 16.5 | 15 | 14
2,500 | 24 | 23 | 22.5 | 21

Table 7 IFT of the SP system changes with polymer concentration (surfactant concentration 0.2%)

System | t/min | σ/(10⁻³ mN/m)
DWS alone | 3 | 5.31
2,000 mg/L GLP-85 + DWS | 6.5 | 9.23
2,500 mg/L GLP-85 + DWS | 12 | 12.15

Tables 6 and 7 show that ultra-low IFT can still be reached by 0.2% DWS surfactant as the concentration of the polymer solution increases. However, a longer time was required. The velocity of the surfactant molecules toward the oil/water interface decreased because of the long organic chains of the polymer molecules. Therefore, more migration time was required. The SP system can reach ultra-low IFT with longer interfacial contact time, which matches SP system flooding. The flowing velocity of the SP system in the reservoir was much slower because the mobility of the SP system was smaller than that of a single surfactant solution. Therefore, contact time with crude oil was longer, thereby reducing oil-water IFT and enhancing oil displacement efficiency. However, the surfactant did not significantly affect the viscosity of the SP system; it merely caused dilution. Therefore, the SP solution has the same tackifying property as that of the polymer solution at the same concentration. It also allowed for the reduction of IFT with prolonged contact time. Surfactant concentration should be increased and polymer concentration decreased to reduce IFT instantly and achieve instant emulsification, given that a certain relationship exists between emulsification and IFT reduction. However, such procedures are expensive and lead to less tackifying activity and a poor ability of the SP system to control the mobility ratio. The surfactant and polymer can be mixed to prolong contact time with the crude oil; such would be a significant contribution to the study of injection patterns in chemical flooding after waterflooding.
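The displacement results that follow are all reported as percentages of OOIP, split into a waterflood stage and an SP stage. The bookkeeping is simple enough to state as a few lines of code (our illustration, not the authors' software; the volumes are invented, chosen so that the percentages reproduce core DG-F11 in Table 2):

    def recovery_percent(oil_produced_ml, ooip_ml):
        """Cumulative recovery as a percentage of original oil in place."""
        return 100.0 * oil_produced_ml / ooip_ml

    ooip = 50.0                                        # assumed OOIP, mL
    after_waterflood = recovery_percent(24.9, ooip)    # ~49.8% OOIP
    after_sp_slug = recovery_percent(35.4, ooip)       # ~70.8% OOIP
    incremental = after_sp_slug - after_waterflood     # ~21.0% OOIP from the SP slug
    print(after_waterflood, incremental, after_sp_slug)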
Effect of viscosity on oil displacement

Table 2 shows the results of core displacement with various chemical systems. Comparing the oil displacement results of DG-F4, DG-F13, and DG-F11, the viscosity of the SP system increased gradually and the recovery of flooding was enhanced, given similar oil recovery after waterflooding, at a certain surfactant concentration and with increasing polymer concentration. Based on the changes in the pressure curve and water cut curve, the increase in the system's viscosity increased the flow resistance of the water phase in the high-permeability layer. As a result, the pressure on the entry side increased gradually. The SP system flowed into the middle- and low-permeability layers where residual oil was abundant, and the water cut significantly decreased. When the system viscosity increased from 1 to 15 mPa·s, oil recovery increased by 13.76%. When the viscosity increased from 15 to 22.5 mPa·s, enhanced recovery increased only by 2.33%. However, the pressure gradient on the entry side increased from 11.05 to 15.23 MPa/m, indicating that viscosity contributed 73.62% to the increase in oil recovery and that this proportion declined with the increase in viscosity. Thus, oil recovery did not keep increasing as viscosity increased (Fig. 4).

The SP system (2,500 mg/L GLP-85 + 0.2% DWS) was sheared in the Waring device and then utilized to displace residual oil in heterogeneous cores. Figure 5 shows the dynamic change in recovery before and after shearing. The displacement results of cores DG-F11 and DG-F16 showed that viscosity changed greatly after shearing and that recovery correspondingly declined sharply. Recovery after shearing was 53.78% OOIP and only increased by 3.57% OOIP after waterflooding. The water cut was reduced only by 4.78%. However, recovery before shearing was 70.83% OOIP and increased by 21.02% OOIP after waterflooding. The water cut was reduced by 58% before shearing. The role of lowering the mobility ratio was obvious in the heterogeneous cores.
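The mobility ratio invoked throughout this section can be made concrete with the standard end-point definition. The sketch below is ours, not the authors'; the relative-permeability end points are assumed round numbers, while the viscosities come from the text (68 mPa·s oil, roughly 1 mPa·s water, 17 mPa·s sheared SP slug):

    def mobility_ratio(krw, mu_displacing_mpas, kro, mu_oil_mpas):
        """End-point mobility ratio M = (krw/mu_displacing) / (kro/mu_oil).

        M >> 1 means the displacing phase fingers through the oil;
        polymer lowers M by raising the displacing-phase viscosity.
        """
        return (krw / mu_displacing_mpas) / (kro / mu_oil_mpas)

    # Assumed end points krw = 0.3 and kro = 0.8:
    print(mobility_ratio(0.3, 1.0, 0.8, 68.0))    # waterflood: M ~ 25, unfavorable
    print(mobility_ratio(0.3, 17.0, 0.8, 68.0))   # SP slug: M ~ 1.5, near unity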
The IFTs with the brine at all salinity levels could be ultra low,indicating that salinity only had a slight effect on the activity of0.2%DWS.The time of reaching ultra-low IFT between the oil droplet and SP system was longer than that of a single surfactant because of the polymer’s existence. The injection pattern of the surfactant and polymer mixture was used to maintain low IFT in the binary system.In the core whose permeability contrast was4and average permeability ranged from55.38910-3to106.009 10-3l m2,viscosity and IFT contributed approximately70 and30%to the increase in oil recovery,respectively.In the heterogeneous,heavy oil reservoirs whose permeability contrast was4and temperature was78°C,increasingdisplacement resistance in the high-permeable layers and displacing the residual oil caused by microheterogeneity are important to improve oil recovery.When screening the properties of agents in chemicalflooding,viscoelasticity is thefirst thing that should be considered.The second is how to reach ultra-low IFT between oil and water.Viscosity and IFT must be optimized to maximize oil recovery in the heterogeneous cores on the condition that the injectivity and emulsification of the SP system are considered.When viscosity is high,injectivity becomes a problem.When IFT reaches an ultra-low level,oil-in-water emulsion remains stable,and the coalescence of emulsified oil droplet would not easily occur.Finally,an oil block would be formed. Acknowledgments The authors would like to express their appre-ciation for thefinancial support received from National Natural Sci-ence Foundation of China(2011ZX05009-004)and China University of Petroleum,for permission to publish this paper.Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use,dis-tribution,and reproduction in any medium,provided the original author(s)and the source are credited.ReferencesAoudia M,Al-Shibli MN,Al-Kasimi LH,Al-Maamari R,Al-Bemani A(2006)Novel surfactants for ultralow interfacial tension in a wide range of surfactant concentration and temperature.J Sur-factants Deterg9:287–293Bataweel MA,Nasr-El-Din HA(2011)Minimizing scale precipitation in carbonate cores caused by alkalis in ASPflooding in high salinity/high temperature applications,SPE14151presented at the SPE International Symposium on Oilfield Chemistry held in The Woodlands.Texas,USA,pp11–13Berger PD,Lee CH(2006)Improve ASP process using organic alkali.SPE99581presented at the SPE/DOE symposium on improved oil recovery,Tulsa,Oklahoma22–26April2006Cao Y,Zhao R,Zhang L,Xu Z,Jin Z,Luo L,Zhang L,Zhao S(2012) Effect of electrolyte and temperature on interfacial tensions of alkylbenzene sulfonate solutions.Energy Fuels26:2175–2181 Clark SR,Pitts MJ,Smith SM(1988)Design and application of an alkaline-surfactant-polymer recovery system for the West Kiehl field.Paper SPE17538presented at the SPE rocky mountain regional meeting,Casper,WYCui Z,DU X,Pei X,Jiang J,Wang F(2012)Synthesis of didodecylmethylcarboxyl betaine and its application in surfac-tant–polymerflooding.J Surfactants Deterg15:685–694Deng S,Bai R,Chen JP,Yu G,Jiang Z,Zhou F(2002)Effects of alkaline/surfactant/polymer on stability of oil droplets in produced water from ASPflooding.Colloids Surf A Physico-chem Eng Asp211:275–284Elraies KA(2012)An experimental study on ASP process using a new polymeric surfactant.J Petrol Explor Prod Technol 2:223–227Ferdous S,Ioannidis MA,Henneke DE(2012)Effects of temperature, pH,and ionic strength on the 
adsorption of nanoparticles at liquid–liquid interfaces.J Nanopart Res14:850Flaaten AK,Nguyen QP,Pope GA,Zhang J(2008)A systematic laboratory approach to low-cost,high-performance chemical flooding.SPE113469presented at the SPE/DOE Improved Oil Recovery Symposium.Tulsa,Oklahoma,19-23April2008 Gaonkar AG(1992)Effects of salt,temperature,and surfactants on the interfacial tension behavior of a vegetable oil/water system.J Colloid Interface Sci149(1):256–260Gong H,Guiying X,Zhu Y,Wang Y,Dan W,Niu M,Wang L,Guo H,Wang H(2009)Influencing factors on the properties of complex systems consisting of hydrolyzed polyacrylamide/triton x-100/cetyl trimethylammonium bromide:viscosity and dynamic interfacial tension studies.Energy Fuels23:300–305Hou JR,Liu ZC,Zhang SF,Yue XA,Yang JZ(2005)The role of viscoelasticity of alkali/surfactant/polymer solutions in enhanced oil recovery.J Petrol Sci Eng47:219–235Jing G,Tang S,Li X,Yu T,Gai Y(2013)The scaling conditions for ASPflooding oilfield in the fourth Plant of Daqing oilfield.J Petrol Explor Prod Technol3:175–178Liu L,Hou J,Yue XA,Zhao J(2008)Effect of active species in crude oil on the interfacial tension behavior of alkali/synthetic surfactants/crude oil systems.Petrol Sci5:353–358Maolei C,Yunhong D(2012)Study of interfacial tension between a weakly alkaline three-componentflooding system and crude oil, and evaluation of oil displacement efficiency.Chem Technol Fuels Oils48(1):33–38Meyers JJ,Pitss MJ,Wyatt K(1992)Alkaline-Surfactant-Polymer flood of the west kiehl,minnelusa unit.SPE24144presented at the SPE/DOE8th symposium on enhanced oil recovery,Tulsa, Oklahoma,April22–24Nasr-El-Din HA,Hawkins BF,Green KA(1992)Recovery of residual oil using the alkaline/surfactant/polymer process:effect of alkali concentration.J Petrol Sci Eng6:381Pope GA(2007)Overview of chemical EOR.Presentation:casper eor workshopRivas H,Gutierrez X,Zirrit JL,Anto0n,RE,Salager JL(1997) Industrial applications of microemulsions.305–329Rosen MJ,Wang H,Shen P,Zhu Y(2005)Ultralow interfacial tension for enhanced oil recovery at very low surfactant ngmuir21:3749–3756Vargo J,Turner J,Vergnani B,Pitts M,Wyatt K,Surkalo H,Patterson D(1999)Alkaline-Surfactant-Polymerflooding of the cambridge minnelusafield.SPE55633presented at SPE Rocky Mountain Regional Meeting held in Gillette,Wyoming,15–18May1999 Wang D,Han P,Shao Z,Chen J,Serigh RS(2006)Sweep improvement options for Daqing oilfield.SPE99441presented at SPE/DOE symposium on improved oil recovery,Tulsa, Oklahoma22–26April2006Wang D,Dong H,Lv C,Fu X,Nie J(2009)Review of practical experience by polymerflooding at Daqing.SPE Reserv Eval Eng 12(3):470–476Wang Y,Zhao F,Bai B,Zhang J,Xiang W,Li X,Zhou W(2010) Optimized surfactantfit and polymer viscosity for surfactant-polymerflooding in heterogeneous formations.SPE127391 presented at SPE improved oil recovery symposium held in Tulsa,Oklahoma,USA,24–28April2010Zhang H,Dong M,Zhao S(2012)Experimental study of the interaction between NaOH,surfactant,and polymer in reducing court heavy oil/brine interfacial tension.Energy Fuels26:3644–3650。
The evolution of emergency management and the advancement towards a profession in the United States and Florida

Jennifer Wilson a,*, Arthur Oyola-Yemaiel b

a Florida Division of Emergency Management, 2555 Shumard Oak Blvd., Tallahassee, FL 32399, USA
b Department of Sociology and Anthropology, Florida International University, Miami, FL 33199, USA

Safety Science 39 (2001) 117-131

Abstract

The occupation of "emergency management" is the official organizational structure established by governments (federal, state, county, and city) to manage the social repercussions of natural and/or technological emergencies. This field has evolved quite extensively from its Cold War-civil defense origins and some have stated that the field is "professionalizing". We examine the process of professionalization in the United States and Florida. Our research explores how emergency management organizations are modifying in order to develop the capacity to prepare for, respond to, recover from, and mitigate against disaster events more effectively. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Emergency management; Professionalization; Accreditation; Certification; Florida

* Corresponding author. E-mail addresses: jennifer.wilson@dca.state.fl.us (J. Wilson), omaielson@ (A. Oyola-Yemaiel).

1. Introduction

Throughout time societies have dealt with natural and/or man-made disastrous events and calamities. For example, the Greco-Roman cities of Pompeii, Herculaneum and Stabiae were covered with and instantly preserved by volcanic ash up to 65 feet deep that resulted from the eruption of Mount Vesuvius in AD 79 (Maiuri, 1970; Parslow, 1995). The great fire of London in 1666 destroyed a large part of the city over 4 days, including most of the civic buildings, a cathedral, 87 parish churches and about 13,000 homes (Bell, 1971). The Indonesian island of Krakatoa was an underwater volcano that erupted and exploded in 1883 to completely destroy the island, cover 300,000 square miles with ash and pumice, and cause great tsunamis, which took 36,000 lives (Thornton, 1996). These and other events caused natural disruption as well as social and human losses. Doubtlessly, few would argue that these disruptions are disasters.

2. Defining disaster

But what exactly is a disaster? Is disaster the disruption of natural ecosystems? Is it the destruction of property? Is it the loss of human life? Is it the loss of vital human resources? Or a combination of these things? Furthermore, is disaster a societal phenomenon or is it a local phenomenon? Although there is no argument that catastrophic events occur to individuals and to small social groups such as the family, we classify disaster in this study as a catastrophic event that affects a large portion of a community. We concur with Fritz (1961), who defines disaster as:

an event, concentrated in time and space, in which a society, or a relatively self-sufficient subdivision of a society, undergoes severe danger and incurs such losses to its members and physical appurtenances that the social structure is disrupted and the fulfillment of all or some of the essential functions of the society is prevented.

Fritz's (1961) definition encompasses the human responses and adaptations to events that have been defined as disasters. It also illustrates how this behavior differs from that occurring in the crises of everyday life and in ordinary accidents such as traffic accidents. This contrasts disasters with somewhat more routine public issues like crime, unemployment, and poverty. Although a consensus on the proper academic definition of "disaster" has not been reached, it is certain that disasters affect whole communities in many significant ways. Indeed, the potential for highly destructive events is increasing as the world's population increases, as certain potentially dangerous technologies become more widespread, and as populations become more concentrated in urban areas (Hoetmer, 1991, p. xxii).

3. Defining emergency management

In view of the social calamity that past disasters have produced, and in order to reduce catastrophic events, societies in the modern era have established structures to attempt to "manage" natural and technological hazards and their impacts on life and property. The delineation of "emergency management" as a separate and specific body of knowledge is a fairly recent innovation, codifying the discipline of managing disasters and emergencies (Marshall, 1987). Emergency management is the discipline and profession of applying science, technology, planning, and management to deal with extreme events that can injure or kill large numbers of people, do extensive damage to property, and disrupt community life (Hoetmer, 1991, p. xvii). In other words, emergency management is the management of risk so that societies can live with environmental and technological hazards and deal with the disasters that they cause (Waugh, 2000).

In the United States, emergency management generally has been conceptualized as a problem and legal responsibility of government, at the local, state, and federal levels (Lindell and Perry, 1992). Indeed, elected officials have an inherent or a statutory duty to protect lives and property with co-ordinated response to disasters (Daines, 1991, p. 161). Although, historically, emergency management was considered only a function of law enforcement and fire departments (Petak, 1985), today the function of emergency management requires a permanent, full-time program to co-ordinate a variety of resources, techniques, and skills to reduce the probability and impact of extreme events and, should a disaster occur, to bring about a quick restoration of routine.

Throughout this century, each United States presidential administration has been concerned for its citizens' safety from calamity, whether natural or man-made. Since Franklin D. Roosevelt's first administration, permanent government agencies concerned with domestic and defense emergencies have been in place (Lindell and Perry, 1992). For example, the drought and wind erosion that caused the Dust Bowl in the 1930s was gradually halted with federal aid, through which windbreaks were planted and grassland was restored from overgrazing (Hurt, 1981). The Flood Control Act of 1936 provided for a wide variety of projects, many of which were completed under the authority granted to the US Army Corps of Engineers, under which hundreds of dams, dikes, and levees were erected to reduce vulnerability to floods (Drabek, 1991, p. 7).

Thus, because the land was considered tamed and vulnerability to natural hazards seemingly reduced, by the 1950s the primary threat of disaster to the general population was considered by the federal government to be from an outside source in the manifestation of nuclear attack. Growing concerns about potential uses of nuclear weapons resulted in the creation of an independent federal agency through the enactment of the Federal Civil Defense Act of 1950 (Drabek, 1987). Throughout the 1950s and 1960s especially, and until the conclusion of the Cold War, this country's federal emergency management program focused upon civil defense.

But continued population growth in high-risk areas, such as barrier islands, along fault lines and in flood plains, fueled growing vulnerability to natural disasters in the United States. With the diminishing threat of nuclear attack at the closing stages of the Cold War and the ever-increasing impacts of major natural hazards such as hurricanes, earthquakes, and floods, federal emergency management began to split its focus between both types of threat: war and natural disasters. By the early 1970s, specific emphasis was placed on peacetime as well as wartime emergencies (Drabek, 1987).

The Carter administration created the Federal Emergency Management Agency (FEMA) in 1979 in response to administrative and structural difficulties, as well as to concern that the scope of the functions performed as part of emergency management was too narrow and that too many resources were devoted to "after-the-fact" disaster response and too few to the issues of prevention and control (Lindell and Perry, 1992). This reorganization pulled together programs and personnel scattered throughout the federal bureaucracy and gave increasing legitimacy to comprehensive emergency management (Drabek, 1991, pp. 17-18).

In line with the comprehensive emergency management concept, FEMA has emphasized an all-hazards approach since the mid-1980s. The basis of this approach acknowledges that for many disaster management needs and problems, the consequences are often the same regardless of the particular type of disaster (natural or technological), i.e. displaced people need to be fed and sheltered, damaged infrastructure needs to be repaired, etc. (Kreps, 1991, p. 37). According to FEMA, if one looks across the range of threats we face, from fire, to hurricanes, to tornados, to earthquakes, to war, one will find there are common preparedness measures that we deal with in trying to prepare for those threats. These common preparedness elements include evacuation, shelter, communications, direction and control, continuity of government, resource management, and law and order (FEMA, 1983). It is the establishment of common preparedness measures that then becomes a foundation for all threats, in addition to the unique preparedness aspects relevant to each individual threat (Thomas, 1982, p. 8, as cited in Vale, 1987, p. 84). Thus, many emergency management functions are appropriate to a range of hazards. Operationally, emergency management capabilities are now based on these functions, i.e. warning, shelter, public safety, evacuation, and so forth, that are required for all hazards (Hoetmer, 1991).

4. Emergency management levels

Since the comprehensive consolidation effort of federal emergency management programs into FEMA, there has been subsequent development of state and local emergency management programs along similar lines. While some emergency responsibilities and functions are common to all three levels of government, each also has its own unique responsibilities (Table 1).

The basic role of the state emergency management office is to support local government in all aspects of disaster mitigation, preparedness, response, and recovery (Durham and Suiter, 1991, p. 111). States directly engage in emergency management of hazards with scopes of impact that may encompass multiple localities. Some threats require states to co-ordinate the emergency management actions of local jurisdictions as well as commit their own resources (Lindell and Perry, 1992, p. 8). In addition, state government serves as the pivot in the intergovernmental system between the local and federal levels. The state emergency management office has the responsibility for administering federal funds (primarily from FEMA) to assist local government in developing and maintaining an all-hazard emergency management system. In its pivotal role, the state is in a position to determine the emergency management needs and capabilities of its political subdivisions and to channel state and federal resources to local government, including training and technical assistance, as well as operational support in an emergency (Durham and Suiter, 1991, p. 101).

Table 1
State agency emergency management role in support of local government (adapted from Drabek and Hoetmer, 1991, p. 109)

Emergency management role | Principal state agencies
Direction and control in emergency | State emergency management (through governor)
Warning and notification | State emergency management
Communications | State emergency management
Public safety | Public safety
Public information | Governor's office; State emergency management
Shelter and mass care | American Red Cross; Human services
Evacuation | State emergency management; National Guard
Law enforcement | Public safety; National Guard
Damage assessment:
  Public buildings | General services
  Electric power | Public service commission
  Unemployment | Employment security
  Housing | Human services
  Farms | Agriculture
  Water supply | Environmental protection
  Roads and bridges | Transportation
  State agency coordination for damage assessment | State emergency management
Hazardous materials:
  Identification and assessment | Environmental protection
  Emergency response | State emergency management
  Cleanup | Environmental protection
  Radiological monitoring | Environmental protection

In this capacity, the state office has a unique relationship with local government, governed by two related objectives: (1) to ensure that federal dollars are used in a manner consistent with federal policy, and (2) to provide direct support to local governments as they develop emergency management capability (Durham and Suiter, 1991, p. 107).

It has been argued that disasters, whether natural or technological, are local events. It is the local community that experiences the impact of disasters and it is incumbent on the locality to undertake some positive action (Lindell and Perry, 1992). Local government has traditionally had the first line of official public responsibility for first response to a disaster (Clary, 1985). Therefore, local governments have to develop and maintain a program of emergency management to meet their responsibilities to provide for the protection and safety of the public (McLoughlin, 1985). County and municipal offices tend to get much less exposure and have much less availability of resources compared with the federal and state levels. Although the locality is the component closest to the disaster, it is the one with the smallest resource base and with the least access to resources through its constituency. Financial resources and technical capacity can be provided by state and federal agencies to augment local capacities, but local officials typically are required to manage disasters until help arrives. What is done during those first hours or days may well determine the success or failure of the response and the costs of recovery (Waugh, 1994).

For state and local emergency managers, disaster response and mitigation responsibilities have increased dramatically over the past decade. There have been a number of catastrophic natural disasters since 1989 that have raised the consciousness of average citizens and public administrators. Hurricane Hugo in the Carolinas and Virgin Islands and the Loma Prieta (San Francisco Bay area) earthquake, both in 1989, Hurricane Andrew in 1992, plus the Midwest floods of 1993 were catastrophes that focused public attention not only on disaster, but on the adequacy of the response to each event and on the need for better disaster mitigation and recovery (Grant, 1996).

In spite of the fact that tremendous effort has been placed on reduction of risk from natural and technological hazards, it is evident that the overall social consequences of disasters have increased over the past few decades (Blaikie et al., 1994; Peacock et al., 1997; Mileti, 1999). There is now more pressure for organizations and civil society to cope with disasters. Consequently, major shifts in the practice of emergency management have taken place in the United States. Better emergency response systems and functional co-ordination between the federal, state, and local levels of government have evolved into a greater degree of expertise in the field of emergency management.

5. Professionalization of emergency management

Indeed, there has been acceptance of the notion among researchers and practitioners that the occupation of emergency management is professionalizing (e.g.
Drabek,1987,1989,1994;Drabek and Hoetmer,1991;Lindell and Perry,1992; Sylves and Waugh,1996).Drabek(1994)maintains that the entire nation has experienced a major redirection in disaster preparedness since the1960s that reflects the rapid emergence of a new profession.Increasingly,local government officials have recognized the need for improved co-ordination within the emergency response system.Increasingly this function has been explicitly assigned to an agency directed by a professional with specialized training and job title.For Drabek(1994),the new era of emergency management is indicated by four major areas of change:(1) developments in preparedness theory,(2)new training opportunities,(3)technolo-gical innovations,and(4)increased linkages between research and practitioner communities.But are these areas of change real indicators of occupational(emergency man-agement)professionalization?First,defining exactly what is a‘‘profession’’is not simple.The word‘‘profession’’can be used in quite different ways in everyday lan-guage(Selander,1990).Sociological theory identifies professions as occupations enjoying,or seeking to enjoy,a unique position in the labor force of industrial-ized countries(Collins,1979;Rothman,1987).In this context,professions areJ.Wilson,A.Oyola-Yemaiel/Safety Science39(2001)117–131123 occupations that have been able to establish exclusive jurisdiction over certain kinds of services and to negotiate freedom from external intervention and control over the conditions and content of their work(Freidson,1977).In other words,the core characteristics of a profession are autonomy and monopoly(Rothman,1987).Pro-fessionalization then is the movement of an occupation towards the acquisition of autonomy and monopoly.One substantive indicator that emergency management as afield is indeed pro-fessionalizing results from the institution of the Federal Response Plan in1992.The importance of the Federal Response Plan with its Emergency Support Functions to emergency management professionalization is that it implicitly lays the foundation for standardization of emergency management knowledge,skills,and abilities.A wide number of federal agencies and the American Red Cross must work together incorporating an assumption of co-ordination and communication.Thus,the Fed-eral Response Plan recognizes that many agencies and organizations are part of a disaster response—as opposed to onlyfirst responders such as police and firefighters—highlighting the need for more trained and educated specialists in a variety offields.According to Petak(1985,p.6),emergency managers must have the conceptual skill to understand:(1)the total system,(2)the uses to which the pro-ducts of the efforts of various professionals will be put,(3)the potential linkages between the activities of various professional specialists,and(4)the specifications for output formats and language which are compatible with the needs and under-standings of others within the total system.Professionalization occurs through sponsoring,development,and execution of training by organizations in order to certify individuals as professional emergency managers.The increasing availability of specialized degree programs in disaster management and the certification program for professionals in thefield are indica-tive of monopolization of emergency management.A system of professional licens-ing or certification ensures that practitioners can perform their duties with a certain degree of expertise(Barnhart,1997;Green,1999).Through 
certification,offered by the International Association of Emergency Managers(IAEM),thefield is able to ensure that future entrants have passed through an appropriate system of selection, training,and socialization,and turned out in a standardized professional mold (MacDonald,1995).The basis of emergency management certification is training.FEMA’s Emergency Management Institute(EMI)provides extensive training that is also available through states.Training is an attempt to build a specialized body of knowledge leading to authority of expertise(Haskell,1984;Beckman,1990).Emergency man-agers are endeavoring to control the educational input in order to develop,define, and monopolize professional knowledge and ensure that practitioners pass through an appropriate system of training and socialization(Larson,1978).Moreover,a national effort of state emergency management directors[National Emergency Management Association(NEMA)]with support from FEMA,is developing a process by which states can receive professional accreditation for their emergency management programs.Accreditation is basically a rigorous,compre-hensive evaluation process to assess an agency or a program against a set of standards124J.Wilson,A.Oyola-Yemaiel/Safety Science39(2001)117–131(National Committee for Quality Assurance,1999).Accreditation is a form of self-regulation,which implies less regulation from outsiders and thus greater autonomy for the emergency management practitioners.This process includes complying with a standard of performance where the state must meet certain requirements in the practice of emergency management such as debris management,sheltering and feeding,and damage assessment.NEMA’s(1998,p.29)report defines state emer-gency management accreditation in the following way.Emergency management accreditation is a voluntary national program of excellence dedicated to ensuring disaster-resistant communities through national standards,demonstrated emergency management capabilities and performance,partnerships,and continuous self-improvement.Emergency managers at the federal,state,and local levels of government have not always shared the same idea of what a‘‘professional emergency manager’’or even the‘‘emergency management profession’’entails.But this has been changing and may continue to change rapidly with increased co-ordination among all levels of government through the accreditation and certification processes.If so,the emer-gency management profession will have a greater degree of intergovernmental consistency.6.The process of emergency management professionalization:the case of Florida Florida is an important and interesting state within which to study the changing field of emergency management.First,Florida is the state geographically and his-torically most vulnerable to hurricanes.Between1900and1997Florida has been directly hit by a little more than one-third of the hurricanes that struck the United States(57of159).This is far more than that experienced by any other state (Lecomte and Gahagan,1998,Williams and Duedall,1997).Table2displays a comparison of hurricane landfall between United States coastal states during most of the twentieth century.Fig.1illustrates the tropical storms and hurricanes that have crossed Florida between1992and1999.Second,Florida is one of the fastest-growing states in terms of population. 
Throughout the twentieth century Florida’s population has grown at a rate con-siderably faster than the rate of growth for the nation as a whole—usually two, three,or four times as rapidly.In1998,the state’s population reached15million, growing15.9%since the1990census of12.9million,which is more rapid than the nation as a whole(8.3%).The current population of Florida is approximately 14,650,000—the4th most populous in the nation(United States Bureau of the Census,1998).By2025,the United States Bureau of the Census(1995)projects Florida will surpass New York to become the third most populous state in the nation with20.7million people.Table3represents Florida’s tremendous growth rate. Moreover,Florida’s coastal population(Fig.2)has grown from just under7.7 million in1980to over10.5million by the mid-1990s,an increase of37%(LecomteTable2United States mainland hurricane strikes by state,1900–1996(National Hurricane Center,1997)Area Category number All Major123451,2,3,4,53,4,5 United States(Texas to Maine)58364715215864 Texas1299603615 Louisiana858312512 Mississippi1150186 Alabama41500105 Florida171617615724 Georgia1400050 South Carolina64220144 North Carolina104101a02511 Virginia211a0041a Maryland01a0001a0 Delaware0000000 New Jersey1a00001a0 New York31a5a0095a Connecticut23a3a0083a Rhode Island02a3a005a3a Massachusetts22a2a0062a New Hampshire1a1a0002a0 Maine5a00005a0a Indicates all hurricanes in this group were moving faster than30mph.State totals will not necessarilyequal United States totals,and Texas or Florida totals will not necessarily equal sum of sectionaltotals.Fig. ndfalling storms in Florida,1992–1999(provided by Florida Department of Community Affairs,1999).J.Wilson,A.Oyola-Yemaiel/Safety Science39(2001)117–131125and Gahagan,1998).According to Florida Department of Community Affairs (1999),more than 9million Floridians live within 10miles of the coast in 1998,more than 62%of the state’s total population.Due to such high coastal population there are more people at risk in Florida from hurricanes than in any other state in the nation.With people comes property;Florida also has the most coastal property exposed to wind storms.From 1980to the mid-1990s,the value of insured residential prop-erty increased by 135%from $178billion to $418billion and insured commercial property increased by 192%from $155billion to $453billion (IIPLR and IRC,1995).Table 4portrays the increase in the value of insured residential and com-mercial property in Florida.According to the Insurance Services Office (2000),Florida had the most insured losses in the country in the period from 1990to 1999with $19.3billion.California was second with $17.5billion and Texas was third with $6.6billion in insured losses.Throughout the 1980s and early 1990s,several legislative attempts to overhaul the emergency management system in Florida were made with little success.But Hurri-cane Andrew demonstrated decidedly that the state lacked sufficient expertise and Table 3Population Growth in Florida (inmillions)Fig.2.Florida’s coastal population (provided by Florida Department of Community Affairs,1999).126J.Wilson,A.Oyola-Yemaiel /Safety Science 39(2001)117–131resources to co-ordinate an operation to handle a major disaster (FEMA,1993).Florida response was uncoordinated,confused,and often inadequate.In response,Governor Lawton Chiles issued an executive order (92–242)establishing the Gov-ernor’s Disaster Planning and Response Review Committee to evaluate current state and local statutes,plans and programs for natural and man-made 
disasters,and to make recommendations to the Governor and the State Legislature before the 1993legislative session.The 1993Florida legislature acted on most of the committee’s recommendations,including its call for the creation of the Emergency Management Preparedness and Assistance Trust Fund (EMPATF).House Bill 911created the EMPATF through a $2surcharge levied on all private insurance policies,and a $4surcharge on com-mercial policies (Koutnik,1996).Florida Governor Chiles set up the Division of Emergency Management under the Department of Community Affairs,and appointed Joseph Myers to lead the emer-gency program,as per FEMA guidelines (Kory,1998).The specific powers and authorities of the emergency management division are set forth in Chapter 252,part 1of the Florida Statutes,entitled the ‘‘State Emergency Management Act’’(Mittler,1997).This act amended the Florida Statutes in 1995to provide broad powers for the governor to order evacuations and to demand mutual aid agreements between counties,municipalities,and the state.Each local government must also prepare an emergency operations pliance,co-operation,and co-ordination are man-dated by the state (Kory,1998).Thus,the state of Florida is committed to acquiring the needed proficiency to reduce disaster impacts,and therefore Florida Division of Emergency Management (DEM)is seeking to become accredited through the NEMA accreditation process.But,in order for the state to meet this standard of performance each county must also match this standard since the state only assists counties in responding to,recovering from,preparing for and mitigating against local disasters within the state.The DEM is currently conducting focus groups with local emergency managers in order to determine the tasks that local emergency management programs should be able to perform.The determination of these tasks will then be used by the state as part of its standards of performance.Therefore,the state’s attempt to Table 4Increase in value of insured residential and commercial property in Florida (in millions ofdollars)。
arXiv:physics/0611299v1 [physics.ins-det] 30 Nov 2006

LC-DET-2006-008
November 2006

A proposed DAQ system for a calorimeter at the International Linear Collider

M. Wing 1,†, M. Warren 1, P. D. Dauncey 2 and J. M. Butterworth 1
for CALICE-UK groups
1 University College London, 2 Imperial College London
† Contact: mw@

Abstract

This note describes R&D to be carried out on the data acquisition system for a calorimeter at the future International Linear Collider. A generic calorimeter and data acquisition system is described. Within this framework, modified designs and potential bottlenecks within the current system are described. Solutions leading up to a technical design report will be carried out within CALICE-UK groups.

1 Introduction

With the decision on the accelerator technology to be used for a future International Linear Collider (ILC), detector R&D can become more focused. The time-line for an R&D programme is also clearer: assuming a technical design report is to be written by 2009, there are three years to define the make-up of a given sub-detector. Within the CALICE collaboration, which is designing a calorimeter for the ILC, a collection of UK groups (CALICE-UK) are part of the initial effort to prototype a calorimeter composed of silicon and tungsten [1]. The electromagnetic section of the calorimeter (ECAL) has been taking test-beam data at DESY and CERN in 2006. The UK has designed and built electronics to read out the ECAL [4] - these are also now being used by the analogue hadronic calorimeter - and is taking part in the current data-taking period.

Building on this expertise, CALICE-UK has defined an R&D programme. A significant part of this programme is the design of the data acquisition (DAQ) system for a future calorimeter. In this work, DAQ equipment will be developed which attacks likely bottlenecks in the future system and is also sufficiently generic to provide the readout for new prototype calorimeters, such as the prototype to be built in the EUDET project [5].

The main aim is to start an R&D programme which will work towards designing the actual DAQ system of the future calorimeter. Alternative designs of a DAQ system which could affect the layout of the final detector or the functionality of components are also considered. The concept of moving towards a "backplaneless" readout is pursued. A strong underpinning thread here is the attempt to make use of commercial components and to identify any problems with this approach. The system should therefore be easily upgradable, both in terms of ease of acquiring new components and competitive prices.

This note is organised as follows. The parameters of the superconducting accelerator design, and the calorimeter structure and properties, which impinge upon considerations of the DAQ system are discussed in Section 2. The main body of the note, in Section 3, discusses the DAQ design and proposes areas of R&D within it. The work will investigate the three principal stages of the DAQ system: the connection along the calorimeter module; the connection from the on- to off-detector electronics; and the off-detector receiver. In Section 4 a model DAQ system for the final ECAL is proposed. This necessarily makes many assumptions but gives an idea of the scale of the system involved; it can also be the start of an initial costing. The note ends with a brief summary in Section 5.
The programme detailed below will allow CALICE-UK groups to continue to assist in the development of new technologies for the DAQ system. We would expect to write a chapter in the future technical design report on the DAQ system for the calorimeter. For the final calorimeter, the DAQ should ideally be the same for the ECAL and HCAL. Although CALICE-UK has so far concentrated on the ECAL, our proposals for R&D contained in this document are sufficiently generic that both calorimeter sections should be able to converge to use the DAQ system we design. This will place us in a position to build the DAQ system for future large-scale prototype calorimeters (e.g. EUDET) and the final system. Indeed, the principle of a generic design using commercial components should be applicable to many detector sub-systems. Therefore, the R&D to be performed here may have consequences or applications for the global DAQ system of a future detector.

2 General detector and accelerator parameters

The design [1] for a calorimeter for the ILC poses challenges to the DAQ system mainly due to the large number of channels to be read out. The TESLA design [1] for a sampling electromagnetic calorimeter is composed of 40 layers of silicon interleaved with tungsten. The calorimeter, shown in Fig. 1, has eight-fold symmetry and dimensions: a radius of about 2 m, a length of about 5 m and a thickness of about 20 cm. Mechanically, the calorimeter will consist of 6000 slabs, of length 1.5 m, each containing about 4000 silicon p-n diode pads of 1×1 cm², giving a total of 24 million pads. More recent designs from the detector collaborations consider fewer layers, 29 for LDC [2] and 30 for SiD [3], and also smaller pad sizes of 5×5 mm², or even 3×3 mm².

Figure 1: View of the barrel calorimeter modules and detail of the overlap region between two modules, with space for the front-end electronics.

A generic scenario for the DAQ system is as follows. At the very front end (VFE), ASIC chips will be mounted on the PCBs, each processing a given number of pads. The ASICs will perform pre-amplification and shaping, and should also digitise the data and may even apply a threshold suppression. The current design [6] of such chips has each containing 64 channels, although this may be higher in the final calorimeter. The ASIC power consumption has to be minimised as the chips are difficult to cool, due to the small gaps between layers which are required to take advantage of tungsten's Molière radius. The data will be digitised in the ASIC and transferred to the front-end (FE) electronics, which are placed in the detector at the end of the slab as shown in Fig. 1. It is expected that zero suppression will be done in the FE (using FPGAs) to significantly reduce the rate. The data will then be transferred off the detector, probably via a network switch, to a receiver of many PCI cards in a PC farm.
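To put the channel bookkeeping above on a concrete footing, the short Python sketch below works through the pad and ASIC counts implied by the TESLA geometry. The numbers are those quoted in the text; the alternative per-ASIC channel counts are the range considered later in this note, and the ceiling division simply rounds up to whole chips.

```python
# Channel bookkeeping for the TESLA-style ECAL described above.
# All inputs are taken from the text; this is only a cross-check.

SLABS = 6000          # mechanical slabs, each 1.5 m long
PADS_PER_SLAB = 4000  # 1x1 cm^2 silicon p-n diode pads per slab

total_pads = SLABS * PADS_PER_SLAB
print(f"Total pads: {total_pads:,}")  # 24,000,000

# ASICs needed per slab for a range of channels-per-chip options.
for channels in (32, 64, 128, 256):
    asics_per_slab = -(-PADS_PER_SLAB // channels)  # ceiling division
    print(f"{channels:3d} ch/ASIC -> {asics_per_slab:4d} ASICs/slab, "
          f"{asics_per_slab * SLABS:,} ASICs in total")
```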
If we assume the TESLA design for data taking at 800 GeV, the following parameters have to be considered. There will be 4886 bunch crossings, one every 176 ns, in a bunch train, giving a bunch train length of about 860 µs. The bunch train period is 250 ms, giving a duty factor between trains of about 0.35%. The ECAL is expected to digitise the signal every bunch crossing and to read out completely before the next bunch train. In a shower, up to 100 particles/mm² can be expected, which in a 1×1 cm² pad equates to 10000 minimum-ionising-particle deposits. The ADC therefore needs a dynamic range of 14 bits. Assuming no threshold suppression and that 2 bytes are used per pad per sample, the raw data per bunch train is 24 × 10^6 × 4886 × 2 ≈ 250 GBytes, which equates to 0.3-2.5 MBytes for each ASIC, depending on whether they process between 32 and 256 channels. The data appear within a bunch train length of 860 µs, giving a rate out of the ASIC of 0.4-3 GBytes/s, which we take to be 1 GByte/s from now on. Threshold suppression and/or buffering (to allow readout between bunch trains) within the ASIC could reduce this rate. However, suppression in the ASIC may not be flexible enough compared with doing this in the FE, and buffering requires some ASIC power to remain on between bunch trains, potentially generating too much heat. Hence the rates after the VFE depend on the assumptions made and the system layout, and will be discussed for each individual case where necessary.

3 Design of a DAQ system

3.1 Transmitting digitised data from the VFE chip

The transmission of digitised data from the ASIC is very heavily influenced by what can be done within the slab, given the low heat-load requirements due to the difficulties of cooling. It is not yet known what the capabilities of the VFE ASIC will be, so various possibilities were considered. In general, somewhere in the readout system, there will have to be an ADC and a threshold discriminator. These tasks could in principle be performed in either order and could be done in the VFE or in the FE. There is also the possibility of buffering events in either the VFE or the FE. This would allow the data to be read out between bunch trains rather than between bunch crossings, entailing a dramatic decrease in the readout rate due to the large spacing between bunch trains. There is then a matrix of possibilities, with some number of the functionalities - ADC, thresholding and buffering - being done in the VFE or FE. Below, five possibilities are considered for the ADC and thresholding, and also for buffering in the VFE:

1. Neither ADC nor thresholding is done in the VFE
2. Only the ADC is done in the VFE
3. Only the thresholding is done in the VFE
4. Both are done in the VFE
5. Buffering is done in the VFE

We consider that threshold discrimination is best done after the ADC step rather than before. This allows much easier monitoring of the pedestals and noise, etc., by allowing some readout at a low rate even when below the threshold. In addition, setting a stable analogue threshold is not easy; any drift will change the level. The uniformity over all channels might not be good enough, which would then require a large number of trim DACs.
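Before working through these options, the raw-rate arithmetic of Section 2, which sets the scale for all of them, can be cross-checked with a few lines of Python; every input is a number quoted above.

```python
# Cross-check of the raw data volumes and rates quoted in Section 2.
PADS = 24_000_000          # total calorimeter pads
CROSSINGS = 4886           # bunch crossings per train
BYTES_PER_SAMPLE = 2       # 14-bit ADC word stored in 2 bytes
TRAIN_LENGTH = 860e-6      # s, 4886 crossings x 176 ns
TRAIN_PERIOD = 0.250       # s

raw_per_train = PADS * CROSSINGS * BYTES_PER_SAMPLE
print(f"Raw data per train: {raw_per_train / 1e9:.0f} GBytes")  # ~235, rounded to ~250 in the text

for channels in (32, 256):
    per_asic = channels * CROSSINGS * BYTES_PER_SAMPLE
    rate = per_asic / TRAIN_LENGTH
    print(f"{channels:3d}-channel ASIC: {per_asic/1e6:.1f} MBytes/train, "
          f"{rate/1e9:.1f} GBytes/s during the train")
```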
1) If neither an ADC nor a threshold discriminator is built into the VFE ASIC (due to them taking too much power), then the raw analogue signals will be sent out to the FE. This is 2k analogue channels which require around 14 bits of precision, which is not trivial to achieve. Even if this can be done, digitising the data at the FE would be hard. The space is limited, so it is likely that only a restricted number of ADCs could be mounted in this area. Assuming 20 channels of ADCs would be possible, each would have to handle 100 pads, with these being multiplexed in turn into the ADC. To keep up with the sampling rate needed, i.e. 176 ns for each channel, the ADCs would have to sample every 1.76 ns. Finding a 14-bit FADC which can do this would not be easy. The alternative would be to use an analogue pipeline; assuming one for each of the 20 ADC channels would result in each pipeline storing about 500k analogue samples, which is difficult. Putting an analogue threshold in front of the ADCs would clearly cut the rate down but would need a major ASIC development; a variable-length analogue pipeline with time-stamps would be required. This is in addition to the pedestal-monitoring problems mentioned above.

2) Only doing the ADC on the VFE seems a much more reasonable option. The 14-bit requirement is much easier to achieve with a short signal path before the ADC, and the digitised data can be transmitted from the VFE to the FE more easily than analogue data. The rates are not trivial, however; these would be around 50 GBytes/s per slab, or 1 GByte/s from each wafer/ASIC. This is at the level where a fibre would be needed; commercial fibres now carry 5 GBytes/s. Fibres are also less noisy than copper. The use of fibres within the slab would raise many other issues, such as the power needed to transmit the light out (or whether the light could be supplied by an external laser and only modulated on the ASIC), how to reliably attach the fibres at each end (a total of 300000 fibres would be needed for 6000 slabs each with 50 ASICs), how large the fibre connectors would be (the total thickness within the slabs is limited to some mm only), etc. Although this is an active area of commercial development, it is not clear if opto-electronic intra-PCB communications will become standard enough on the time-scale needed [7]. It is clear that some development would be needed for this to be an option; the equivalent system in ATLAS has three fibres transmitting a total of 10 MBytes/s, with a 2 mm high connector needed. Self-aligning silicon-fibre interfaces are possibilities; while we could not do significant R&D compared with the commercial sector, we could test industrial prototypes and do R&D in conjunction with industry.

Once the data are on a fibre direct from the ASIC, the question arises of whether any FE electronics would be needed at all, as the fibre could go tens of metres, bypassing the FE completely. However, shipping all the raw data out to the offline system seems an expensive overkill, although this may change with commercial development.

3) Only doing the threshold in the VFE suffers from the same problems as mentioned above: there is the difficulty of monitoring the pedestals, as well as the complexity of the ASIC needed to handle the channels.
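Before moving to the remaining options, the FE-digitisation arithmetic in option 1 can be checked with a few lines; the inputs are exactly the figures quoted there.

```python
# Option 1: digitising at the FE with a limited number of ADCs.
PADS_SEEN = 2000        # analogue channels arriving at the FE ("2k")
FE_ADCS = 20            # assumed number of ADC channels at the FE
BX_INTERVAL = 176e-9    # s, sampling interval required per pad
CROSSINGS = 4886        # samples per pad per bunch train

pads_per_adc = PADS_SEEN / FE_ADCS
sample_period = BX_INTERVAL / pads_per_adc
pipeline_depth = pads_per_adc * CROSSINGS

print(f"Each ADC multiplexes {pads_per_adc:.0f} pads")               # 100
print(f"Required ADC sample period: {sample_period*1e9:.2f} ns")      # 1.76 ns
print(f"Analogue pipeline depth per ADC: {pipeline_depth:,.0f} samples")  # ~489k, i.e. ~500k
```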
4) Doing both the ADC and the threshold in the VFE places the easiest requirements on the FE, with a corresponding increase in difficulty for the VFE. Assuming the threshold is applied after the ADC, some communication of the threshold and other configuration data from the FE to the VFE will still be needed. The data rate out is clearly reduced; it would be around 400 MBytes/s for the slab, or 20 MBytes/s for each wafer/ASIC. Although this is the easiest option for transferring data from the VFE to the FE, due to the low rates, it is not clear if the threshold can be applied reliably in the VFE. This scenario also corresponds to the situation if the Monolithic Active Pixel Sensors (MAPS) technology - essentially a digital calorimeter - were used rather than silicon diodes. For the diode option in this scenario, it is also questionable whether significant FE electronics logic is needed. As the ASIC performs both the ADC and threshold suppression, the data could be transferred directly off the detector.

5) It is generally assumed that buffering in the FE is possible, with large amounts of memory available in modern FPGAs. However, buffering in the ASICs is more technically challenging. The challenges are integrating a large enough memory into the ASICs and keeping the power low while the data are read out between bunch trains. The advantages are clear: the rate of transmission from the ASIC to the FPGA is reduced by about two orders of magnitude. For the proposed electrical connections along the board, this would ease the transmission significantly.

We therefore propose R&D for two scenarios - one where only the ADC is done in the VFE and one where both the ADC and thresholding are done in the VFE, both coupled with buffering in the VFE - because they provide realistic solutions and have complementary applications. In favour of only performing the ADC, any threshold suppression can be performed more accurately in the FPGA at the FE than in the VFE. When thresholding is also done in the VFE along with the ADC, the data transfer rate from the VFE to the FE is significantly smaller. A schematic of scenario 2) is shown in Fig. 2. In both scenarios, we intend to set up a mock data transfer system, which requires a test board with FPGAs linked by fibres. This will simulate a link between the VFE ASICs and the FE FPGAs. Any developed system, e.g. a new VFE chip design or the MAPS set-up, could also be tested in our prototype system. We will also demonstrate that the system would work for the hadronic calorimeter as well as the ECAL. This would require modifying the system to have more links but a lower rate.
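To give a feel for how the two proposed scenarios differ at the link level, the following toy Python model pushes a simulated bunch train through each one. The 1% occupancy and the 2-byte channel/timing label are assumptions chosen for illustration, not measured figures.

```python
import random

CHANNELS = 64          # channels per VFE ASIC
CROSSINGS = 4886       # bunch crossings per train
TRAIN_LENGTH = 860e-6  # s, in-train readout window
TRAIN_PERIOD = 0.250   # s, readout window with VFE buffering
OCCUPANCY = 0.01       # assumed fraction of samples above threshold

# Scenario "ADC only": every 2-byte sample is shipped to the FE.
adc_only_bytes = CHANNELS * CROSSINGS * 2

# Scenario "ADC + threshold": only hit samples are shipped, each with
# an assumed 2 extra bytes of channel/timing label.
hits = sum(1 for _ in range(CHANNELS * CROSSINGS) if random.random() < OCCUPANCY)
suppressed_bytes = hits * 4

for name, nbytes in (("ADC only", adc_only_bytes), ("ADC + threshold", suppressed_bytes)):
    in_train = nbytes / TRAIN_LENGTH   # read out during the train
    buffered = nbytes / TRAIN_PERIOD   # buffered and read out between trains
    print(f"{name:16s}: {nbytes/1e3:6.1f} kBytes/train, "
          f"{in_train/1e6:7.1f} MBytes/s in-train, "
          f"{buffered/1e6:5.2f} MBytes/s with VFE buffering")
```

The buffered rates come out roughly two orders of magnitude below the in-train rates, which is the reduction claimed in the text.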
The prototype will incorporate,wherever possible,commercially available components such as Virtex-4FPGAs[8]which has multi-gigabit serial transceivers and is compatible with10/100/1000Mbit/s ethernet and PCI express x16and higher.Thefinal chip should have around64channels and would be embedded inside the de-SlabFE FPGA PHYASIC Data ASIC ASIC ASIC ConfClock + ControlFigure 2:Design of VFE to FE link.tector.The ADC(s)should be included in the chip in order to output digital data serially at high rate (typically 1-2Gbit/s).The DAQ would thus look more like “an event builder”than a traditional DAQ.It would perform the data reformatting (from “floating”gain +10bit to 16bit),calibration,possibly linearisation and some digital filtering.It is possible that at this level,some event processing will be performed.The other task of the DAQ is to load all the parameters needed by the front-end,control the power cycling and run the calibration.These specifications fit in well with our current generic system.The current version of the VFE ASIC chip [11]is being used to read out the existing CALICE ECAL.This chip does not meet the requirements for the ILC ECAL and the development of the design is an ongoing project in LAL/Orsay [6].In the next 1–2years,it is expected to have a version of such a chip with low enough power and noise that would serve as a realistic prototype.This ASIC is expected to have (at least)32channels,an internal ADC per channel,multiple gain ranges,and optional threshold suppression and digital buffering to reduce the required output rate.Instead of using silicon diodes,the feasibility of using the MAPS technology is to be investigated [10].The use of this technology would also have an impact on the design of the DAQ system.Here,there would be no ADC and a threshold has to be applied on the wafer,by definition.The data rate for a final detector would be 3GBytes/s per slab,or 150MBytes/s per wafer,which is low enough for non-fibre communication.If threshold suppression or buffering could be done in the VFE ASIC,the rate to the FE would be reduced by two orders of magnitude.Current designs cannot do this and it may not even be desirable or practical,so we have to allow for data rates of order GByte/s needing to be transferred out of each VFE ASIC during the bunch train.Whether an electrical or optical connection would be needed has to be investigated.Although chip-to-chip fibres are not yet standard technology,this is an active area of industrial research [7].Issues of how the data would be transported from the VFE to FE have to be considered and can be done already without a real prototype.Transporting of order GByte/s of data over 1.5m in a very limited space is a challenge.The conventional approach would be to use copper but here the effects of noise and interference will have to be considered.There is also the possibility of using optical fibres although here there are also design considerations:the size of connectors would have to be investigated as the vertical clearance at the VFE is of the order of mm and the power needed to transmit light out would also need to be investigated.This work ties in closely with the mechanical and thermal aspects of the design.In preparation for a real prototype,a test system will be built with a 1.5metre PCB containing FPGAs linked optically or electronically.The data transfer would then be considered as a function of the number of VFEs,whether zero suppression is done in the VFEs and whether data is buffered during the bunch train.The bandwidth 
The bandwidth and cross-talk of the data transfer can be simulated using CAD tools. The clock and control distribution from the front-end to the VFE chips can also be investigated, in particular whether one transmission line per chip is needed or whether multi-drop is possible.

3.2 Connection from on- to off-detector

In this section, we consider two widely differing scenarios, shown in Fig. 3.

Figure 3: A comparison of the two scenarios, "Alternative configuration" (left) and "Standard configuration" (right), for the on- to off-detector DAQ system. [Panel labels in the figure include slabs, a layer-1 switch, a 100 Gb network switch, 5 Tb large network switch farms, event-builder PCs and macro-event-builder PCs.]

Standard Configuration

In our assumed standard detector configuration, communication from the VFE will pass via the electronics at the FE to an off-detector receiver. We assume that threshold suppression will be done at the FE, and hence the rate would be significantly reduced from that at the VFE. Assuming that the rate is reduced to 1% of the original data volume of 250 GBytes per bunch train, and that each sample above threshold needs a channel and timing label, the total data volume to be read out from the calorimeter is about 5 GBytes, or about 1 MByte per slab. These data have to be read out within a bunch train period of 250 ms, giving a rate of about 5 MBytes/s per slab.

Alternative configuration

Here we imagine that the FE is removed and the communication is directly from the VFE to the off-detector receiver. We assume that the ASIC only digitises the data, so 250 GBytes have to be transported off the detector per bunch train. This will require a high-speed optical network. It should be noted, however, that the need for FE electronics also becomes questionable if more processing is done on the ASIC chip, such as threshold suppression. In such a scenario, the data could be transported directly from the ASIC off the detector, and so the FE would become redundant.
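The off-detector volumes for the two configurations can be summarised numerically. The 1% occupancy is the assumption stated above, and taking the channel/timing label to double the word size is an inference that reproduces the quoted 5 GBytes.

```python
# Off-detector data volumes for the two configurations.
RAW_PER_TRAIN = 250e9   # bytes per train, unsuppressed (Section 2)
OCCUPANCY = 0.01        # assumed fraction surviving the FE threshold
LABEL_FACTOR = 2        # channel + timing label assumed to double the word size
SLABS = 6000
TRAIN_PERIOD = 0.250    # s

# Standard configuration: thresholding at the FE.
std = RAW_PER_TRAIN * OCCUPANCY * LABEL_FACTOR
print(f"Standard:    {std/1e9:.0f} GBytes/train, "
      f"{std/SLABS/1e6:.1f} MBytes/slab, "
      f"{std/SLABS/TRAIN_PERIOD/1e6:.1f} MBytes/s per slab")

# Alternative configuration: all raw data shipped off-detector.
print(f"Alternative: {RAW_PER_TRAIN/1e9:.0f} GBytes/train, "
      f"{RAW_PER_TRAIN/TRAIN_PERIOD/1e9:.0f} GBytes/s averaged over the train period")
```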
The number of fibres required to read out the 24 million channels would vary between about 750000 and 90000, depending on whether each ASIC handles 32 or 256 channels. If we assume that the diameter of a fibre is 150 µm, or 250 µm with cladding, and that fibres run along half of the 12 m circumference, then the bundle would be up to 1 cm in depth, but could be as little as 1 mm, depending on the number of ASICs and hence fibres. This would leave ample room for other cables and power supplies. This concept would revolutionise the whole calorimeter design and so needs to be considered now, while changes in its general structure can still be made. Our research in this area will provide important feedback to the groups designing the ASIC chips.

The off-detector receiver, as described later, is assumed to consist of PCI cards housed in PCs. The reliability of large PC farms is an issue for reading out the data; if one PC goes down, all of the data in that region of the calorimeter is lost. Current PC farms show a rate of one PC failure per day in a farm of 200. This is not large, but it is also not small, and it would require some surplus of PCs (say 10%) above the number required based just on the number of detector channels. For a final working calorimeter readout system, these PCs would need to be repaired and put back into the system on a regular basis.

The standard scenario would require less high-speed equipment off the detector, whereas the alternative would require many optical fibres with dedicated optical switching. The alternative scenario would, however, remove material from inside the detector, which would ease construction and have a potentially advantageous impact on event reconstruction and, hence, physics measurements. It would also reduce the number of processing components within the detector, which could be attractive since they would be inaccessible for long periods of time.

Using the experience gained from the two scenarios above, a hybrid path can be explored. By optimising the functionality in the VFE and matching it to an FE, overall instantaneous data rates can be reduced and the fibre count lowered. For example, if data are buffered in the VFE, some form of passive multiplexer can be envisaged that combines data from all the VFEs. This could take the form of spliced optical fibres, or an OR gate. This would not necessarily remove the need for an FE, but the reduced size and complexity would have many benefits (thermal, configuration time, SEUs, cost, etc.).
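The fibre-bundle depth quoted earlier in this subsection follows from simple packing arithmetic, sketched below; the single-layer side-by-side packing model is a simplification.

```python
import math

# Fibre-bundle depth for the alternative (FE-less) configuration.
FIBRE_PITCH = 250e-6        # m, fibre diameter including cladding
CIRCUMFERENCE = 12.0        # m, detector circumference
USABLE = CIRCUMFERENCE / 2  # assume fibres occupy half the circumference

fibres_per_layer = USABLE / FIBRE_PITCH  # simple side-by-side packing

for channels, n_fibres in ((32, 750_000), (256, 90_000)):
    layers = math.ceil(n_fibres / fibres_per_layer)
    depth_mm = layers * FIBRE_PITCH * 1e3
    print(f"{channels:3d} ch/ASIC: {n_fibres:,} fibres -> "
          f"{layers} layers, ~{depth_mm:.1f} mm deep")
```

This reproduces the range quoted in the text: roughly 1 mm of fibre for 256-channel ASICs, approaching 1 cm for 32-channel ASICs.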
To transmit data onto the detector, we will attempt to use the same commercial hardware used for off-detector communication. The requirements are different, though, as the detector front-end requires clock and synchronisation signals as well as low-level configuration and data. Commercial network hardware is not ideally suited to synchronous tasks, but the significant cost and reliability benefits make it worthy of investigation. This work is split into two areas: failsafe configuration prior to normal running, and clock and control signal distribution.

Failsafe Configuration

In a scenario requiring an FPGA on-detector, it is imperative that the device can be re-configured remotely. This is necessary not only because of the number of FPGAs, but also because of the uncertainty in the detector performance in terms of data suppression, pedestal drifts, bad channel identification, etc., all of which have to be implemented in the FE FPGA. The optimal algorithms for these tasks will only be determined after some experience of operating the calorimeter.

As the FPGAs are relatively inaccessible, it is important that a failsafe method of resetting and re-configuring the devices exists. Avoiding the use of additional control lines necessitates a means to extract signals from the communications interface. Most probably this will involve "spying" on the serial data line. Of primary importance is the ability to force the FPGA into a known state under any conditions (i.e. a hard reset). It may simply be possible to send an exceptionally long series of "1"s to the slab (say 40M, i.e. one second at 40 MHz), where an RC-type circuit with a long time constant would trigger the reset. A more complex method would require some discrete logic (such as a shift register with parallel output into a comparator) searching the incoming data stream for a "magic" number. Power cycling the board is also a (less elegant) solution, but still requires attention in circuit design to ensure that the FPGA will boot. Once hard-reset, the FPGA will follow its start-up logic to initiate the boot, making use of two distinct methods:

• Use a non-volatile base configuration that is either hardwired into the FPGA or provided by an EEPROM external to the FPGA. In this case the FPGA would assist in writing the configuration to its internal RAM and then issue a re-boot-using-internal-configuration command to itself.

• Wait for external stimulus (usually a clock) to control the data being fed into the device. This requires additional hardware to format the serial data into data/clock/control signals for FPGA consumption.

In both cases some external hardware is required, the amount of which will only be fully understood after a more detailed evaluation. A possible outcome could be that a small non-volatile programmable device provides the most flexible solution, but the effects of radiation need further examination. Larger devices with non-volatile configuration memories could be used for the entire front-end logic, but apart from the increased cost and limited re-programming cycles making these less desirable, such devices are not able to re-program themselves. The reliability of these memories in the detector environment over the lifetime of an experiment, as well as SEUs, also needs to be addressed. Non-volatile memories have limited programming cycles, whereas modern SRAM-based FPGAs boast that they can be re-configured an unlimited number of times. By refreshing the configuration periodically, the effect of SEU corruption can be minimised. Considering an FPGA with one million logic gates (which specifies a 4 Mbit PROM), configuring this device at 40 MHz would take 100 ms - a re-configuration is possible every 250 ms bunch-train period! Being able to re-configure the FE when needed also allows the selection of a smaller component that can be configured for a specific function, rather than a large one-size-fits-all device. Reducing the number of components on the front-end module is advantageous from a cost, reliability and dead-material point of view, so we will focus on methods of generating a data stream onto the detector that requires the minimum of components to extract signals from the physical network interface module.
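The "magic number" reset scheme described above is easy to model in software. The sketch below emulates the proposed shift-register-plus-comparator logic bit by bit; the 32-bit word width and its value are illustrative assumptions.

```python
# Software model of the proposed discrete reset logic: a shift register
# whose parallel output is compared against a fixed "magic" word.
MAGIC = 0xFEEDC0DE   # illustrative 32-bit reset word
WIDTH = 32
MASK = (1 << WIDTH) - 1

def find_reset(bitstream) -> bool:
    """Return True when the magic word is seen in the serial stream."""
    shift_reg = 0
    for bit in bitstream:
        shift_reg = ((shift_reg << 1) | bit) & MASK  # shift in one bit
        if shift_reg == MAGIC:
            return True  # comparator fires: hard-reset the FPGA
    return False

# Embed the magic word in some arbitrary traffic and detect it.
traffic = [1, 0, 1, 1, 0] + [(MAGIC >> i) & 1 for i in reversed(range(WIDTH))] + [0, 1]
assert find_reset(traffic)
print("reset word detected")
```

In hardware the same structure is a 32-bit shift register feeding a fixed comparator, which is small enough to sit outside the FPGA and remain functional whatever state the FPGA is in.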
Clock and Control

To ensure that the on-detector electronics captures data from actual collisions, all components in the detector are synchronised to the bunch-crossing clock. Bunch-train start, stop and ID signals are also required. Traditionally this is done with a bespoke, low-latency, highly synchronous system, but here we will look at using a commercial network switch.

Switches are not designed for synchronous signal fanout. In fact, the now obsolete forerunner of the switch, the hub, is much better suited to the task. Modes for signal broadcasting do exist, but these need to be closely examined. Studies of latency and port-to-port skew will be undertaken.

To obtain maximal control over timing, the lowest-level protocols will need to be used. Directly accessing the network at the physical layer (i.e. as a simple serial link) would facilitate complete control over data-packet composition and timing. For this, a specialist and customisable network interface card is required.

To regenerate the clock at the detector end, the time structure of the transmitted data needs to be arranged accordingly. This would probably take the form of a local oscillator being periodically resynchronised to the data clock. As the bunch-crossing interval is 176 ns, a 1 Gbit/s (1 ns) link would be the minimum rate needed. Board-level signals, such as train-start, may also need to be decoded directly from the data stream (as with the failsafe system above).

In summary, this is an investigation to verify that a clock, control and configuration system can be constructed using commercial network hardware for distribution.
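As a closing illustration, the sketch below works out the granularity such a link gives for clock and control encoding; the payload fraction is an assumption (an 8b/10b-style line code carries 8 data bits per 10 line bits).

```python
# Granularity available for clock/control distribution over a 1 Gbit/s
# serial link, given the bunch-crossing interval quoted in Section 2.
LINK_RATE = 1e9        # bits/s, minimum rate considered in the text
BX_INTERVAL = 176e-9   # s, bunch-crossing interval
ENCODING = 0.8         # assumed payload fraction (8b/10b-style coding)

bits_per_bx = LINK_RATE * BX_INTERVAL
payload_per_bx = bits_per_bx * ENCODING
print(f"{bits_per_bx:.0f} bit periods per bunch crossing")  # 176
print(f"~{payload_per_bx:.0f} usable bits per crossing for "
      f"clock, control and configuration data")
```

With around 176 bit periods per crossing, the link leaves comfortable room to embed train-start and similar board-level signals in the data stream while the local oscillator tracks the bit clock.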