Senior high school entrance examination English — in-depth analysis of classic scientific experiments and theories: 20 reading comprehension questions, item 1

<Background passage>
Isaac Newton is one of the most famous scientists in history. He is known for his discovery of the law of universal gravitation. Newton was sitting under an apple tree when an apple fell on his head. This event led him to think about why objects fall to the ground. He began to wonder if there was a force that acted on all objects. Newton spent many years studying and thinking about this problem. He realized that the force that causes apples to fall to the ground is the same force that keeps the moon in orbit around the earth. He called this force gravity. The discovery of the law of universal gravitation had a huge impact on science. It helped explain many phenomena that had previously been mysteries. For example, it explained why planets orbit the sun and why objects fall to the ground.

1. Newton was sitting under a(n) ___ tree when he had the idea of gravity.
A. orange  B. apple  C. pear  D. banana
Answer: B.
Grade 8 English reading comprehension on cutting-edge science and technology: 25 questions, item 1

<Background passage>
Artificial intelligence (AI) has been making remarkable strides in the medical field in recent years. AI-powered systems are being increasingly utilized in various aspects of healthcare, bringing about significant improvements and new possibilities.

One of the most prominent applications of AI in medicine is in disease diagnosis. AI algorithms can analyze vast amounts of medical data, such as patient symptoms, medical histories, and test results. For example, deep-learning algorithms can scan X-rays, CT scans, and MRIs to detect early signs of diseases like cancer, pneumonia, or heart diseases. These algorithms can often spot minute details that might be overlooked by human doctors, thus enabling earlier and more accurate diagnoses.

In the realm of drug development, AI also plays a crucial role. It can accelerate the process by predicting how different molecules will interact with the human body. AI-based models can sift through thousands of potential drug candidates in a short time, identifying those with the highest probability of success. This not only saves time but also reduces the cost associated with traditional trial-and-error methods in drug research.

Medical robots are another area where AI is making an impact. Surgical robots, for instance, can be guided by AI systems to perform complex surgeries with greater precision. These robots can filter out the natural tremors of a surgeon's hand, allowing for more delicate and accurate incisions. Additionally, there are robots designed to assist in patient care, such as those that can help patients with limited mobility to move around or perform simple tasks.

However, the application of AI in medicine also faces some challenges. Issues like data privacy, algorithmic bias, and the need for regulatory approval are important considerations. But overall, the potential of AI to transform the medical field is vast and holds great promise for the future of healthcare.

1. What is one of the main applications of AI in the medical field according to the article?
A. Designing hospital buildings.  B. Disease diagnosis.  C. Training medical students.  D. Managing hospital finances.
Answer: B.
Reading comprehension practice by question type (4): inference questions — inferring implied meaning

A

[2023 Shijiazhuang teaching quality test] Throughout all the events in my life, one in particular sticks out more than the others. As I reflect on this significant event, a smile spreads across my face. As I think of Shanda, I feel loved and grateful.

It was my twelfth year of dancing. I thought it would end up like any other year: stuck in emptiness, forgotten and without the belief of any teacher or friend that I really had the potential to achieve greatness. However, I met Shanda, a young, talented choreographer (编舞者). She influenced me to work to the best of my ability, pushed me to keep going when I wanted to give up, encouraged me and showed me the real importance of dancing. Throughout our hard work, not only did my ability to dance grow, but my friendship with Shanda grew as well.

With the end of the year came our show time. As I walked to a backstage filled with other dancers, I hoped for a good performance that would prove my improvement. I waited anxiously for my turn. Finally, after what seemed like days, the loudspeaker announced my name. Butterflies filled my stomach as I took trembling steps onto the big lighted stage. But, with the determination to succeed and eagerness to live up to Shanda's expectations for me, I began to dance. All my troubles and nerves went away as I danced my whole heart out.

As I walked up to the judge to receive my first place shining gold trophy (奖杯), I realized that dance is not about becoming the best. It was about loving dance for dance itself, a getaway from all my problems in the world. Shanda showed me that you could let everything go and just dance what you feel at that moment. After all the doubts that people had in me, I believed in myself and did not care what others thought. Thanks to Shanda, dance became more than a love of mine, but a passion.

1. What did the author think her dancing would be for the twelfth year?
A. A change for the better.  B. A disappointment as before.  C. A proof of her potential.  D. A pride of her teachers and friends.
2. How did Shanda help the author?
A. By offering her financial help.  B. By entering her in a competition.  C. By coaching her for longer hours.  D. By awakening her passion for dancing.
3. How did the author feel when she stepped on the stage?
A. Proud.  B. Nervous.  C. Scared.  D. Relieved.
4. What can we learn from the author's story?
A. Success lies in patience.  B. Fame is a great thirst of the young.  C. A good teacher matters.  D. A youth is to be treated with respect.

B

[2023 Liaoning schools second mock exam] Almost a decade ago, researchers at Yale University launched a global database called Map of Life to track biodiversity distributions across the planet. Now, the team added a new feature to the database that predicts where species currently unknown to scientists may be hiding.

In 2018, ecologist Mario Moura of the Federal University of Paraiba in Brazil teamed up with Yale ecologist Walter Jetz, who took the lead in the initial creation of the Map of Life. The pair set out to identify where 85 percent of Earth's undiscovered species may be. For two years, the team collected information about 32,000 vertebrate (脊椎动物) species. Data on population size, geographical range, historical discovery dates and other biological characteristics were used to create a computer model that estimated where undescribed species might exist today. The model found tropical environments in countries including Brazil, Indonesia, Madagascar, and Colombia house the most undiscovered species.
Smaller animals have limited ranges that may be inaccessible, making their detection more difficult. In contrast, larger animals that occupy greater geographic ranges are more likely to be discovered, the researchers explain.

"It is striking to see the importance of tropical forests as the birthplace of discoveries, stressing the urgent need to protect tropical forests and address the need of controlling deforestation rate if we want a chance to truly discover our biodiversity," said Moura.

The map comes at a crucial time when Earth is facing a biodiversity crisis. It was reported that there was a 68 percent decrease in vertebrate species populations between 1970 and 2006 and a 94 percent decline in animal populations in the Americas' tropical subregions. "At the current pace of global environmental change, there is no doubt that many species will go extinct before we have ever learned about their existence and had the chance to consider their fate," Jetz said.

5. What can be learned about the Map of Life?
A. It only tracks biodiversity distributions.  B. It was initially created by Mario Moura.  C. It predicts where undiscovered species exist.  D. It managed to locate 85% of the undiscovered species.
6. Which factor makes animals easier to discover?
A. Location.  B. Species.  C. Size.  D. Population.
7. What does the underlined word "address" mean in Paragraph 4?
A. Tackle.  B. Ignore.  C. Maintain.  D. Postpone.
8. What can we infer from the last two paragraphs?
A. Tropical animal populations have slightly declined.  B. The Map of Life is significant to protecting biodiversity.  C. Tropical forests are the birthplace of many extinct species.  D. Many species will undoubtedly go extinct even if discovered.

C

This is the digital age, and the advice to managers is clear. If you don't know what ChatGPT is or dislike the idea of working with a robot, enjoy your retirement. So, as for the present you should get for your manager this festive season, a good choice may be anything made of paper. Undoubtedly, it can serve as a useful reminder of where the digital world's limitations lie. Several recent studies highlighted the enduring value of this ancient technology in several different aspects.

A study by Vicky Morwitz of Columbia Business School, Yanliu Huang of Drexel University and Zhen Yang of California State University, Fullerton, finds that paper calendars produce different behaviours from digital calendars. Users of old-fashioned calendars made more detailed project plans than those looking at an app, and they were more likely to stick to those plans. Simple dimensions seem to count. The ability to see lots of days at once on a paper calendar matters.

Here is another study from Maferima Touré-Tillery of the Kellogg School of Management at Northwestern University and Lili Wang of Zhejiang University. In one part of their study, the researchers asked strangers to take a survey. Half the respondents were given a pen and paper to fill out a form; the other half were handed an iPad. When asked for their email address to receive information, those who used paper were much likelier to decide on a positive answer. The researchers believe that people make better decisions on paper because it feels more consequential than a digital screen. Paper-and-pen respondents were more likely than iPad users to think their choices indicated their characters better.

Researchers had other findings. They found shoppers were willing to pay more for reading materials in printed form than those they could only download online.
Even the sight of someone handling something can help online sales. Similarly, people browsing (浏览) in a virtual-reality (虚拟现实) shop were more willing to buy a T-shirt if they saw their own virtual hand touch it.

9. How does the author lead in the topic?
A. By telling a story.  B. By giving examples.  C. By raising questions.  D. By describing a situation.
10. Why can paper calendars make users stick to plans better?
A. They are a better reminder.  B. They can show more detailed plans.  C. They provide chances for people to practice writing.  D. They provide a better view of many days' plans at a time.
11. Which of the following did paper influence based on Paragraph 3?
A. Decision.  B. Sympathy.  C. Efficiency.  D. Responsibility.
12. What can we infer from the last paragraph?
A. Paper posters will greatly promote sales online.  B. E-magazines are thought less valuable than paper ones.  C. Seeing others buy will increase one's purchasing desire.  D. People prefer items made of paper instead of other materials.

[Answer area]

Reading comprehension practice by question type (4)
A. [Passage analysis] This is a narrative text.
The role of β-amyloid in Alzheimer's disease
Zheng Lingyan; Han Ruilan; Cao Junyan

[Abstract] Alzheimer's disease has become one of the diseases threatening human health. Studies of its onset, progression, and mechanisms indicate that Aβ plays a central role in the pathological process of Alzheimer's disease, leading to AD through multiple targets and multiple pathways. In recent years, studies have shown that β-amyloid plays an important role in promoting neuronal apoptosis: it can trigger apoptosis directly, and other pathogenic factors can also accelerate neuronal apoptosis indirectly through the toxic action of β-amyloid. Drawing on the literature on the neurotoxic effects of Aβ, this article reviews its role in the formation of AD.
[Journal] Journal of Inner Mongolia Medical University
[Year (volume), issue] 2016 (038) 002
[Pages] 5 (pp. 147-150, 155)
[Keywords] Alzheimer's disease; β-amyloid; neurotoxicity
[Authors] Zheng Lingyan; Han Ruilan; Cao Junyan
[Affiliation] School of Pharmacy, Inner Mongolia Medical University, Hohhot, Inner Mongolia 010110
[Language] Chinese
[CLC number] R742.5

Alzheimer's disease (AD) is the main type of senile dementia, a progressive and ultimately fatal neurodegenerative disease. Its etiology and molecular mechanisms are highly complex; proposed explanations include the cholinergic hypothesis, the Aβ (β-amyloid) cascade hypothesis, the oxidative stress hypothesis, the neuronal apoptosis hypothesis, the immune and inflammation hypothesis, the genetic hypothesis, the toxic metal ion hypothesis, the calcium metabolism disorder hypothesis, and the estrogen deficiency hypothesis [1]. However, many studies suggest that Aβ may be the common pathway through which various causes induce AD and a key factor in the formation and progression of the disease [2]; an imbalance between the production and clearance of Aβ is the initiating factor that leads to neuronal degeneration and dementia [3]. Studying the production, metabolism, and toxicity of Aβ in the formation of AD is therefore of great significance. This article surveys the literature on the role of Aβ in Alzheimer's disease and reviews its neurotoxic role in the pathogenesis of AD.

Aβ is a fragment of 39-43 amino acids produced when the amyloid precursor protein (APP), located in the cell membrane, is cleaved along an enzymatic pathway. APP has a molecular weight of 110-135 kD, and its gene is located in the middle segment of the long arm of human chromosome 21. After transcription, depending on the splice site, the resulting mRNA can be translated into several isoforms: APP695, APP751, and APP770.
Latest theory test questions and answers (English)

Part I: Multiple choice (1 point each, 10 points total)
1. The word "phenomenon" is most closely related to which of the following concepts?
A. Event  B. Fact  C. Theory  D. Hypothesis
Answer: C
2. In the context of scientific research, what does the term "hypothesis" refer to?
A. A proven fact  B. A testable statement  C. A final conclusion  D. An unverifiable assumption
Answer: B
3. Which of the following is NOT a characteristic of scientific theories?
A. They are based on empirical evidence.  B. They are subject to change.  C. They are always universally applicable.  D. They are supported by a body of evidence.
Answer: C
4. The scientific method typically involves which of the following steps?
A. Observation, hypothesis, experimentation, conclusion  B. Hypothesis, observation, conclusion, experimentation  C. Experimentation, hypothesis, observation, conclusion  D. Conclusion, hypothesis, observation, experimentation
Answer: A
5. What is the role of experimentation in the scientific process?
A. To confirm a hypothesis  B. To disprove a hypothesis  C. To provide evidence for or against a hypothesis  D. To replace the need for a hypothesis
Answer: C
6. The term "paradigm shift" in the philosophy of science refers to:
A. A minor change in scientific theory  B. A significant change in the dominant scientific view  C. The process of scientific discovery  D. The end of scientific inquiry
Answer: B
7. Which of the following is an example of inductive reasoning?
A. Observing a pattern and making a general rule  B. Drawing a specific conclusion from a general rule  C. Making a prediction based on a hypothesis  D. Testing a hypothesis through experimentation
Answer: A
8. Deductive reasoning is characterized by:
A. Starting with a specific observation and drawing a general conclusion  B. Starting with a general rule and applying it to a specific case  C. Making assumptions without evidence  D. Relying on intuition rather than logic
Answer: B
9. In scientific research, what is the purpose of a control group?
A. To provide a baseline for comparison  B. To test an alternative hypothesis  C. To increase the number of participants  D. To confirm the results of previous studies
Answer: A
10. The principle of falsifiability, introduced by Karl Popper, suggests that:
A. Scientific theories must be proven true  B. Scientific theories must be able to withstand attempts at being disproven  C. Scientific theories are never wrong  D. Scientific theories are always based on personal beliefs
Answer: B

Part II: Fill in the blanks (1 point each, 5 points total)
1. The scientific method is a systematic approach to __________ knowledge through observation, experimentation, and __________.
Answer: gaining; logical reasoning
2. A scientific law is a statement that describes a __________ pattern observed in nature, while a scientific theory explains the __________ behind these patterns.
Answer: recurring; underlying principles
3. The process of peer review in scientific publishing is important because it helps to ensure the __________ and __________ of research findings.
Answer: validity; reliability
4. In the context of scientific inquiry, an __________ is a tentative explanation for an aspect of the natural world that is based on a limited range of __________.
Answer: hypothesis; observations
5. The term "empirical" refers to knowledge that is based on __________ and observation, rather than on theory or __________.
Answer: experimentation; speculation

Part III: Short answer (5 points each, 10 points total)
1. Explain the difference between a scientific theory and a scientific law.
Answer: A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experimentation.
It is a broad framework that can encompass multiple laws and observations. A scientific law, on the other hand, is a concise verbal or mathematical statement that describes a general pattern observed in nature. Laws summarize specific phenomena, while theories explain the broader principles behind those phenomena.
2. What is the significance of the falsifiability criterion in the philosophy of science?
Answer: The falsifiability criterion, proposed by philosopher of science Karl Popper, is significant because it provides a way to distinguish between scientific and non-scientific theories. For a theory to be considered scientific, it must be testable and potentially refutable by empirical evidence. This criterion ensures that scientific theories are open…
Empirical: an explanation of the term

Introduction: "Empirical" is a frequently used term, most often applied to methods and results in scientific research. However, its precise meaning is not widely understood. This article examines what "empirical" means and offers some examples to help readers better understand the term.

I. Definition
The word "empirical" derives from the Latin "empiricus," meaning "of experience." In scientific research, the empirical method means collecting and analyzing actual data and facts through observation and practice in order to test hypotheses and inferences. In other words, the empirical method is a scientific research method that relies on experience and observation.
II. Characteristics of the empirical method
The empirical method has several distinctive characteristics, including the following:
1. Grounded in observation: observation and practice are at the core of the empirical method. Researchers observe real situations, collect data and information from them, and base their inferences and analyses on those data.
2. Reliance on data: the empirical method requires research to rest on observable data and facts. These data may be quantitative (such as numbers or statistics) or qualitative (such as observations and descriptions).
3. Replicability: the empirical method requires that results can be repeated and verified by other researchers. This means the process, methods, and results of an empirical study should be as transparent and reproducible as possible.
4. Hypothesis testing: the goal of the empirical method is to confirm or refute a hypothesis or theory. By collecting and analyzing real data, researchers can assess how reliable and accurate a hypothesis is (a minimal testing sketch follows this list).
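Hypothesis testing of the kind described in point 4 is commonly operationalized with a significance test. Below is a minimal sketch, assuming SciPy is available; the treatment and control measurements are made up for the example.

```python
from scipy import stats

# Made-up empirical data: outcome scores for a treatment group and a control group.
treatment = [8.1, 7.9, 8.4, 8.0, 8.6, 7.7, 8.3]
control   = [7.2, 7.5, 7.1, 7.8, 7.0, 7.4, 7.6]

# Two-sample t-test of the hypothesis "the treatment changes the mean outcome".
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject the null hypothesis: the difference is unlikely to be chance alone.")
else:
    print("The data do not support rejecting the null hypothesis.")
```

The 0.05 threshold is the conventional significance level, not a requirement of the method itself.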
III. Practical applications of the empirical method
The empirical method is widely used across many disciplines. Below are some concrete applications in different fields:
1. Medical research: in medicine, the empirical method is used to assess the effectiveness and safety of particular treatments. Researchers compare the outcomes of patients who receive a given treatment with those of patients who do not, in order to judge the treatment's value.
2. Economics: empirical economics makes extensive use of the empirical method. Economists use historical data, statistical models, and experiments to study economic phenomena and the effects of policies.
3. Educational research: the empirical method also plays an important role in education. Education researchers collect data such as student grades, teaching methods, and student feedback to evaluate the effectiveness of different teaching approaches and student learning outcomes.
4. Psychology: in psychology, the empirical method is used to study and analyze patterns of human behavior and thought.
From Data Mining to Knowledge Discovery in Databases
Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth
Copyright © 1996, American Association for Artificial Intelligence. All rights reserved.

Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. What is all the excitement about? This article provides an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases. The article mentions particular real-world applications, specific data-mining techniques, challenges involved in real-world applications of knowledge discovery, and current and future research directions in the field.

Across a wide variety of fields, data are being collected and accumulated at a dramatic pace. There is an urgent need for a new generation of computational theories and tools to assist humans in extracting useful information (knowledge) from the rapidly growing volumes of digital data. These theories and tools are the subject of the emerging field of knowledge discovery in databases (KDD).

At an abstract level, the KDD field is concerned with the development of methods and techniques for making sense of data. The basic problem addressed by the KDD process is one of mapping low-level data (which are typically too voluminous to understand and digest easily) into other forms that might be more compact (for example, a short report), more abstract (for example, a descriptive approximation or model of the process that generated the data), or more useful (for example, a predictive model for estimating the value of future cases). At the core of the process is the application of specific data-mining methods for pattern discovery and extraction.

This article begins by discussing the historical context of KDD and data mining and their intersection with other related fields. A brief summary of recent KDD real-world applications is provided. Definitions of KDD and data mining are provided, and the general multistep KDD process is outlined. This multistep process has the application of data-mining algorithms as one particular step in the process. The data-mining step is discussed in more detail in the context of specific data-mining algorithms and their application. Real-world practical application issues are also outlined. Finally, the article enumerates challenges for future research and development and in particular discusses potential opportunities for AI technology in KDD systems.

Why Do We Need KDD?

The traditional method of turning data into knowledge relies on manual analysis and interpretation. For example, in the health-care industry, it is common for specialists to periodically analyze current trends and changes in health-care data, say, on a quarterly basis. The specialists then provide a report detailing the analysis to the sponsoring health-care organization; this report becomes the basis for future decision making and planning for health-care management. In a totally different type of application, planetary geologists sift through remotely sensed images of planets and asteroids, carefully locating and cataloging such geologic objects of interest as impact craters. Be it science, marketing, finance, health care, retail, or any other field, the classical approach to data analysis relies fundamentally on one or more analysts becoming intimately familiar with the data and serving as an interface between the data and the users and products.

For these (and many other) applications, this form of manual probing of a data set is slow, expensive, and highly subjective. In fact, as data volumes grow dramatically, this type of manual data analysis is becoming completely impractical in many domains. Databases are increasing in size in two ways: (1) the number N of records or objects in the database and (2) the number d of fields or attributes to an object. Databases containing on the order of N = 10^9 objects are becoming increasingly common, for example, in the astronomical sciences. Similarly, the number of fields d can easily be on the order of 10^2 or even 10^3, for example, in medical diagnostic applications. Who could be expected to digest millions of records, each having tens or hundreds of fields? We believe that this job is certainly not one for humans; hence, analysis work needs to be automated, at least partially.

The need to scale up human analysis capabilities to handling the large number of bytes that we can collect is both economic and scientific. Businesses use data to gain competitive advantage, increase efficiency, and provide more valuable services to customers. Data we capture about our environment are the basic evidence we use to build theories and models of the universe we live in. Because computers have enabled humans to gather more data than we can digest, it is only natural to turn to computational techniques to help us unearth meaningful patterns and structures from the massive volumes of data. Hence, KDD is an attempt to address a problem that the digital information era made a fact of life for all of us: data overload.

Data Mining and Knowledge Discovery in the Real World

A large degree of the current interest in KDD is the result of the media interest surrounding successful KDD applications, for example, the focus articles within the last two years in Business Week, Newsweek, Byte, PC Week, and other large-circulation periodicals. Unfortunately, it is not always easy to separate fact from media hype. Nonetheless, several well-documented examples of successful systems can rightly be referred to as KDD applications and have been deployed in operational use on large-scale real-world problems in science and in business.

In science, one of the primary application areas is astronomy. Here, a notable success was achieved by SKICAT, a system used by astronomers to perform image analysis, classification, and cataloging of sky objects from sky-survey images (Fayyad, Djorgovski, and Weir 1996). In its first application, the system was used to process the 3 terabytes (10^12 bytes) of image data resulting from the Second Palomar Observatory Sky Survey, where it is estimated that on the order of 10^9 sky objects are detectable. SKICAT can outperform humans and traditional computational techniques in classifying faint sky objects. See Fayyad, Haussler, and Stolorz (1996) for a survey of scientific applications.

In business, main KDD application areas include marketing, finance (especially investment), fraud detection, manufacturing, telecommunications, and Internet agents.

Marketing: In marketing, the primary application is database marketing systems, which analyze customer databases to identify different customer groups and forecast their behavior. Business Week (Berry 1994) estimated that over half of all retailers are using or planning to use database marketing, and those who do use it have good results; for example, American Express reports a 10- to 15-percent increase in credit-card use. Another notable marketing application is market-basket analysis (Agrawal et al. 1996) systems, which find patterns such as, "If customer bought X, he/she is also likely to buy Y and Z." Such patterns are valuable to retailers; a minimal counting sketch of this kind of rule appears at the end of this section.

Investment: Numerous companies use data mining for investment, but most do not describe their systems. One exception is LBS Capital Management. Its system uses expert systems, neural nets, and genetic algorithms to manage portfolios totaling $600 million; since its start in 1993, the system has outperformed the broad stock market (Hall, Mani, and Barr 1996).

Fraud detection: HNC Falcon and Nestor PRISM systems are used for monitoring credit-card fraud, watching over millions of accounts. The FAIS system (Senator et al. 1995), from the U.S. Treasury Financial Crimes Enforcement Network, is used to identify financial transactions that might indicate money-laundering activity.

Manufacturing: The CASSIOPEE troubleshooting system, developed as part of a joint venture between General Electric and SNECMA, was applied by three major European airlines to diagnose and predict problems for the Boeing 737. To derive families of faults, clustering methods are used. CASSIOPEE received the European first prize for innovative applications (Manago and Auriol 1996).

Telecommunications: The telecommunications alarm-sequence analyzer (TASA) was built in cooperation with a manufacturer of telecommunications equipment and three telephone networks (Mannila, Toivonen, and Verkamo 1995). The system uses a novel framework for locating frequently occurring alarm episodes from the alarm stream and presenting them as rules. Large sets of discovered rules can be explored with flexible information-retrieval tools supporting interactivity and iteration. In this way, TASA offers pruning, grouping, and ordering tools to refine the results of a basic brute-force search for rules.

Data cleaning: The MERGE-PURGE system was applied to the identification of duplicate welfare claims (Hernandez and Stolfo 1995). It was used successfully on data from the Welfare Department of the State of Washington.

In other areas, a well-publicized system is IBM's ADVANCED SCOUT, a specialized data-mining system that helps National Basketball Association (NBA) coaches organize and interpret data from NBA games (U.S. News 1995). ADVANCED SCOUT was used by several of the NBA teams in 1996, including the Seattle Supersonics, which reached the NBA finals.
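The market-basket patterns quoted under Marketing above ("If customer bought X, he/she is also likely to buy Y") come down to counting itemset support and rule confidence. The sketch below is a minimal illustration of that counting in Python, not the Agrawal et al. algorithm; the transactions, item names, and thresholds are invented for the example.

```python
from itertools import combinations
from collections import Counter

# Toy transaction data; items and thresholds are illustrative only.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
]

MIN_SUPPORT = 0.4     # fraction of transactions containing the item pair
MIN_CONFIDENCE = 0.7  # estimated P(consequent | antecedent)

n = len(transactions)
item_counts = Counter()
pair_counts = Counter()
for t in transactions:
    item_counts.update(t)                          # singleton frequencies
    pair_counts.update(combinations(sorted(t), 2))  # pair co-occurrence frequencies

# Emit rules X -> Y whose support and confidence clear both thresholds.
for (x, y), c in pair_counts.items():
    if c / n < MIN_SUPPORT:
        continue
    for antecedent, consequent in ((x, y), (y, x)):
        confidence = c / item_counts[antecedent]
        if confidence >= MIN_CONFIDENCE:
            print(f"{antecedent} -> {consequent}  "
                  f"support={c / n:.2f} confidence={confidence:.2f}")
```

Real market-basket systems add candidate pruning (the Apriori property) so the counting scales to millions of transactions; the brute-force pair count above is only the underlying idea.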
Finally, a novel and increasingly important type of discovery is one based on the use of intelligent agents to navigate through an information-rich environment. Although the idea of active triggers has long been analyzed in the database field, really successful applications of this idea appeared only with the advent of the Internet. These systems ask the user to specify a profile of interest and search for related information among a wide variety of public-domain and proprietary sources. For example, FIREFLY is a personal music-recommendation agent: It asks a user his/her opinion of several music pieces and then suggests other music that the user might like (<http://www.ffl/>). CRAYON allows users to create their own free newspaper (supported by ads); NEWSHOUND from the San Jose Mercury News and FARCAST automatically search information from a wide variety of sources, including newspapers and wire services, and e-mail relevant documents directly to the user.

These are just a few of the numerous such systems that use KDD techniques to automatically produce useful information from large masses of raw data. See Piatetsky-Shapiro et al. (1996) for an overview of issues in developing industrial KDD applications.

Data Mining and KDD

Historically, the notion of finding useful patterns in data has been given a variety of names, including data mining, knowledge extraction, information discovery, information harvesting, data archaeology, and data pattern processing. The term data mining has mostly been used by statisticians, data analysts, and the management information systems (MIS) communities. It has also gained popularity in the database field. The phrase knowledge discovery in databases was coined at the first KDD workshop in 1989 (Piatetsky-Shapiro 1991) to emphasize that knowledge is the end product of a data-driven discovery. It has been popularized in the AI and machine-learning fields.

In our view, KDD refers to the overall process of discovering useful knowledge from data, and data mining refers to a particular step in this process. Data mining is the application of specific algorithms for extracting patterns from data. The distinction between the KDD process and the data-mining step (within the process) is a central point of this article. The additional steps in the KDD process, such as data preparation, data selection, data cleaning, incorporation of appropriate prior knowledge, and proper interpretation of the results of mining, are essential to ensure that useful knowledge is derived from the data. Blind application of data-mining methods (rightly criticized as data dredging in the statistical literature) can be a dangerous activity, easily leading to the discovery of meaningless and invalid patterns.

The Interdisciplinary Nature of KDD

KDD has evolved, and continues to evolve, from the intersection of research fields such as machine learning, pattern recognition, databases, statistics, AI, knowledge acquisition for expert systems, data visualization, and high-performance computing. The unifying goal is extracting high-level knowledge from low-level data in the context of large data sets.

The data-mining component of KDD currently relies heavily on known techniques from machine learning, pattern recognition, and statistics to find patterns from data in the data-mining step of the KDD process. A natural question is, How is KDD different from pattern recognition or machine learning (and related fields)? The answer is that these fields provide some of the data-mining methods that are used in the data-mining step of the KDD process. KDD focuses on the overall process of knowledge discovery from data, including how the data are stored and accessed, how algorithms can be scaled to massive data sets and still run efficiently, how results can be interpreted and visualized, and how the overall man-machine interaction can usefully be modeled and supported. The KDD process can be viewed as a multidisciplinary activity that encompasses techniques beyond the scope of any one particular discipline such as machine learning. In this context, there are clear opportunities for other fields of AI (besides machine learning) to contribute to KDD. KDD places a special emphasis on finding understandable patterns that can be interpreted as useful or interesting knowledge. Thus, for example, neural networks, although a powerful modeling tool, are relatively difficult to understand compared to decision trees. KDD also emphasizes scaling and robustness properties of modeling algorithms for large noisy data sets.

Related AI research fields include machine discovery, which targets the discovery of empirical laws from observation and experimentation (Shrager and Langley 1990) (see Kloesgen and Zytkow [1996] for a glossary of terms common to KDD and machine discovery), and causal modeling for the inference of causal models from data (Spirtes, Glymour, and Scheines 1993).

Statistics in particular has much in common with KDD (see Elder and Pregibon [1996] and Glymour et al. [1996] for a more detailed discussion of this synergy). Knowledge discovery from data is fundamentally a statistical endeavor. Statistics provides a language and framework for quantifying the uncertainty that results when one tries to infer general patterns from a particular sample of an overall population. As mentioned earlier, the term data mining has had negative connotations in statistics since the 1960s when computer-based data analysis techniques were first introduced. The concern arose because if one searches long enough in any data set (even randomly generated data), one can find patterns that appear to be statistically significant but, in fact, are not. Clearly, this issue is of fundamental importance to KDD. Substantial progress has been made in recent years in understanding such issues in statistics. Much of this work is of direct relevance to KDD. Thus, data mining is a legitimate activity as long as one understands how to do it correctly; data mining carried out poorly (without regard to the statistical aspects of the problem) is to be avoided. KDD can also be viewed as encompassing a broader view of modeling than statistics. KDD aims to provide tools to automate (to the degree possible) the entire process of data analysis and the statistician's "art" of hypothesis selection.
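The data-dredging pitfall in the preceding paragraph is easy to demonstrate: test enough candidate patterns on pure noise and roughly the significance level's share of them will look "significant". Below is a minimal sketch, assuming NumPy and SciPy (neither is prescribed by the article); the sample sizes and test count are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 200 candidate "patterns": correlations between pairs of pure-noise columns.
n_rows, n_tests = 50, 200
false_positives = 0
for _ in range(n_tests):
    x = rng.normal(size=n_rows)
    y = rng.normal(size=n_rows)  # independent of x by construction
    r, p = stats.pearsonr(x, y)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5% of the tests to "discover" a pattern that is not there.
print(f"{false_positives} of {n_tests} noise correlations look significant at p<0.05")
```

This is exactly why the evaluation and interpretation steps of the KDD process, and multiple-comparison corrections from statistics, matter as much as the mining step itself.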
A driving force behind KDD is the database field (the second D in KDD). Indeed, the problem of effective data manipulation when data cannot fit in the main memory is of fundamental importance to KDD. Database techniques for gaining efficient data access, grouping and ordering operations when accessing data, and optimizing queries constitute the basics for scaling algorithms to larger data sets. Most data-mining algorithms from statistics, pattern recognition, and machine learning assume data are in the main memory and pay no attention to how the algorithm breaks down if only limited views of the data are possible.

A related field evolving from databases is data warehousing, which refers to the popular business trend of collecting and cleaning transactional data to make them available for online analysis and decision support. Data warehousing helps set the stage for KDD in two important ways: (1) data cleaning and (2) data access.

Data cleaning: As organizations are forced to think about a unified logical view of the wide variety of data and databases they possess, they have to address the issues of mapping data to a single naming convention, uniformly representing and handling missing data, and handling noise and errors when possible.

Data access: Uniform and well-defined methods must be created for accessing the data and providing access paths to data that were historically difficult to get to (for example, stored offline).

Once organizations and individuals have solved the problem of how to store and access their data, the natural next step is the question, What else do we do with all the data? This is where opportunities for KDD naturally arise. A popular approach for analysis of data warehouses is called online analytical processing (OLAP), named for a set of principles proposed by Codd (1993). OLAP tools focus on providing multidimensional data analysis, which is superior to SQL in computing summaries and breakdowns along many dimensions. OLAP tools are targeted toward simplifying and supporting interactive data analysis, but the goal of KDD tools is to automate as much of the process as possible. Thus, KDD is a step beyond what is currently supported by most standard database systems.

Basic Definitions

KDD is the nontrivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad, Piatetsky-Shapiro, and Smyth 1996). Here, data are a set of facts (for example, cases in a database), and pattern is an expression in some language describing a subset of the data or a model applicable to the subset. Hence, in our usage here, extracting a pattern also designates fitting a model to data; finding structure from data; or, in general, making any high-level description of a set of data. The term process implies that KDD comprises many steps, which involve data preparation, search for patterns, knowledge evaluation, and refinement, all repeated in multiple iterations. By nontrivial, we mean that some search or inference is involved; that is, it is not a straightforward computation of predefined quantities like computing the average value of a set of numbers.

The discovered patterns should be valid on new data with some degree of certainty. We also want patterns to be novel (at least to the system and preferably to the user) and potentially useful, that is, lead to some benefit to the user or task. Finally, the patterns should be understandable, if not immediately then after some postprocessing.

The previous discussion implies that we can define quantitative measures for evaluating extracted patterns. In many cases, it is possible to define measures of certainty (for example, estimated prediction accuracy on new data) or utility (for example, gain, perhaps in dollars saved because of better predictions or speedup in response time of a system). Notions such as novelty and understandability are much more subjective. In certain contexts, understandability can be estimated by simplicity (for example, the number of bits to describe a pattern). An important notion, called interestingness (for example, see Silberschatz and Tuzhilin [1995] and Piatetsky-Shapiro and Matheus [1994]), is usually taken as an overall measure of pattern value, combining validity, novelty, usefulness, and simplicity. Interestingness functions can be defined explicitly or can be manifested implicitly through an ordering placed by the KDD system on the discovered patterns or models.

Given these notions, we can consider a pattern to be knowledge if it exceeds some interestingness threshold, which is by no means an attempt to define knowledge in the philosophical or even the popular view. As a matter of fact, knowledge in this definition is purely user oriented and domain specific and is determined by whatever functions and thresholds the user chooses.

Data mining is a step in the KDD process that consists of applying data analysis and discovery algorithms that, under acceptable computational efficiency limitations, produce a particular enumeration of patterns (or models) over the data. Note that the space of patterns is often infinite, and the enumeration of patterns involves some form of search in this space. Practical computational constraints place severe limits on the subspace that can be explored by a data-mining algorithm.

The KDD process involves using the database along with any required selection, preprocessing, subsampling, and transformations of it; applying data-mining methods (algorithms) to enumerate patterns from it; and evaluating the products of data mining to identify the subset of the enumerated patterns deemed knowledge. The data-mining component of the KDD process is concerned with the algorithmic means by which patterns are extracted and enumerated from data. The overall KDD process (figure 1) includes the evaluation and possible interpretation of the mined patterns to determine which patterns can be considered new knowledge. The KDD process also includes all the additional steps described in the next section.

The notion of an overall user-driven process is not unique to KDD: analogous proposals have been put forward both in statistics (Hand 1994) and in machine learning (Brodley and Smyth 1996).
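As a toy illustration of an explicitly defined interestingness function of the kind described above, one might combine a validity estimate, a novelty score, and a simplicity proxy into a single number that can be compared against a user-chosen threshold. The fields, weights, and scoring formula below are invented for illustration; the article prescribes no particular function.

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    holdout_accuracy: float  # validity: estimated accuracy on new data, in [0, 1]
    novelty: float           # in [0, 1], e.g. dissimilarity to already-known patterns
    description_bits: int    # simplicity proxy: bits needed to describe the pattern

def interestingness(p: Pattern,
                    w_valid: float = 0.5,
                    w_novel: float = 0.3,
                    w_simple: float = 0.2) -> float:
    """Illustrative weighted combination; the weights are arbitrary assumptions."""
    simplicity = 1.0 / (1.0 + p.description_bits / 32.0)  # shorter patterns score higher
    return w_valid * p.holdout_accuracy + w_novel * p.novelty + w_simple * simplicity

# A pattern is promoted to "knowledge" only if it clears the user's threshold.
THRESHOLD = 0.6
candidates = [Pattern(0.91, 0.4, 24), Pattern(0.65, 0.9, 120)]
knowledge = [p for p in candidates if interestingness(p) > THRESHOLD]
print(knowledge)
```

The point of the sketch is the structure, not the numbers: validity, novelty, and simplicity are combined, and the threshold is user- and domain-specific, exactly as the text says.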
The KDD Process

The KDD process is interactive and iterative, involving numerous steps with many decisions made by the user. Brachman and Anand (1996) give a practical view of the KDD process, emphasizing the interactive nature of the process. Here, we broadly outline some of its basic steps:

First is developing an understanding of the application domain and the relevant prior knowledge and identifying the goal of the KDD process from the customer's viewpoint.

Second is creating a target data set: selecting a data set, or focusing on a subset of variables or data samples, on which discovery is to be performed.

Third is data cleaning and preprocessing. Basic operations include removing noise if appropriate, collecting the necessary information to model or account for noise, deciding on strategies for handling missing data fields, and accounting for time-sequence information and known changes.

Fourth is data reduction and projection: finding useful features to represent the data depending on the goal of the task. With dimensionality reduction or transformation methods, the effective number of variables under consideration can be reduced, or invariant representations for the data can be found.

Fifth is matching the goals of the KDD process (step 1) to a particular data-mining method. For example, summarization, classification, regression, clustering, and so on, are described later as well as in Fayyad, Piatetsky-Shapiro, and Smyth (1996).

Sixth is exploratory analysis and model and hypothesis selection: choosing the data-mining algorithm(s) and selecting method(s) to be used for searching for data patterns. This process includes deciding which models and parameters might be appropriate (for example, models of categorical data are different than models of vectors over the reals) and matching a particular data-mining method with the overall criteria of the KDD process (for example, the end user might be more interested in understanding the model than its predictive capabilities).

Seventh is data mining: searching for patterns of interest in a particular representational form or a set of such representations, including classification rules or trees, regression, and clustering. The user can significantly aid the data-mining method by correctly performing the preceding steps.

Eighth is interpreting mined patterns, possibly returning to any of steps 1 through 7 for further iteration. This step can also involve visualization of the extracted patterns and models or visualization of the data given the extracted models.

Ninth is acting on the discovered knowledge: using the knowledge directly, incorporating the knowledge into another system for further action, or simply documenting it and reporting it to interested parties. This process also includes checking for and resolving potential conflicts with previously believed (or extracted) knowledge.

The KDD process can involve significant iteration and can contain loops between any two steps. The basic flow of steps (although not the potential multitude of iterations and loops) is illustrated in figure 1.

[Figure 1. An Overview of the Steps That Compose the KDD Process.]

Most previous work on KDD has focused on step 7, the data mining. However, the other steps are as important (and probably more so) for the successful application of KDD in practice.
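Below is a minimal sketch of the multistep flow of figure 1, with each stage as a separate Python function so the loops between steps have obvious re-entry points. The stage bodies are placeholders invented for illustration, not anyone's production implementation.

```python
# Skeleton of the KDD process of figure 1; every stage is a plain function
# so any step can be revisited, as the iteration loops in the text describe.
def select(database, columns):          # step 2: create the target data set
    return [{c: row[c] for c in columns} for row in database]

def clean(records):                     # step 3: here, drop rows with missing fields
    return [r for r in records if all(v is not None for v in r.values())]

def transform(records):                 # step 4: projection / feature construction
    return records                      # placeholder: identity transform

def mine(records):                      # step 7: a trivial "pattern": per-field means
    keys = records[0].keys()
    return {k: sum(r[k] for r in records) / len(records) for k in keys}

def interpret(pattern):                 # step 8: evaluate, possibly loop back
    return pattern

database = [
    {"income": 40_000, "debt": 12_000, "age": 31},
    {"income": 72_000, "debt": None,   "age": 45},
    {"income": 55_000, "debt": 9_000,  "age": 38},
]
pattern = interpret(mine(transform(clean(select(database, ["income", "debt"])))))
print(pattern)  # {'income': 47500.0, 'debt': 10500.0}
```

The mining step here is deliberately trivial; the structural point is that it is only one stage in a longer pipeline whose other stages decide what it sees and what becomes of its output.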
Having defined the basic notions and introduced the KDD process, we now focus on the data-mining component, which has, by far, received the most attention in the literature.

The Data-Mining Step of the KDD Process

The data-mining component of the KDD process often involves repeated iterative application of particular data-mining methods. This section presents an overview of the primary goals of data mining, a description of the methods used to address these goals, and a brief description of the data-mining algorithms that incorporate these methods.

The knowledge discovery goals are defined by the intended use of the system. We can distinguish two types of goals: (1) verification and (2) discovery. With verification, the system is limited to verifying the user's hypothesis. With discovery, the system autonomously finds new patterns. We further subdivide the discovery goal into prediction, where the system finds patterns for predicting the future behavior of some entities, and description, where the system finds patterns for presentation to a user in a human-understandable form. In this article, we are primarily concerned with discovery-oriented data mining.

Data mining involves fitting models to, or determining patterns from, observed data. The fitted models play the role of inferred knowledge: Whether the models reflect useful or interesting knowledge is part of the overall, interactive KDD process where subjective human judgment is typically required. Two primary mathematical formalisms are used in model fitting: (1) statistical and (2) logical. The statistical approach allows for nondeterministic effects in the model, whereas a logical model is purely deterministic. We focus primarily on the statistical approach to data mining, which tends to be the most widely used basis for practical data-mining applications given the typical presence of uncertainty in real-world data-generating processes.

Most data-mining methods are based on tried and tested techniques from machine learning, pattern recognition, and statistics: classification, clustering, regression, and so on. The array of different algorithms under each of these headings can often be bewildering to both the novice and the experienced data analyst. It should be emphasized that of the many data-mining methods advertised in the literature, there are really only a few fundamental techniques. The actual underlying model representation being used by a particular method typically comes from a composition of a small number of well-known options: polynomials, splines, kernel and basis functions, threshold-Boolean functions, and so on. Thus, algorithms tend to differ primarily in the goodness-of-fit criterion used to evaluate model fit or in the search method used to find a good fit.

In our brief overview of data-mining methods, we try in particular to convey the notion that most (if not all) methods can be viewed as extensions or hybrids of a few basic techniques and principles. We first discuss the primary methods of data mining and then show that the data-mining methods can be viewed as consisting of three primary algorithmic components: (1) model representation, (2) model evaluation, and (3) search. In the discussion of KDD and data-mining methods, we use a simple example to make some of the notions more concrete. Figure 2 shows a simple two-dimensional artificial data set consisting of 23 cases. Each point on the graph represents a person who has been given a loan by a particular bank at some time in the past. The horizontal axis represents the income of the person; the vertical axis represents the total personal debt of the person (mortgage, car payments, and so on). The data have been classified into two classes: (1) the x's represent persons who have defaulted on their loans and (2) the o's represent persons whose loans are in good status with the bank. Thus, this simple artificial data set could represent a historical data set that can contain useful knowledge from the point of view of the bank making the loans. Note that in actual KDD applications, there are typically many more dimensions (as many as several hundreds) and many more data points (many thousands or even millions).

[Figure 2. A Simple Data Set with Two Classes Used for Illustrative Purposes.]
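The figure 2 setting is easy to reproduce with a synthetic stand-in: two fields (income, debt), two classes (defaulted vs. good status), and a shallow decision tree, one of the "understandable" model representations the article contrasts with neural networks. The data-generating rule and the scikit-learn dependency below are assumptions for the example, not the article's 23 cases.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Synthetic stand-in for figure 2: income vs. debt, two classes.
n = 200
income = rng.uniform(20_000, 120_000, n)
debt = rng.uniform(0, 60_000, n)
# Assumed generating rule: high debt relative to income tends to default.
defaulted = (debt > 0.45 * income + rng.normal(0, 6_000, n)).astype(int)

X = np.column_stack([income, debt])
tree = DecisionTreeClassifier(max_depth=2).fit(X, defaulted)

# The learned pattern is an interpretable set of axis-parallel threshold
# rules over the two fields -- a human-readable model representation.
print(export_text(tree, feature_names=["income", "debt"]))
```

Whether such a tree counts as knowledge is, per the definitions above, a question of validity on new loans, novelty to the bank, and the interestingness threshold the user sets, not of the fitting step alone.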
Shandong Province Elite Schools Examination Alliance, 2023-2024 academic year, Grade 11 (Senior 2), first semester, November midterm English test
School: ___________ Name: ___________ Class: ___________ Examinee number: ___________

Part I: Reading comprehension

2023 Hot List: The Best New Restaurants in the World

Place des Fetes — New York City
This famous wine bar provides a spot with a rare sweet and warm atmosphere. For date night, go to the bar with views of the open kitchen, or fill up the large table in the back with a group and taste the entire item menu. Either way, do not miss the famous mushroom soup.

Le Doyenne — Saint-Vrain, France
Australian chefs James Henry and Shaun Kelly transformed the former stables (马厩) of a 19th-century private estate into a working farm, restaurant, and guesthouse driven by the principles of regenerative agriculture. More than one hundred varieties of fruits, vegetables, and herbs make their way into Henry's cooking after being carefully nurtured by Kelly.

Mi Compa Chava — Mexico City
Almost everyone eating here is devoted to fixing last night's damage from drunkenness and getting a head start on creating today's. On the sidewalk, crowds of locals and tourists alike line up for fisherman Salvador Orozco's creative takes on Sinaloa and Baja seafood. Anything from the raw half of the menu is a sure bet, though cooked dishes like fish can help fill out a meal.

Vilas — Bangkok
Can a dish inspired by a Spanish recipe using Japanese ingredients (原料) still be considered Thai? For Chef Prin Polsuk, one of Bangkok's most famous Thai chefs, it most certainly can. At his latest restaurant, a small dining room at the base of Bangkok's landmark King Power Mahanakhon Tower, he draws inspiration from King Chulalongkorn's 1897 journey around Europe and the foreign ingredients and cooking techniques he added to the royal cookbooks.

1. What can you do in Place des Fetes — New York City?
A. Drink the red wine.  B. Taste the mushroom soup.
2. Which restaurant best suits people who suffer from alcohol?
A. Place des Fetes.  B. Le Doyenne.  C. Mi Compa Chava.  D. Vilas.
3. What's the purpose of the text?
A. To introduce the features of some restaurants.  B. To compare the origins of some restaurants.  C. To state the similarities of some restaurants.  D. To recommend some foods of some restaurants.

The 36-year-old Jia Juntingxian was born in Pingxiang, Jiangxi Province, and was blind in both eyes due to congenital eye disease. She has shown athletic talent since childhood and was selected as a track and field athlete by Jiangxi Disabled Persons' Federation.

Although she can't see the world, Jia breaks through the "immediate" obstacles again and again while running, letting the world see her. In her sports career, Jia has won 43 national and world-class sports medals. Among them, in 2016, she broke the world record and stood on the podium (领奖台) of the women's T11-T13 4×100-meter relay event at the Rio Paralympics.

In 2017, Jia retired and chose to become a teacher at a special education school. Just a year ago, she found out that two young brothers, with visual impairments (视觉障碍), wanted to be athletes. They had never attended a special education school and never achieved their athletic dream. Jia could only help them attend a local special education school. The experience made her realize that these children living in remote areas may have little knowledge of special education. Even she didn't know about such schools until late into her education.
Therefore, she decided to become more involved with special education. Changing from a Paralympic competitor to a special education teacher, Jia said that there is no discomfort: "Because I understand the students as well as myself and know the inconveniences and difficulties of the children. I hope that every child is like a different seed. Through hard study, they can bravely realize their own life."

Jia also has paid close attention to the rights and interests of disabled people. In 2021, Jia proposed the construction of audible (听得见的) traffic signals for blind people. Her advice to local authorities on dog management has resulted in more indoor public places allowing guide dogs. She has also set up a massage shop and currently employs 16 visually impaired people, with an average monthly salary of 3,500 yuan per person.

Jia always believes that the world is a circle; as long as the love of others is constantly passed on, the whole society will be full of love!

4. What can we learn about Jia from the passage?
A. She won 43 sports medals in her country.  B. She was strong-minded despite her disability.  C. She was good at sports at the age of 5 years old.  D. She never won national and world-class sports medals.
5. What made Jia decide to occupy herself in special education?
A. The high salary of special education.  B. Her wish to enrich her life after sports.  C. Local government's need for special education.  D. Her experience of helping two disabled brothers.
6. Which of the following best describes Jia's job on special education?
A. Boring and dangerous.  B. Patient and generous.  C. Humorous and brave.  D. Devoted and selfless.
7. What did Jia do to help the disabled?
A. She constructed audible traffic signals.  B. She set up a massage shop on her own.  C. She advised increasing indoor public places.  D. She provided employment opportunity for the blind.

Coral reefs in Florida have lost an estimated 90% of their corals in the last 40 years. This summer, a marine heat wave hit Florida's coral reefs. The record high temperatures created an extremely stressful environment for the coral reefs, which are currently also experiencing intense coral bleaching (白化).

A coral is an animal, which has a symbiotic relationship with a microscopic algae (藻类). The algae gets energy from the sun and shares it with the coral internally. The coral builds a rock-like structure, which makes up most of the reef, providing homes and food for many organisms that live there. Coral bleaching is when the symbiotic relationship breaks down. Without the algae, the corals appear white because the rock skeleton becomes visible. If the bleaching lasts too long, the coral starves.

Florida is on the front lines of climate change. It is also on the cutting edge of restoration science. Many labs, institutions and other organizations are working nonstop to protect and maintain the coral reefs. This includes efforts to understand what is troubling the reef, from disease outbreaks to coastal development impacts. It also includes harvesting coral spawn (卵), or growing and planting coral parts. Scientists moved many coral nurseries into deeper water and shore-based facilities during this marine heat wave. They are digging into the DNA of the coral to discover which species will survive best in future.

There are some bright spots in the story, however. Some corals have recovered from the bleaching, and many did not bleach at all. In addition, researchers recorded coral spawning. Although it's not clear yet whether the larvae (幼虫) will be successful in the wild, it's a sign of recovery potential. If the baby corals survive, they will be able to regrow the reef.
They just have to avoid one big boss: human-induced climate change.

8. What does the underlined word "symbiotic" in paragraph 2 mean?
A. Reliable.  B. Opposite.  C. Harmonious.  D. Contradictory.
9. What caused the coral bleaching?
A. The rock skeleton.  B. The microscopic algae.  C. The high temperatures.  D. The symbiotic relationship.
10. Which is not one of the efforts scientists made to help coral reefs?
A. Transferring coral nurseries.  B. Growing and planting coral spawn.  C. Researching the DNA of the coral.  D. Figuring out the reasons for problems.
11. Which of the following best describes the impact of scientists' efforts?
A. Identifiable.  B. Predictable.  C. Far-reaching.  D. Effective.

Scientists at the UCL Institute for Neurology have developed new tools, based on AI language models, that can characterize subtle signatures in the speech of patients diagnosed with schizophrenia (精神分裂症). The research, published in PNAS, aims to understand how the automated analysis of language could help doctors and scientists diagnose and assess psychiatric (精神病的) conditions.

Currently, psychiatric diagnosis is based almost entirely on talking with patients and those close to them, with only a minimal role for tests such as blood tests and brain scans. However, this lack of precision prevents a richer understanding of the causes of mental illness.

The researchers asked 26 participants with schizophrenia and 26 control participants to complete two verbal fluency tasks, where they were asked to name as many words as they could either belonging to the category "animals" or starting with the letter "p" in five minutes. To analyze the answers given by participants, the team used an AI language model to represent the meaning of words in a similar way to humans. They tested whether the words people naturally recalled could be predicted by the AI model, and whether this predictability was reduced in patients with schizophrenia.

They found that the answers given by control participants were indeed more predictable by the AI model than those generated by people with schizophrenia, and that this difference was largest in patients with more severe symptoms. The researchers think that this difference might have to do with the way the brain learns relationships between memories and ideas, and stores this information in so-called "cognitive maps".

The team now plan to use this technology in a larger sample of patients, across more diverse speech settings, to test whether it might prove useful in the clinic. Lead author, Dr. Matthew Nour, said: "There is enormous interest in using AI language models in medicine. If these tools prove safe, I expect they will begin to be used in the clinic within the next decade."

12. What is the disadvantage of current psychiatric diagnosis?
A. It is greatly related to blood tests.  B. It mostly relies on talking with patients.  C. It refers to the words of patients' family.  D. It can't comprehend schizophrenia deeply.
13. What is paragraph 3 mainly about?
A. The process of the research.  B. The tasks of the participants.  C. The performance of researchers.  D. The predictability of AI language models.
14. What is Dr Matthew Nour's attitude toward AI language models?
A. Unclear.  B. Positive.  C. Doubtful.  D. Negative.
15. What can be a suitable title for the text?
A. New AI language tools used in the clinic.
B. AI language tools developed by scientists.
C. AI language models treating schizophrenia.
D. AI language models diagnosing schizophrenia.

Protecting from above
A deadly asteroid (小行星) heading toward the Earth is a common plot in sci-fi movies. 16 An increasing number of space agencies are now taking steps to defend against near-Earth asteroids (NEAs).
17 Wu Yanhua, deputy director of the China National Space Administration (CNSA), recently told CCTV News that China will start to build Earth- and space-based monitoring and warning systems to detect NEAs. 18 In 2025 or 2026, China hopes to be able to closely observe approaching asteroids before impacting them to change their path toward our planet.
Making an impact
NASA also has its own program for developing technology to deflect (使转向) incoming asteroids. On Nov 23, 2021, the Double Asteroid Redirection Test (DART) was launched to slam into Dimorphos and change the speed at which it orbits its space neighbor, Didymos, an asteroid approximately 2,560 feet in diameter (直径). 19
Global effort
20 It also re-launched its Planetary Defense Office in 2021, according to Electronics Weekly. Restarting the program, which seeks to communicate with space agencies around the world, is due to "the global character of the dangers we all face due to asteroids", said ESA Director General Josef Aschbacher.
A. Plan to protect.
B. Taking prompt actions.
C. But most people believe this is only an imagination.
D. However, this is also a risk we should be worried about in real life.
E. They are aimed at classifying incoming NEAs depending on the risks they pose.
F. The European Space Agency (ESA) signed a deal to make a spacecraft for a joint mission with NASA.
G. This will help prove out one viable (可行的) way to protect our planet from a dangerous asteroid.

Watching a plane fly across the sky as a young boy, Todd Smith knew that flying was what he wanted to do when he was older. After five years' training, he finally 21 his dream job in his late twenties, working as an airline pilot. But in 2019, the travel firm he was 22 for was closed down.
By this time Mr Smith had become increasingly 23 about the growing threat of climate change, and the aviation (航空) industry's carbon emissions (碳排放). "I had an uncomfortable 24 ," he says. "I was really eager to get involved in environmental protest groups, but I knew it would ruin my 25 , and I had a lot of 26 . It would be easier to return to the industry and pay them off."
Yet after hesitating for several months, Mr Smith finally 27 to quit his flying career for good. "I prefer flying and 28 interesting destinations, and earning a decent 29 ," says Mr Smith. "But when we are 30 the climate and ecological emergency, how could I possibly 31 my needs? We need to think about how to 32 the biggest threat to humanity."
Giving up his dream job was a 33 decision, he says. "Financially I've been really 34 .
It's been challenging, but taking action has 35 my anxiety." Mr Smith is now a climate activist.
21. A. quit B. changed C. completed D. landed
22. A. waiting B. preparing C. working D. looking
23. A. concerned B. curious C. serious D. doubtful
24. A. tension B. conflict C. solution D. passion
25. A. fame B. life C. ambition D. career
26. A. needs B. debts C. pressures D. troubles
27. A. refused B. promised C. expected D. decided
28. A. discovering B. comparing C. recording D. visiting
29. A. salary B. honor C. award D. title
30. A. accustomed to B. faced with C. addicted to D. trapped in
31. A. remove B. raise C. meet D. stress
32. A. issue B. view C. make D. handle
34. A. saving B. struggling C. investing D. contributing
35. A. covered B. balanced C. eased D. increased

IV. Complete the passage with the proper forms of the words. Read the following material and fill in each blank with appropriate content (one word) or the correct form of the word given in brackets.
Research advances in multimodal brain monitoring in evaluating cerebral edema after acute massive cerebral infarction
ZHU Bing, CHEN Lixia (The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China)
Abstract: With the aging of the population, cerebrovascular diseases have become the second leading cause of death in the world, and acute cerebral infarction accounts for about 80% of cerebrovascular diseases. Acute massive cerebral infarction refers to large-area ischemia and necrosis of brain tissue due to arterial occlusion caused by atherosclerosis and thrombosis of the internal carotid artery or the middle cerebral artery, and it is characterized by high incidence, mortality, and disability rates. After the onset of acute massive cerebral infarction, the condition is severe and progresses rapidly; cerebral edema develops after the damage and necrosis of a large number of brain cells, which further compresses nerves and causes additional brain tissue damage, resulting in life-threatening conditions such as cerebral hernia if timely and effective treatment is not given. Therefore, bedside dynamic monitoring of changes in cerebral edema is of great importance for assessing the disease condition, judging prognosis, and guiding clinical treatment. This article reviews the application value of various monitoring methods in the dynamic assessment of cerebral edema after acute massive cerebral infarction, so as to provide a reference for future clinical diagnosis and treatment.
Key words: Acute massive cerebral infarction; Cerebral edema; Multimodal brain monitoring
Chinese Library Classification: R743.3; Document code: A

Massive cerebral infarction accounts for about 10% of all stroke types. Once it occurs, the condition changes rapidly, is complex, and carries a mortality rate of up to 80%. During ischemia, owing to the failure of energy-dependent ion transport and the disruption of the blood-brain barrier (BBB) [1,2], excess fluid accumulates in the intracellular and extracellular spaces of the brain, which leads to tissue swelling and elevated intracranial pressure.
Collaborative filtering
Collaborative filtering is a novel technique. It was first proposed as early as 1989, but it did not see industrial application until the 21st century. Representative applications abroad include Last.fm, Digg, and so on.
Recently, because of my graduation thesis, I started to research this topic. After more than a week of reading papers and related material, I decided to write up a summary of what this period of collecting material has produced.
Microsoft's 1998 paper on collaborative filtering [1] divided the field into two schools: Memory-Based and Model-Based. Memory-Based algorithms use the records of users' operations in the system to generate recommendations, and they mainly come in two flavors. One is User-Based: exploit the similarity between users to form a set of nearest neighbors, and when a recommendation is needed, take the items with the highest recommendation scores from those nearest neighbors. The other is Item-Based: build on the relationships between items and make recommendations at the item level; even a fairly basic implementation of this approach can achieve decent results. Experimental results also show that Item-Based methods are more effective than User-Based ones [2].
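To make the Item-Based idea concrete, here is a minimal Python sketch (my own illustration, not code from the papers cited above): it assumes a tiny user-item rating matrix, measures item-item similarity with plain cosine similarity, and scores an unrated item as a similarity-weighted average of the user's other ratings.

    import numpy as np

    # Minimal item-based CF sketch (illustrative; the data are made up).
    # Rows are users, columns are items; 0 means "not rated".
    ratings = np.array([[5, 3, 0, 1],
                        [4, 0, 0, 1],
                        [1, 1, 0, 5],
                        [0, 0, 5, 4]], dtype=float)

    def cosine_sim(a, b):
        # Cosine similarity between two item rating vectors.
        den = np.linalg.norm(a) * np.linalg.norm(b)
        return (a @ b) / den if den else 0.0

    n_items = ratings.shape[1]
    # Precompute the item-item similarity matrix.
    sim = np.array([[cosine_sim(ratings[:, i], ratings[:, j])
                     for j in range(n_items)] for i in range(n_items)])

    def predict(user, item):
        # Score an unrated item as the similarity-weighted average of the
        # user's ratings on the items he or she has already rated.
        rated = np.nonzero(ratings[user])[0]
        weights = sim[item, rated]
        total = weights.sum()
        return (weights @ ratings[user, rated]) / total if total else 0.0

    print(predict(0, 2))  # predicted score of user 0 on item 2

A production system would typically mean-center the ratings (adjusted cosine) and keep only the top-k neighbors per item, but the shape of the computation is the same.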
Model-Based algorithms, by contrast, apply modeling techniques from machine learning: the model is precomputed offline so that results can be produced quickly online. The main algorithms used are Bayesian belief nets, clustering, and latent semantic models; in recent years CF algorithms based on SVMs and the like have also appeared.
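As a hedged illustration of the Model-Based style, the sketch below trains a toy latent-factor model (the "latent semantic" family) offline by factorizing the rating matrix with stochastic gradient descent; the hyperparameters are arbitrary demonstration values, not recommendations from the literature.

    import numpy as np

    # Toy latent-factor model (one Model-Based flavor): factorize the rating
    # matrix R ~ P @ Q.T offline with SGD, then score any (user, item) online.
    rng = np.random.default_rng(0)
    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 0, 5, 4]], dtype=float)
    n_users, n_items, k = R.shape[0], R.shape[1], 2
    P = rng.normal(scale=0.1, size=(n_users, k))  # user factors
    Q = rng.normal(scale=0.1, size=(n_items, k))  # item factors

    lr, reg = 0.01, 0.02
    observed = [(u, i) for u in range(n_users)
                for i in range(n_items) if R[u, i] > 0]

    for epoch in range(200):       # offline training
        for u, i in observed:
            err = R[u, i] - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])

    print(P[0] @ Q[2])  # fast online prediction for user 0, item 2

The split between an expensive offline fit and a cheap online dot product is exactly what makes the Model-Based family attractive at serving time.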
In recent years a further category has also been proposed: content-based methods, which analyze the content of the items themselves in order to make recommendations. At the current stage, some of the better production algorithms mix the several methods above. For example, Google News [3] uses an algorithm that combines the Memory-Based and Model-Based approaches in its system. In the Google paper, the authors describe how to build a large-scale recommender system, in which some of Google's efficient infrastructure, such as BigTable and MapReduce, is put to good use.
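To sketch what "mixing" can mean in the simplest case, the snippet below (an illustration of the general idea only, not the actual Google News algorithm) blends the memory-based and model-based scorers defined in the two sketches above with a single tunable weight:

    # Blend the two scorers defined in the sketches above; the weight w is a
    # hypothetical tuning parameter (e.g. chosen on held-out data).
    def hybrid_score(user, item, w=0.5):
        memory = predict(user, item)   # item-based (memory-based) score
        model = P[user] @ Q[item]      # latent-factor (model-based) score
        return w * memory + (1 - w) * model

    print(hybrid_score(0, 2))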
Predicting the key potential targets of resveratrol in the treatment of Alzheimer's disease based on network pharmacology
TIAN Xiaoyan, JIANG Siyu, ZHANG Rui, XU Shunjiang, LI Guofeng*
[Abstract] Objective: To predict the key targets of resveratrol (RSV) in the treatment of Alzheimer's disease (AD) by network pharmacology. Methods: Traditional Chinese medicines containing RSV were retrieved from the TCMSP database, and their properties and flavors, meridian distributions, and pharmacologic actions were summarized and analyzed. The targets of RSV were predicted with the SwissTargetPrediction, SEA, and HERB databases; the targets of AD were retrieved from the GeneCards, OMIM, TTD, and DisGeNRT databases; and the intersection of the RSV targets and the AD targets was taken as the set of potential therapeutic targets. GO analysis of the potential therapeutic targets was performed with the DAVID database. KEGG enrichment analysis and protein-protein interactions (PPI) of the potential therapeutic targets were obtained from the STRING database, and the PPI network was drawn with Cytoscape. Changes in the key AD targets were verified in the AlzData database, and molecular docking of RSV with the key proteins was performed on the SwissDock website. Results: Among the traditional Chinese medicines containing RSV, bitter was the most common flavor, the liver meridian the most common meridian distribution, and clearing away heat and toxic materials the most common action. There were 388 predicted targets of RSV, 1,624 targets of AD, and 119 intersection targets. The Alzheimer's disease pathway among the KEGG-enriched pathways was enriched with 27 proteins. Analysis of the AlzData database identified proteins whose expression changes in AD patients. Molecular docking showed that RSV binds well to serine/threonine kinase 1 (AKT1), interleukin-6 (IL-6), β-catenin (CTNNB1), and tumor necrosis factor (TNF). Conclusion: The network pharmacology results show that the treatment of AD by RSV is multi-target and multi-pathway, which can provide a reference for subsequent research directions.
[Key words] Network pharmacology; Resveratrol; Alzheimer's disease; Molecular docking
Chinese Library Classification: R285; Document code: A; Article ID: 1671-0223(2023)24-1879-08
Author affiliations: Graduate School of Chengde Medical University, Chengde 067000, Hebei Province, China (Tian Xiaoyan, Li Guofeng); Central Laboratory, the First Hospital of Hebei Medical University (Jiang Siyu, Zhang Rui, Xu Shunjiang); Institute of Drug Research, Hebei Provincial Center for Disease Control and Prevention (Li Guofeng). *Corresponding author.

Modern scientific research regards Alzheimer's disease (AD) as an irreversible degenerative neurological disease, clinically characterized mostly by memory impairment, impaired executive function, and personality changes; it is the most important cause of senile dementia.
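The target-intersection step at the heart of a network pharmacology workflow like the one described in the Methods above is easy to express in code. Below is a minimal, hypothetical Python sketch; it assumes the predicted RSV targets and the AD disease targets have already been exported to plain-text files with one gene symbol per line (the file names are invented for illustration).

    # Minimal sketch of the target-intersection step of a network pharmacology
    # workflow. File names and formats are hypothetical (one symbol per line).
    def load_targets(path):
        with open(path, encoding="utf-8") as f:
            return {line.strip().upper() for line in f if line.strip()}

    rsv_targets = load_targets("rsv_predicted_targets.txt")  # e.g. 388 genes
    ad_targets = load_targets("ad_disease_targets.txt")      # e.g. 1624 genes

    potential = rsv_targets & ad_targets                     # e.g. 119 genes
    print(len(potential), "potential therapeutic targets")
    for gene in sorted(potential):
        print(gene)

The resulting gene list is what would then be uploaded to DAVID and STRING for the GO and KEGG/PPI analyses.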
The Information Content of Earnings Components: Evidence from the Chinese Stock Market
GONGMENG CHEN*, MICHAEL FIRTH** and DANIEL NING GAO†
*Antai School of Economics and Management, Shanghai Jiaotong University, Shanghai, China; **Department of Finance and Insurance, Lingnan University, Tuen Mun, New Territories, Hong Kong; †School of Management, Xi'an Jiaotong University, Xi'an, China
(Received: January 2010; accepted: May 2011)
ABSTRACT China's listed firms report substantial non-operating revenues and expenses. We argue that these non-core earnings should have different properties and different valuation implications than operating or core earnings. Furthermore, the different types of firm ownership may have differential impacts on the information content of earnings components. Based on data from 1996 to 2008, we find that core earnings are more persistent than non-core earnings. Because of this, core earnings have a greater association with contemporaneous stock returns. However, the stock market does not fully incorporate all the information in earnings; we find that core earnings are undervalued and non-core earnings are overvalued. This effect is much reduced for privately controlled listed firms. We develop an investment trading strategy to exploit these market inefficiencies.

1. Introduction
The purpose of this paper is to examine the properties of accounting numbers of listed firms in China, an economic superpower that warrants further study and understanding from a financial capital markets perspective. Using a large data-set, we investigate the interplay between accounting earnings and stock prices. In doing so, we take account of China's unique institutional features including the effects of different types of control ownership.
Correspondence Address: Daniel Ning Gao, School of Management, Xi'an Jiaotong University, Xi'an, China. Tel.: +86 755 8394 8067; Email: gtagao@
European Accounting Review, Vol. 20, No. 4, 669-692, 2011. ISSN 0963-8180 print / 1468-4497 online. DOI: 10.1080/09638180.2011.599929. © 2011 European Accounting Association. Published by Routledge Journals, Taylor & Francis Ltd on behalf of the EAA.
We test our arguments using data from 1996 to 2008.
Our study is the first to investigate comprehensively the information content of the different components of earnings in China. In particular, we study the persistence, value relevance, and market pricing of core earnings and non-core earnings. Our study adds to prior research on the information content of Chinese firms' financial statements. Bao and Chow (1999) and Chen et al. (2002) show that accounting information is value relevant in China. Haw et al. (2001) show that reported earnings are more persistent and have stronger predictive ability than do operating cash flows. There is also evidence of earnings management in China (Chen and Yuan, 2004; Haw et al., 2005; Firth et al., 2007). Tang and Firth (2011) provide evidence of both earnings management and tax management in China. However, to date, there are no studies of the accounting properties of the components of earnings and this is something we seek to remedy in this paper.
Like most transitional economies, China had to make decisions on the ownership of state-owned enterprises after the economic reforms began. The Chinese government decided to retain substantial ownership stakes in most of the listed companies it spun off from state-owned enterprises. In addition, the government encouraged private entrepreneurs to develop businesses and list them on the stock market. In some cases, an entrepreneur has taken over a former state-owned enterprise, reorganised it and subsequently listed it. Thus, listed firms have different types of major shareholder and different business objectives. These differences in objectives can lead to differences in accounting quality and the information content of earnings and so we control for the type of ownership of a firm. The study extends our understanding of firms in developing economies in general, and Chinese firms in particular.
Based on earnings announcements from 1996 to 2008, we find that core earnings are more persistent and more value relevant than non-core earnings. This effect is more pronounced for privately owned listed firms and we attribute this to their focus on operational efficiency and profitability. In contrast, state-controlled firms have multiple objectives, which result in less stable operations, greater related party transactions and more variable earnings. For state-controlled firms, the stock market does not fully price the information in the different components of earnings. This is because the stock market is more suspicious of the quality of the accounting information provided by state-controlled firms than the accounting information provided by private-controlled firms. Thus, the reported accounting information is not reflected, or not fully reflected, in market prices. We show that a portfolio that is long on stocks that have large increases in core earnings and low increases in non-core earnings has positive risk-adjusted stock returns. Similarly, we find that a portfolio that is short on stocks that have large increases in non-core earnings and low increases in core earnings has superior risk-adjusted stock returns. Our evidence implies that the stock market under-reacts to core earnings and over-reacts to non-core earnings. This irrational pricing of stocks may be due to investors' lack of understanding of the properties of the distinct components of earnings.
The paper proceeds as follows. In Section 2, we discuss enterprise reforms in China, ownership structure, and the disclosure of earnings and components of earnings. Section 3 describes
prior research on core and non-core earnings. Section 4 introduces the sample and Section 5 describes the models and discusses the results. Section 6 concludes.

2. Accounting in China
Enterprise Reforms
China's rapid transition from a centrally planned economy to one that embraces free market practices is well documented. One of the main planks of the reforms is the corporatisation of enterprises and the subsequent listing of many of them on one of the country's two stock exchanges. By 2010, there were more than 1700 listed firms in China and, in terms of market capitalisation, China is now the second largest stock market in the world behind the USA. While the modern enterprise reforms have had many successes, improvement in operating performances of listed firms is not one of them. Chen et al. (2006) document a reduction in performance and profitability from before listing to after listing for Chinese IPOs. They attribute this poor performance to non-business objectives, ownership issues and poor corporate governance.
One of the impediments to the development of strong and vibrant capital markets is the underdevelopment of a financial infrastructure that provides the mechanisms for efficient investing. The legal infrastructure, financial intermediaries and accounting practices were all sadly lacking in the early stages of the privatisation process and China's authorities are striving to address these problems. High quality financial accounting and extensive financial disclosure are vital ingredients for the detailed financial analysis needed to value firms. Compared to the USA and other developed countries, China lacks a large coterie of financial analysts and independent institutional investors who have the expertise and experience to analyse financial statements. Most stock trading is conducted by small private stockholders who have short-term investment horizons. The short-term horizons of these investors imply they place less weight on firm-specific information such as earnings and the components of earnings.

Ownership
China's listed firms have unique ownership characteristics that impinge on their objectives and economic performance (Chen et al., 2009). Moreover, these differences have the potential to affect the quality and usefulness of accounting reports. Each firm has a dominant stockholder who, on average, owns 43% of the company's stock (Chen et al., 2009). In contrast, the second largest stockholder usually owns less than 10% of the firm's shares. The state is the dominant stockholder in most listed firms with private investors the dominant owner in the remainder. The state's ownership of shares can be separated into those administered by the state asset management bureaus (SAMBs) and those owned by state-owned enterprises (SOEs). SAMBs and SOEs have different objectives and incentives. The state designates SAMBs to administer its shareholdings in large businesses with national operations. The remainder of the state-controlled listed firms has a SOE 'parent' or regional government as the major stockholder.
The officials of SAMBs are civil servants and they require listed firms under their control to fulfil social objectives, such as increasing employment, as well as making profits. The relative importance of the different government objectives is likely to vary over time and this may lead to more variable core and non-core earnings, which makes them less informative. The maximisation of stock price and informative accounting reports may be of second-order importance for these firms.
SOE-controlled listed firms often engage in related party transactions (RPTs), which involve transactions with the parent SOE or other firms with the same owner. These transactions are sometimes made at non-market prices, which either transfer wealth into the listed firm (propping) or transfer resources out of the firm (expropriation or tunnelling). RPTs may involve the purchase and sales of goods and services, which affect core earnings, or the sales of assets, which affect non-core earnings. The presence of RPTs at non-market prices may have an impact on the predictive ability of earnings and the association of earnings with market prices.
The private individual who controls a listed firm often has private businesses and he or she may be tempted to transfer wealth from the partly owned listed firm to the wholly owned private firm. However, in the absence of RPTs, privately controlled listed firms are more likely to want to maximise stockholder wealth than are SAMB- and SOE-controlled firms. On the one hand, this concern with stockholder wealth may encourage a private-controlled firm to have better quality financial reports. On the other hand, the expropriation of minority stockholders via tunnelling activities may lead the privately controlled firm to have less transparent accounts to camouflage the expropriation, which makes them less informative.
If government objectives and propping and tunnelling activities vary over time, this will make the core earnings and non-core earnings more difficult to predict and thus earnings will be less informative. Moreover, firms may use less transparent accounting in a bid to hide or obfuscate the tunnelling activities. We have laid out arguments why firms may have more or less informative accounting and we show how this can arise in each type of owner-controlled firm. A priori, it is not clear which effects dominate. In order to obtain some resolution of these arguments, we turn to empirical observation. In particular, we investigate whether the type of owner (SAMB, SOE and Private) has a material impact on earnings informativeness. Firms in other countries do not have the same ownership types, and the incentives and opportunities for propping and tunnelling are different. China is an ideal setting to examine the impact of state and private ownership on earnings informativeness inside a single country.

Disclosure of Earnings
Under Chinese reporting standards, operating income is a separate line item in the income statement. Investment income and other non-operating income are shown after income from operations. The Appendix shows an income statement under Chinese GAAP. Subsidy income is subsidies from the state or government that reimburse losses that result from government policies. Subsidy income is small for most firms. Non-operating income consists mainly of items that are unusual or nonrecurring, or income that is peripheral to the firm's core business. We categorise non-operating income and subsidy income as non-core earnings (NCE).
In general, the two main categories of earnings, operating income and non-core earnings, are expected to have different implications for firm valuation (Stark, 1997; Pope and Wang, 2005). Income from operations represents recurring events and can be used to help predict future profits. Thus, financial analysts pay particular attention to operating income and unexpected changes in it are the focus of much analysis and discussion. Operating income is seen as a permanent source of income, albeit with changes in its level.
Non-core earnings are principally made up of investment income and other non-core expenses and revenues. Investment income is made up of dividends received and gains and losses from sales of investments. While the dividend income may be somewhat stable, the realised gains and losses from the sale of investments are subject to managerial whim and are less predictable. In China, on average, realised investment gains are greater than investment losses, and so investment income is usually positive. Investment income can be quite large in some Chinese listed firms. One reason for the large investment income is that firms often take a considerable time before investing the proceeds from IPOs and seasoned equity offerings in operating assets, and, in the meantime, the funds are used to trade in stock market securities. Unfortunately, not all firms break down the income from dividends and the gains and losses from sales of investments and so predicting future investment income is difficult. Other non-operating income is typically made up of non-recurring or unusual items and earnings from peripheral business activities. This includes gains and losses from the sale of tangible assets and the sale of operating divisions. As such, they are transient in nature and, in theory, should have less impact on company value.

3. Literature Review
Valuation of Earnings and Market Valuation
One particular strand of accounting research has examined the value relevance of accounting numbers by assessing whether current earnings help predict future earnings (Fairfield et al., 1996) and current and future stock prices. Empirical research from the USA has questioned the efficiency of the stock market in immediately recognising the full valuation implications of accounting data (Melumad and Nissim, 2009). For example, Bernard and Thomas (1990) and Ball and Bartov (1996), among others, conclude that the US stock market takes some time to reflect quarterly earnings information and it is possible to devise an investment trading strategy that profits from this market inefficiency. Other studies have concluded that the stock market does not accurately price the different measures of earnings and sub-components of earnings. For example, Sloan (1996) finds that investors appear to attach similar weights to accruals and cash flow measures of earnings even though accruals are less persistent. Dechow et al. (2008) find that cash flows related to equity have higher persistence and changes in cash balances are mispriced. Fairfield et al. (2003a, 2003b) find that growth has an impact on the persistence and mispricing of accruals and cash flows. There is also some evidence that investors misprice breakdowns of earnings into foreign and domestic income (Thomas, 2000), and subsidiary and parent earnings in Japanese firms (Herrmann et al., 2001).
Market Reaction to Non-core Earnings
Many studies of special items in the USA focus on the material write-off of assets. Elliott and Shaw (1988) report significant negative stock returns at the time of announcements of asset write-offs. Francis et al. (1996) conclude that the contemporaneous market reaction to special items depends on their nature. Their results show negative reactions to inventory write-offs (which is consistent with the write-offs conveying information about declines in economic conditions) and positive reactions to restructuring charges (which is consistent with such items conveying information about improved future prospects). Elliott and Hanna (1996) examine the information content of earnings and the incremental information content of special items after firms report multiple write-offs and find that the earnings response coefficient (ERC) on the special items component of earnings declines as the frequency of special items increases, and becomes insignificant for longer sequences of special items. Burgstahler et al. (2002) assess the extent to which market prices reflect differences in the implications of special items vs. the remaining components of aggregate earnings for subsequent quarterly earnings. They conclude that although prices do not fully reflect the implications of special items for future earnings, they do reflect relatively more of the effects of special items than those of the non-special item components of earnings. They also find that market expectations are complex and that prices reflect different weights given to positive and negative special items. Positive special items are less than completely transitory because they are followed by smaller yet non-zero amounts of earnings of the same directional sign in subsequent quarters. Negative special items have the characteristics of inter-period transfers because earnings of the opposite sign follow them in subsequent quarters. Dechow and Ge (2006) conclude that investors misunderstand the nature of special items and misprice them. Using UK data, Strong and Walker (1993) show that core earnings are valued differently than exceptional items or extraordinary items. In particular, the exceptional and extraordinary earnings are more transitory than core earnings.

Earnings Manipulation by Non-core Items
Managers may desire to manipulate earnings, or change the way earnings are viewed, by the use of non-core revenues and expenses. The discretion given to managers allows them to treat some items as non-core items (special items) if they so want. McVay (2006) finds that US firms opportunistically move core expenses to special items and so core earnings become overstated. She argues that firms do this to meet or beat analysts' forecasts of core earnings. Fan et al.
(2010) conclude that income shifting to special items is more likely when a manager's ability to manipulate accruals is constrained. Kinney and Trezevant (1997) indicate that income-decreasing special items are displayed as separate line items on the income statement to emphasise the transitory nature of the decrease in earnings that is caused by such items. In contrast, the description of income-increasing special items is relegated to financial statement footnotes to de-emphasise the transitory nature of the increase in earnings that is caused by such items. Their results support the hypothesis that special items are used to manage earnings to achieve a steady rate of growth.

Studies of the Chinese Market
Research into the earnings informativeness of listed Chinese firms usually considers non-core items as a method of earnings management. Chen and Yuan (2004) indicate that in China, earnings management is often achieved by transactions that are not related to the core business activities. Haw et al. (1998) show that ST firms rely on non-core items to increase their net earnings. In another paper, Haw et al. (2005) conclude that many firms manipulate net income by timing the occurrence of non-core revenues and losses. In these studies, non-core items include investment income and non-operating income.

4. Sample
Data
Our sample period covers the 13 years from 1996 to 2008. Although the Shanghai Securities Exchange (SHSE) opened in December 1990 and the Shenzhen Stock Exchange (SZSE) opened in June 1991, we do not use observations in 1991-95 because the data are incomplete. We exclude firms in the finance industry sector as they are subject to different regulatory and accounting rules. The total number of observations that have annual earnings announcements and relevant stock return data is 14,582. The yearly distribution of the sample is shown in Table 1, Panel A. The number of listed firms has grown threefold over the sample period. The data are from CSMAR.
Summary statistics are shown in Table 1, Panel B. NI is net income divided by total assets, core earnings (CORE) is operating income divided by total assets, and non-core earnings (NCE) is non-core earnings divided by total assets at the beginning of the year. These variables are winsorised at the top 1% and bottom 1% for each year. Mean (median) net income is 4.40% (4.70%) of total assets. The means (medians) of CORE and NCE are 3.41% (3.80%) and 0.97% (0.68%), respectively. CORE is positive in 93% of cases and NCE is positive in 83% of cases. There are very small differences in the means and medians of CORE and NCE across private- and state-controlled firms. The major stockholders of listed firms are the state (72%; SAMBs = 10% and SOEs = 62%) and a private individual or family (28%). The annual proportion of private-controlled firms (to all listed firms) increases over the period of our study.
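For readers who want to replicate this kind of sample preparation, the by-year 1% winsorisation described above is short to express with pandas. This is a sketch under assumed column names ("year", "NI", "CORE", "NCE"), not the authors' actual code.

    import pandas as pd

    # By-year 1% winsorisation sketch; 'panel' and its columns are hypothetical.
    def winsorise_by_year(df, cols, lower=0.01, upper=0.99):
        out = df.copy()
        for col in cols:
            lo = df.groupby("year")[col].transform(lambda s: s.quantile(lower))
            hi = df.groupby("year")[col].transform(lambda s: s.quantile(upper))
            out[col] = df[col].clip(lo, hi)  # clip to within-year percentiles
        return out

    # Hypothetical usage:
    # panel = winsorise_by_year(panel, ["NI", "CORE", "NCE"])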
State-controlled firms are larger than private-controlled firms.

Method
We conduct a number of tests to examine the persistence, informativeness and market pricing of the separate components of earnings. First, we examine the persistence of earnings by examining how well they predict future earnings. Second, we examine the association between earnings components and contemporaneous stock returns. This tells us something about the way earnings are capitalised into prices. Third, we examine market efficiency by investigating whether current earnings components help explain future stock returns. We examine market efficiency in two ways: using a regression model approach and an investment strategy approach. We condition earnings persistence and the earnings to market value associations on a firm's dominant owner type (SAMB, SOE and Private).

Table 1. Sample distribution and summary statistics of variables
Panel A. Sample distribution (year and number of listed firms at the year-end)
1996: 501; 1997: 710; 1998: 817; 1999: 918; 2000: 1049; 2001: 1118; 2002: 1186; 2003: 1240; 2004: 1327; 2005: 1336; 2006: 1403; 2007: 1443; 2008: 1534; Total: 14,582
Panel B. Summary statistics of earnings and stock returns
Variable          | Mean: Overall (n = 14,582) | Mean: Private (n = 4006) | Mean: State Owned (n = 10,576) | Median: Overall | Median: Private | Median: State Owned
NI (%)            | 4.40 | 4.47 | 4.32 | 4.70 | 4.72 | 4.69
CORE (%)          | 3.41 | 3.43 | 3.40 | 3.80 | 3.81 | 3.80
NCE (%)           | 0.97 | 1.02 | 0.95 | 0.68 | 0.67 | 0.68
R_t (%)           | 0.09 | 0.10 | 0.09 | 0.04 | 0.05 | 0.04
B/M               | 0.28 | 0.27 | 0.29 | 0.28 | 0.28 | 0.28
SIZE (million RMB)| 1206 | 842  | 1349 | 751  | 490  | 852
LEV               | 0.23 | 0.20 | 0.24 | 0.21 | 0.19 | 0.22
NI is net income divided by beginning total assets; CORE is core earnings divided by beginning total assets; NCE is non-core earnings divided by beginning total assets; R_t is the market-adjusted one-year stock return (1 May, year t to 30 April, year t+1). B/M is the book-to-market ratio; SIZE is the market capitalisation of the firm; LEV is debt divided by total assets.

5. Models and Results
Earnings Persistence
To test the persistence of earnings components, we regress core earnings (CORE) and non-core earnings (NCE) in year t on the following year's earnings (E_t+1). To capture the effect of ownership, we interact CORE and NCE with SOE and Private. We also include the main effect of SOE and Private in the regression. Suppressing firm subscripts (i), the model is

E_t+1 = a0 + a1 CORE_t + a2 NCE_t + a3 CORE_t*SOE + a4 CORE_t*Private + a5 NCE_t*SOE + a6 NCE_t*Private + a7 SOE + a8 Private + e    (1)

where E_t+1 is earnings in year t+1 scaled by total assets in year t, CORE_t is core earnings (i.e. operating income) in year t scaled by total assets in year t-1 and NCE_t is non-core earnings in year t scaled by total assets in year t-1. SOE is a dummy variable coded one (1) if the major stockholder in a firm is an SOE. Private is a dummy variable coded one (1) if the major stockholder is a private individual or family. We include year and industry fixed effects. We expect core earnings to have greater persistence than non-core earnings and therefore a1 will be greater than a2. Based on prior US research, we expect that both a1 and a2 will be less than one, which will indicate reversion toward the mean. The coefficients on the interaction terms will indicate whether ownership type has a differential impact on the ability of CORE and NCE to predict E_t+1.
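As a rough illustration of how a specification like Equation (1) could be estimated with firm-clustered standard errors and year/industry fixed effects, the statsmodels sketch below uses a hypothetical firm-year DataFrame; the column names are invented and this is not the authors' estimation code.

    import statsmodels.formula.api as smf

    # Hypothetical firm-year panel: next-year earnings E_next, current CORE and
    # NCE, ownership dummies, and year/industry identifiers (names invented).
    model = smf.ols(
        "E_next ~ CORE + NCE"
        " + CORE:SOE + CORE:Private + NCE:SOE + NCE:Private"
        " + SOE + Private + C(year) + C(industry)",
        data=panel,
    )
    # Robust standard errors clustered at the firm level (Petersen, 2009).
    result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})
    print(result.summary())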
Table 2 shows the results of the earnings persistence effects (Equation (1)). Reported t-statistics in Panels A and C use robust standard errors corrected for clustering at the firm level (Petersen, 2009; Gow et al., 2010). Panel A shows the pooled results using data from 1996 to 2008. The estimate of a1 is 0.73, which indicates that core earnings are slowly mean reverting. The coefficient on NCE (a2) is 0.25 and has a faster reversion to the mean. The coefficient a1 is significantly higher than the coefficient a2 and this implies core earnings are more persistent than non-core earnings. Thus, investors can use core earnings to extrapolate future earnings. In contrast, non-core earnings are more likely to be transient and have a lower ability to predict future earnings. These findings are important as investors often use past earnings as an expectation of future profits because there are relatively few independent forecasts of future earnings. CORE*Private and NCE*Private have positive and significant coefficients in Panel A. Thus, the core earnings and non-core earnings are more persistent in privately controlled firms than in SAMB-controlled firms. The interaction terms for SOE are not significant. Our results suggest that the type of dominant owner (private vs. state) has an important influence on whether a firm's earnings are predictable.
We also run annual regressions of Equation (1) to account for time series variation. The averages of the coefficients across the 13 yearly regressions and the associated t-statistics (Fama-MacBeth regression) are shown in Panel B. The results are similar to those in Panel A. The average a1 across the 13 regressions is 0.72 and the average a2 is 0.17. Of note is that CORE and NCE are mean reverting and CORE is more persistent than NCE (a1 > a2). The interaction terms of Private with CORE and Private with NCE are positive and marginally significant at the 10% level. The other interaction terms are not significant.
Following Sloan (1996), we also estimate Equation (1) using the decile rankings of the variables rather than the actual values. This alleviates problems associated with extreme values that are not representative of the population or are measured with error. The decile ranks for each variable (ranked 1, 2, ..., 10) are calculated for each of the 13 years. Thus, firms with the lowest 10% of values for CORE in a specific year get a score of one (1) for CORE. Firms with the next lowest values for CORE (10-20%) get a score of two (2), and so on. A similar procedure is used for NCE and E_t+1. The regression results are shown in Panel C. The results are consistent with those shown in Panels A and B. The coefficients a1 and a2 are both less than one indicating mean reversion. Furthermore, the coefficient on CORE (a1 = 0.60) is significantly larger than the coefficient on NCE (a2 = 0.17) showing that core earnings are more persistent than non-core earnings. The positive and significant coefficients on CORE*Private and NCE*Private indicate that the earnings of private-controlled listed firms have more persistence than the earnings of firms controlled by the state (SAMB and SOE).

The Association between Earnings Components and Contemporaneous Stock Returns
To examine the informativeness of core earnings and special items for contemporaneous stock prices we use the following regression model (we omit firm (i) and year (t) subscripts):

R = b0 + b1 CORE + b2 ΔCORE + b3 NCE + b4 ΔNCE + b5 CORE*SOE + b6 CORE*Private + b7 ΔCORE*SOE + b8 ΔCORE*Private + b9 NCE*SOE + b10 NCE*Private + b11 ΔNCE*SOE + b12 ΔNCE*Private + b13 SOE + b14 Private + b15 BETA + b16 B/M + b17 SIZE + b18 LEV + e    (2)
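The Fama-MacBeth procedure behind Panel B above is also simple to sketch: estimate the Equation (1) cross-section separately for each year, then average the coefficients and form t-statistics from the time-series variation of the yearly estimates. The snippet below reuses the hypothetical panel from the previous sketch and is an illustration of the method, not the authors' code.

    import numpy as np
    import statsmodels.formula.api as smf

    formula = ("E_next ~ CORE + NCE + CORE:SOE + CORE:Private"
               " + NCE:SOE + NCE:Private + SOE + Private")

    # One cross-sectional regression per year (assumes every dummy level
    # appears in every year so the coefficient vectors align).
    yearly = [smf.ols(formula, data=g).fit().params
              for _, g in panel.groupby("year")]
    coefs = np.vstack([p.values for p in yearly])

    means = coefs.mean(axis=0)
    # Fama-MacBeth t-statistics from the time series of yearly estimates.
    tstats = means / (coefs.std(axis=0, ddof=1) / np.sqrt(len(yearly)))
    for name, m, t in zip(yearly[0].index, means, tstats):
        print(name, round(m, 3), "t =", round(t, 2))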
2024-2025 School Year, Hubei Province, Southeast Hubei (E'dongnan) Senior Three First-Semester Midterm Examination: English

Here are some dictionaries to share with you for English learning.
Verbal Advantage
It is the most comprehensive, accessible, and effective vocabulary-building program available today. What exactly is a "verbal advantage"? In short, a "verbal advantage" is the ability to use words in a precise and powerful manner, to communicate clearly, correctly, and effectively in every situation. In this book, I intend to turn your ability with words into mastery.
Dictionary of Common Errors
It provides learners and teachers of English with a practical guide to common errors and their correction. Arranged alphabetically (按字母顺序) for ease of use, the entries deal with those errors that regularly appear in the written English of learners at the intermediate level of proficiency and above. Each error is accompanied by a correction and a short, simple explanation.
Merriam-Webster's Vocabulary Builder
It is designed to achieve 2 goals: to add a large number of words to your permanent working vocabulary, and to teach the most useful word-building roots to help you continue expanding your vocabulary in the future. To achieve these goals, it employs an approach that takes into account how people learn and remember.
Word Power Made Easy
It is the complete handbook for building a superior vocabulary, which enables you to speak and write with confidence, read more effectively and efficiently, learn quickly, develop social contacts, and increase your earning power. Pay special attention to the Chapter Review! Are the words still fresh in your mind? Do you remember the meaning of each word studied in the previous sessions? In these Reviews, you are not only testing your learning but also tightening up any areas in which you discover gaps, weaknesses, or forgetfulness.
1. What does the "Dictionary of Common Errors" offer to its users?
A. A collection of idiomatic expressions.
B. An alphabetical list of advanced vocabulary.
C. A comprehensive history of the English language.
D. A handy guide to frequently made errors and their corrections.
2. According to the passage, in which dictionary can we learn English roots?
A. Verbal Advantage. B. Word Power Made Easy.
C. Merriam-Webster's Vocabulary Builder. D. Dictionary of Common Errors.
3. What can we learn from this passage?
A. Going over the learned words is recommended for English learning.
B. Memorizing words alphabetically is the best way to build vocabulary.
C. English learners at intermediate level seldom make mistakes in writing.
D. All the 4 dictionaries intend to promote learners' grammatical competence.

High levels of lead detected in Ludwig van Beethoven's hair, which has been confirmed as belonging to him, suggest that the composer had lead poisoning, which may have contributed to illnesses he endured over the course of his life, including deafness, according to new research.
In addition to hearing loss, the famed classical composer had repeated stomach issues throughout his life and experienced two attacks of severe liver disease. It is believed that Beethoven died from liver and kidney disease at age 56. But the process of understanding what caused his many health problems has been a much more complicated puzzle, one that even Beethoven himself hoped doctors could eventually solve.
An international team of researchers set out nearly a decade ago to partially fulfill Beethoven's wish by studying locks of his hair. Using DNA analysis, the team determined which ones truly belonged to the composer and which did not, and sequenced Beethoven's genome (基因组).
The findings, published in a March 2023 report, revealed that Beethoven had significant genetic risk factors for liver disease. But the results didn't provide any insights into the underlying causes of his deafness, which began in his 20s, or his stomach issues.
Beethoven's genome was made publicly available, inviting researchers around the world to investigate remaining questions about Beethoven's health. In addition to high concentrations of lead, the latest findings showed arsenic (砷) and mercury (汞) that remain trapped in the composer's strands of hair nearly 200 years after his death, according to a new letter published Monday in the journal Clinical Chemistry. The surprising insights could provide new windows into Beethoven's persistent health problems.
4. What might have caused Beethoven's long-term health problems?
A. Loss of hearing. B. High levels of lead in his body.
C. Constant complaints. D. Lack of doctor's treatment.
5. The underlined word "wish" in paragraph 3 may refer to ________.
A. Examining his hair. B. Curing him of the disease.
C. Identifying the cause of his illness. D. Conducting DNA analysis.
6. What did the report in 2023 find out?
A. The potential cause of his deafness.
B. The sequence of his genetic material.
C. The hair that truly belonged to Beethoven.
D. Beethoven's carrying a great genetic risk of liver disease.
7. Why does the author mention the latest findings in the last paragraph?
A. To confirm the earlier result.
B. To contradict the previous findings.
C. To draw a conclusion about the contributing factors.
D. To provide a better understanding of the cause of his illness.

One morning in June 1986, I placed an empty snail shell into a tide pool on Long Island. A hermit crab (寄居蟹) came by, inspected the shell, and quickly exchanged it for its old one. Soon another crab found the abandoned shell, did the same, and moved on. About 10 minutes later a third crab found the second's old home and claimed its prize, leaving behind its damaged one.
It may seem strange, but these small creatures are making use of what sociologists call a "vacancy chain (空缺链)": an organized method of exchanging resources in which every individual benefits by claiming a more desirable possession abandoned by another individual. Recent studies have revealed two types of vacancy chains in hermit crabs: synchronous and asynchronous. In the asynchronous type (like what I observed), usually one crab at a time comes across a vacant shell without other crabs nearby. But in synchronous chains, they line up by size behind the one examining a vacant shell. Once it moves into the new shell, the others quickly follow, each taking the better-suited shelter in line.
Though research on vacancy chains in animals beyond hermit crabs is limited, early evidence suggests that the strategy has evolved widely. Humans follow the same pattern. Studies in 1960s Manhattan showed how new apartments triggered a chain reaction, allowing many families to upgrade their housing. Car dealers in the early 20th century adopted a similar system, trading in old cars to facilitate new sales. Vacancy chains highlight that resource distribution is not just about competition but also about the efficient transfer of resources, shedding light on issues like housing shortages and even crime.
Not long ago, I returned to the beach where my observations began. Watching the hermit crabs crawl through the tide pool, I felt grateful and delighted, realizing that some patterns of our social life are so fundamental that we even share them with rather primitive creatures.
8.
Where was the second crab's original shell according to paragraph 1?
A. It was taken by the first crab. B. It was exchanged with a snail.
C. It was occupied by the third crab. D. It was left behind in the tide pool.
9. Which of the following is correct about the two kinds of chains?
A. Asynchronous chains occur only in animals.
B. Asynchronous chains involve fighting over resources.
C. Synchronous chains involve crabs queuing up by age.
D. Synchronous chains occur when crabs gather in the same place.
10. Which of the following can set off a "vacancy chain"?
A. Winning a bet. B. Storing canned food.
C. Selling old vehicles for new ones. D. Buying disposable plastic bottles.
11. What does the passage imply about the significance of studying vacancy chains?
A. It may suggest new ways to care for crabs.
B. It may reveal how competition is stimulated.
C. It may highlight the importance of saving resources.
D. It may give insights into human resource distribution.

A bestseller by Giulia Enders explores the fascinating world of the human digestive system and its profound impact on overall health. One of the key takeaways is the idea that the gut (肠) is not just a digestion machine, but a complex and intelligent organ that influences our immune system, brain function, and emotional well-being.
The book explores the gut-brain connection, explaining how the gut communicates with the brain and can influence mood and behavior, highlighting the link between gut health and mental conditions like anxiety and depression. Enders also explains how the gut's nervous system functions independently of the brain and why it's often called the "second brain."
Another key point is the impact of diet on gut health. Enders advises incorporating fiber-rich foods, fermented products (like yogurt), and probiotics (good bacteria) into our diet to nourish beneficial gut bacteria. She also warns against the overuse of antibiotics (抗生素), which can upset the balance of gut bacteria and lead to digestive disorders.
The book also provides insight into common digestive problems and breaks down how these issues can be managed or prevented by making simple lifestyle changes, like eating slowly and managing stress. Enders explains the digestive process in a simple and engaging way, highlighting the importance of a healthy gut, and offers practical advice on supporting its function, such as avoiding overly processed foods and eating mindfully.
Enders also touches on the significance of the immune system in the gut, where a large portion of immune cells reside. A healthy gut microbiome (肠道微生物组) can strengthen the immune response, while an imbalanced gut may lead to an increased risk of infections and autoimmune diseases.
Enders successfully makes the science of the gut accessible and relatable, showing that by understanding how this often-overlooked organ works, we can make informed choices that significantly improve our health and happiness.
12. What does the underlined word "takeaways" in paragraph 1 mean?
A. Differences. B. Conclusions. C. Causes. D. Goals.
13. What might the author agree with?
A. An unhealthy gut will definitely lead to immune disorders.
B. The gut's nervous system interrelates with the brain in its function.
C. Eating slowly or managing stress helps avoid some digestive problems.
D. The more fiber we include in our diet, the healthier our gut will become.
14.
Which one can best serve as the title of the book by Giulia Enders?
A. Gut Health Is Above Wealth.
B. Eat Your Way to Good Health.
C. Immune System: A Deciding Factor in Overall Health.
D. Gut: The Inside Story of Our Body's Most Underrated Organ.
15. What is the text?
A. A book review. B. A research paper.
C. An advertisement for a book. D. A chapter of a book.

When you get up in the morning, what is the first thing that you tend to do? By chance, is it to check notifications on a cell phone? 16 That usually means using the newest technology in nearly every aspect of life.
17 One sensible approach is to set goals, use time limits, and avoid letting any single medium or electronic platform take up the lion's share of your time.
One major impact of advanced technology has been the change in the way people read. After centuries of reading the printed page, people now do much of their reading on various kinds of screens for the reason that the shift is convenient and cost-effective. 18 One concern is that light from computer and phone screens can eventually cause teary eyes. In addition, recent research has found advantages to reading on paper. Professor Virginia Clinton of the University of North Dakota examined results from 33 studies on reading. This research indicates that students demonstrated better comprehension when reading on paper rather than on screens. 19 It simply suggests that reading on paper has not yet lost its value.
How often do people find themselves watching videos or using websites only to find that hours have passed with their original goal unfulfilled? That is because designers use artificial intelligence to create video feeds and music streaming programs that absorb your attention for as long as possible. Therefore, it is more important than ever to be able to take a step back. 20 In doing so, it can give them back their time and sense of agency.

In many ways Mack and Cameron were typical high school friends. They enjoyed playing video games and watched movies together. Both boys loved ________ and did well in school. But Mack and Cameron's friendship was ________, or rather, extraordinary.
Cameron had been born with cerebral palsy (脑瘫), a ________ that limits a person's ability to move. He used a wheelchair to get around. He communicated through a sophisticated computer system that ________ to his eye movements. Cameron loved sports and hoped to be a ________ someday. Mack enjoyed sports, too. He was also an excellent student and ________ as senior class president. The boys had met years before when Mack was the new kid in the neighborhood. By first grade the two had become ________, and by high school they were still best friends. "We laugh at the same things," Mack once said, "but we're also different. Cam's into following sports, while I'm more ________ and into good academic grades. He's fun to be around, so we find things we can do together."
Although Mack and Cameron had been ________ most of their lives, no one had ever expected them to run in high school ________ events together. Beginning in their junior year, the boys ________, and Mack used a special wheelchair to push Cameron in every race. Mack understood that this meant he couldn't earn points at the meets. But as he later explained, he didn't really ________ that. As the seasons ________, the boys had some memorable experiences together on the track and managed to beat some other kids in races.
"We're not like the best, but we're not bad," Mack said ________.
Today both boys have finished high school and moved on to college. They're still friends, and they still run together. In fact, they recently completed a half marathon, ________ in less than an hour and a half, which is faster than seven minutes a mile!
21. A. sports B. math C. art D. craft
22. A. fun B. different C. marvelous D. common
23. A. condition B. effect C. circumstance D. medium
24. A. applied B. contributed C. objected D. responded
25. A. driver B. doctor C. coach D. programmer
26. A. regarded B. treated C. served D. defined
27. A. focused B. inseparable C. hopeful D. sympathetic
28. A. bookish B. optimistic C. dynamic D. consistent
29. A. brothers B. relatives C. friends D. roommates
30. A. history B. track C. singing D. reciting
31. A. pulled through B. gave up C. settled down D. teamed up
32. A. care about B. think about C. set down D. show off
33. A. settled B. transferred C. ended D. progressed
34. A. surprised B. disappointedly C. proudly D. sadly
35. A. relaxing B. finishing C. cooperating D. navigating

Read the following passage and fill in each blank with one appropriate word or the correct form of the word given in brackets.
BMJ | online. research methods & reporting

introduction
Systematic reviews and meta-analyses are essential tools for summarising evidence accurately and reliably. They help clinicians keep up to date; provide evidence for policy makers to judge risks, benefits, and harms of healthcare behaviours and interventions; gather together and summarise related research for patients and their carers; provide a starting point for clinical practice guideline developers; provide summaries of previous research for funders wishing to support new research;1 and help editors judge the merits of publishing reports of new studies.2 Recent data suggest that at least 2500 new systematic reviews reported in English are indexed in Medline annually.3
Unfortunately, there is considerable evidence that key information is often poorly reported in systematic reviews, thus diminishing their potential usefulness.3-6 As is true for all research, systematic reviews should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the investigation.7 That rationale led to the development of the QUOROM (quality of reporting of meta-analysis) statement; those detailed reporting recommendations were published in 1999.8 In this paper we describe the updating of that guidance. Our aim is to ensure clear presentation of what was planned, done, and found in a systematic review.
Terminology used to describe systematic reviews and meta-analyses has evolved over time and varies across different groups of researchers and authors (see box 1 at end of document). In this document we adopt the definitions used by the Cochrane Collaboration.9 A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods that are selected to minimise bias, thus providing reliable findings from which conclusions can be drawn and decisions made. Meta-analysis is the use of statistical methods to summarise and combine the results of independent studies. Many systematic reviews contain meta-analyses, but not all.

the QUOROM statement and its evolution into PRISMA
The QUOROM statement, developed in 1996 and published in 1999,8 was conceived as a reporting guidance for authors reporting a meta-analysis of randomised trials. Since then, much has happened. First, knowledge about the conduct and reporting of systematic reviews has expanded considerably. For example, the Cochrane Library's Methodology Register (which includes reports of studies relevant to the methods for systematic reviews) now contains more than 11 000 entries (March 2009). Second, there have been many conceptual advances, such as "outcome-level" assessments of the risk of bias,10 11 that apply to systematic reviews. Third, authors have increasingly used systematic reviews to summarise evidence other than that provided by randomised trials.
However, despite advances, the quality of the conduct and reporting of systematic reviews remains well short of ideal.3-6 All of these issues prompted the need for an update and expansion of the QUOROM statement.
Of note, recognising that the updated statement now addresses the above conceptual and methodo-logical issues and may also have broader applicability than the original QUOROM statement, we changed the name of the reporting guidance to PRISMA (pre-ferred reporting items for systematic reviews and meta-analyses).development of prismaThe PRISMA statement was developed by a group of 29 review authors, methodologists, clinicians, medi-cal editors, and consumers.12 They attended a three day meeting in 2005 and participated in extensive post-meeting electronic correspondence. A consensus process that was informed by evidence, whenever pos-sible, was used to develop a 27-item checklist (table 1) and a four-phase flow diagram (fig 1) (also available as extra items on for researchers to down-load and re-use). Items deemed essential for transpar-ent reporting of a systematic review were included in the checklist. The flow diagram originally proposed by QUOROM was also modified to show numbers of identified records, excluded articles, and included stud-ies. After 11 revisions the group approved the checklist, flow diagram, and this explanatory paper.The PRISMA statement itself provides further details regarding its background and development.12ThisEmilia, Modena, Italy 2Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy 3Centre for Statistics in Medicine, University of Oxford, Oxford Hospital Research Institute, Ottawa, Ontario, Canada 5Annals of Internal Medicine, Philadelphia, Pennsylvania, USA 6Nordic Cochrane Centre, Copenhagen, Denmark Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece 8UK Cochrane Centre, Oxford 9School of Nursing and Midwifery, Trinity College, Dublin, Republic of Ireland 10Departments of Medicine, Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada Kleijnen Systematic Reviews, York Primary Care (CAPHRI), University of Maastricht, Maastricht, Netherlands 13Department of Epidemiology and Community Medicine, Faculty of Medicine, Ottawa, Ontario, Canada Correspondence to: alesslib@mailbase.itAccepted: 5 June 2009Cite this as: BMJ 2009;339:b2700doi: 10.1136/bmj.b2700The PRiSMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaborationAlessandro Liberati,1 2 Douglas G Altman,3 Jennifer Tetzlaff,4 Cynthia Mulrow,5 Peter C Gøtzsche,6 John P A Ioannidis,7 Mike Clarke,8 9 P J Devereaux,10 Jos Kleijnen,11 12 David Moher 4 13research methods & reportingaccompanying explanation and elaboration document explains the meaning and rationale for each checklist item. A few PRISMA Group participants volunteered to help draft specific items for this document, and four of these (DGA, AL, DM, and JT) met on several occasions to further refine the document, which was circulated and ultimately approved by the larger PRISMA Group. scope of prismaPRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses. It does not address directly or in a detailed manner the conduct of systematic reviews, for which other guides are available.13-16W e developed the PRISMA statement and this explan-atory document to help authors report a wide array of systematic reviews to assess the benefits and harms of a healthcare intervention. 
W e consider most of the checklist items relevant when reporting systematic reviews of non-randomised studies assessing the benefits and harms of interventions. However, we recognise that authors who address questions relating to aetiology, diagnosis, or prog-nosis, for example, and who review epidemiological or diagnostic accuracy studies may need to modify or incor-porate additional items for their systematic reviews. how to use this paperW e modeled this explanation and elaboration document after those prepared for other reporting guidelines.17-19 T o maximise the benefit of this document, we encour-age people to read it in conjunction with the PRISMA statement.11W e present each checklist item and follow it with a published exemplar of good reporting for that item. (W e edited some examples by removing citations or web addresses, or by spelling out abbreviations.) W e then explain the pertinent issue, the rationale for including the item, and relevant evidence from the literature, when-ever possible. No systematic search was carried out to identify exemplars and evidence. W e also include seven boxes at the end of the document that provide a more comprehensive explanation of certain thematic aspects of the methodology and conduct of systematic reviews. Although we focus on a minimal list of items to con-sider when reporting a systematic review, we indicate places where additional information is desirable to improve transparency of the review process. W e present the items numerically from 1 to 27; however, authors need not address items in this particular order in their reports. Rather, what is important is that the information for each item is given somewhere within the report.the prisma checklistTitle and abstractItem 1: TitleIdentify the report as a systematic review, meta-analysis, or both.Examples “Recurrence rates of video-assisted tho-racoscopic versus open surgery in the prevention of recurrent pneumothoraces: a systematic review of ran-domised and non-randomised trials”20“Mortality in randomised trials of antioxidant supple-ments for primary and secondary prevention: system-atic review and meta-analysis”21Explanation Authors should identify their report as a systematic review or meta-analysis. T erms such as “review” or “overview” do not describe for readers whether the review was systematic or whether a meta-analysis was performed. A recent survey found that 50% of 300 authors did not mention the terms “systematic review” or “meta-analysis” in the title or abstract of their systematic review.3 Although sensitive search strategies have been developed to identify systematic reviews,22 inclusion of the terms systematic review or meta-analysis in the title may improve indexing and identification.W e advise authors to use informative titles that make key information easily accessible to readers. Ideally, a title reflecting the PICOS approach (participants, inter-ventions, comparators, outcomes, and study design) (see item 11 and box 2) may help readers as it provides key information about the scope of the review. Specify-ing the design(s) of the studies included, as shown in the examples, may also help some readers and those searching databases.Some journals recommend “indicative titles” that indicate the topic matter of the review, while others require declarative titles that give the review’s main conclusion. Busy practitioners may prefer to see the conclusion of the review in the title, but declarative titles can oversimplify or exaggerate findings. 
Thus, many journals and methodologists prefer indicative titles as used in the examples above.Item 2: Structured summaryProvide a structured summary including, as applicable, background; objectives; data sources; study eligibility cri-teria, participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications of key findings; funding for the systematic review; and systematic review registration number. Example “Context: The role and dose of oral vitamin D supplementation in nonvertebral fracture prevention have not been well established.Objective: T o estimate the effectiveness of vitamin D supplementation in preventing hip and nonvertebral fractures in older persons.Data Sources: A systematic review of English and non-English articles using MEDLINE and the Cochrane Controlled T rials Register (1960-2005), and EMBASE (1991-2005). Additional studies were identified by con-tacting clinical experts and searching bibliographies and abstracts presented at the American Society for Bone and Mineral Research (1995-2004). Search terms included randomised controlled trial (RCT), controlled clinical trial, random allocation, double-blind method, cholecalciferol, ergocalciferol, 25-hydroxyvitamin D, fractures, humans, elderly, falls, and bone density. Study Selection: Only double-blind RCT s of oral vita-min D supplementation (cholecalciferol, ergocalciferol) with or without calcium supplementation vs calcium supplementation or placebo in older persons (>60 years) that examined hip or nonvertebral fractures were included.BMJ| onlineBMJ | onlineresearch methods & reportingobjective of the review. Under a Data sources heading, they summarise sources that were searched, any lan-guage or publication type restrictions, and the start and end dates of searches. Study selection statements then ideally describe who selected studies using what inclusion criteria. Data extraction methods statements describe appraisal methods during data abstraction and the methods used to integrate or summarise the data. The Data synthesis section is where the main results of the review are reported. If the review includes meta-analyses, authors should provide numerical results with confidence intervals for the most important outcomes. Ideally, they should specify the amount of evidence in these analyses (numbers of studies and numbers of par-ticipants). Under a Limitations heading, authors might describe the most important weaknesses of included studies as well as limitations of the review process. Then authors should provide clear and balanced Con-clusions that are closely linked to the objective and find-ings of the review. Additionally, it would be helpful if authors included some information about funding for the review. Finally, although protocol registration for systematic reviews is still not common practice, if authors have registered their review or received a regis-tration number, we recommend providing the registra-tion information at the end of the abstract.T aking all the above considerations into account, the intrinsic tension between the goal of completeness of the abstract and its keeping into the space limit often set by journal editors is recognised as a major challenge.introduction Item 3: RationaleDescribe the rationale for the review in the context of what is already known.Example “Reversing the trend of increasing weight for height in children has proven difficult. 
It is widely accepted that increasing energy expenditure and reduc-ing energy intake form the theoretical basis for man-agement. Therefore, interventions aiming to increase physical activity and improve diet are the foundation of efforts to prevent and treat childhood obesity. Such lifestyle interventions have been supported by recent systematic reviews, as well as by the Canadian Paediat-ric Society, the Royal College of Paediatrics and Child Health, and the American Academy of Pediatrics. How-ever, these interventions are fraught with poor adher-ence. Thus, school-based interventions are theoretically appealing because adherence with interventions can be improved. Consequently, many local governments have enacted or are considering policies that mandate increased physical activity in schools, although the effect of such interventions on body composition has not been assessed.”33Explanation Readers need to understand the rationale behind the study and what the systematic review may add to what is already known. Authors should tell readers whether their report is a new sys-tematic review or an update of an existing one. If the review is an update, authors should state reasons for the update, including what has been added to the evidenceData Extraction : Independent extraction of articles by 2 authors using predefined data fields, including study quality indicators.Data Synthesis : All pooled analyses were based on random-effects models. Five RCTs for hip fracture (n=9294) and 7 RCT s for nonvertebral fracture risk (n=9820) met our inclusion criteria. All trials used cholecalciferol. Heterogeneity among studies for both hip and nonvertebral fracture prevention was observed, which disappeared after pooling RCT s with low-dose (400 IU/d) and higher-dose vitamin D (700-800 IU/d), separately. A vitamin D dose of 700 to 800 IU/d reduced the relative risk (RR) of hip fracture by 26% (3 RCT s with 5572 persons; pooled RR, 0.74; 95% con-fidence interval [CI], 0.61-0.88) and any nonvertebral fracture by 23% (5 RCT s with 6098 persons; pooled RR, 0.77; 95% CI, 0.68-0.87) vs calcium or placebo. No significant benefit was observed for RCT s with 400 IU/d vitamin D (2 RCT s with 3722 persons; pooled RR for hip fracture, 1.15; 95% CI, 0.88-1.50; and pooled RR for any nonvertebral fracture, 1.03; 95% CI, 0.86-1.24).Conclusions : Oral vitamin D supplementation between 700 to 800 IU/d appears to reduce the risk of hip and any nonvertebral fractures in ambulatory or institution-alised elderly persons. An oral vitamin D dose of 400 IU/d is not sufficient for fracture prevention.”23Explanation Abstracts provide key information that enables readers to understand the scope, processes, and findings of a review and to decide whether to read the full report. The abstract may be all that is readily available to a reader, for example, in a bibliographic database. The abstract should present a balanced and realistic assessment of the review’s findings that mirrors, albeit briefly, the main text of the report.W e agree with others that the quality of reporting in abstracts presented at conferences and in journal publications needs improvement.24 25 While we do not uniformly favour a specific format over another, we generally recommend structured abstracts. 
Structured abstracts provide readers with a series of headings per-taining to the purpose, conduct, findings, and conclu-sions of the systematic review being reported.26 27 They give readers more complete information and facilitate finding information more easily than unstructured abstracts.28-32A highly structured abstract of a systematic review could include the following headings: Context (or Back-ground ); Objective (or Purpose ); Data sources ; Study selection (or Eligibility criteria ); Study appraisal and Synthesis meth-ods (or Data extraction and Data synthesis ); Results ; Limita-tions ; and Conclusions (or Implications ). Alternatively, a simpler structure could cover but collapse some of the above headings (such as label Study selection and Study appraisal as Review methods ) or omit some headings such as Background and Limitations .In the highly structured abstract mentioned above, authors use the Background heading to set the context for readers and explain the importance of the review question. Under the Objectives heading, they ideally use elements of PICOS (see box 2) to state the primaryBMJ | onlineresearch methods & reportingpre-specifies the objectives and methods of the sys-tematic review. For instance, a protocol specifies out-comes of primary interest, how reviewers will extract information about those outcomes, and methods that reviewers might use to quantitatively summarise the outcome data (see item 13). Having a protocol can help restrict the likelihood of biased post hoc decisions in review methods, such as selective outcome reporting. Several sources provide guidance about elements to include in the protocol for a systematic review.16 38 39 For meta-analyses of individual patient-level data, we advise authors to describe whether a protocol was explicitly designed and whether, when, and how participating collaborators endorsed it.40 41Authors may modify protocols during the research, and readers should not automatically consider such modifications inappropriate. For example, legitimate modifications may extend the period of searches to include older or newer studies, broaden eligibility cri-teria that proved too narrow, or add analyses if the primary analyses suggest that additional ones are war-ranted. Authors should, however, describe the modifi-cations and explain their rationale.Although worthwhile protocol amendments are common, one must consider the effects that protocol modifications may have on the results of a systematic review, especially if the primary outcome is changed. Bias from selective outcome reporting in randomised trials has been well documented.42 43 An examination of 47 Cochrane reviews revealed indirect evidence for possible selective reporting bias for systematic reviews. Almost all (n=43) contained a major change, such as the addition or deletion of outcomes, between the pro-tocol and the full publication.44 Whether (or to what extent) the changes reflected bias, however, was not clear. For example, it has been rather common not to describe outcomes that were not presented in any of the included studies.Registration of a systematic review, typically with a protocol and registration number, is not yet common, but some opportunities exist.45 46 Registration may pos-sibly reduce the risk of multiple reviews addressing the same question,45-48 reduce publication bias, and provide greater transparency when updating systematic reviews. 
Of note, a survey of systematic reviews indexed in Medline in November 2004 found that reports of pro-tocol use had increased to about 46%3 from 8% noted in previous surveys.49 The improvement was due mostly to Cochrane reviews, which, by requirement, have a published protocol.3Item 6: Eligibility criteriaSpecify study characteristics (such as PICOS, length of follow-up) and report characteristics (such as years considered, language, publication status) used as criteria for eligibility, giving rationale.Examples T ypes of studies: “Randomised clinical trials studying the administration of hepatitis B vaccine to CRF [chronic renal failure] patients, with or without dialysis. No language, publication date, or publication status restrictions were imposed…”T ypes of participants: “Participants of any age withbase since the previous version of the review.An ideal background or introduction that sets context for readers might include the following. First, authors might define the importance of the review question from different perspectives (such as public health, individual patient, or health policy). Second, authors might briefly mention the current state of knowledge and its limitations. As in the above example, informa-tion about the effects of several different interventions may be available that helps readers understand why potential relative benefits or harms of particular inter-ventions need review. Third, authors might whet read-ers’ appetites by clearly stating what the review aims to add. They also could discuss the extent to which the limitations of the existing evidence base may be overcome by the review.Item 4: ObjectivesProvide an explicit statement of questions being addressed with reference to participants, interventions, comparisons, outcomes, and study design (PICOS).Example “T o examine whether topical or intralu-minal antibiotics reduce catheter-related bloodstream infection, we reviewed randomised, controlled trials that assessed the efficacy of these antibiotics for primary prophylaxis against catheter-related bloodstream infec-tion and mortality compared with no antibiotic therapy in adults undergoing hemodialysis.”34Explanation The questions being addressed, and the rationale for them, are one of the most critical parts of a systematic review. They should be stated precisely and explicitly so that readers can understand quickly the review’s scope and the potential applicability of the review to their interests.35 Framing questions so that they include the following five “PICOS” components may improve the explicitness of review questions: (1) the patient population or disease being addressed (P), (2) the interventions or exposure of interest (I), (3) the comparators (C), (4) the main outcome or endpoint of interest (O), and (5) the study designs chosen (S). For more detail regarding PICOS, see box 2.Good review questions may be narrowly focused or broad, depending on the overall objectives of the review. 
Sometimes broad questions might increase the applicability of the results and facilitate detection of bias, exploratory analyses, and sensitivity analyses.35 36 Whether narrowly focused or broad, precisely stated review objectives are critical as they help define other components of the review process such as the eligibility criteria (item 6) and the search for relevant literature (items 7 and 8).MethodsItem 5: Protocol and registrationIndicate if a review protocol exists, if and where it can be accessed (such as a web address), and, if available, provide registration information including the registra-tion number.Example “Methods of the analysis and inclusion criteria were specified in advance and documented in a protocol.”37Explanation A protocol is important because itBMJ | onlineresearch methods & reportingCaution may need to be exercised in including all identified studies due to potential differences in the risk of bias such as, for example, selective reporting in abstracts.60-62Item 7: Information sourcesDescribe all information sources in the search (such as databases with dates of coverage, contact with study authors to identify additional studies) and date last searched.Example “Studies were identified by searching electronic databases, scanning reference lists of articles and consultation with experts in the field and drug companies…No limits were applied for language and foreign papers were translated. This search was applied to Medline (1966 - Present), CancerLit (1975 - Present), and adapted for Embase (1980 - Present), Science Cita-tion Index Expanded (1981 - Present) and Pre-Medline electronic databases. Cochrane and DARE (Database of Abstracts of Reviews of Effectiveness) databases were reviewed…The last search was run on 19 June 2001. In addition, we handsearched contents pages of Jour-nal of Clinical Oncology 2001, European Journal of Cancer 2001 and Bone 2001, together with abstracts printed in these journals 1999 - 2001. A limited update literature search was performed from 19 June 2001 to 31 December 2003.”63Explanation The National Library of Medicine’s Medline database is one of the most comprehensive sources of healthcare information in the world. Like any database, however, its coverage is not complete and varies according to the field. Retrieval from any single database, even by an experienced searcher, may be imperfect, which is why detailed reporting is impor-tant within the systematic review.At a minimum, for each database searched, authors should report the database, platform, or provider (such as Ovid, Dialog, PubMed) and the start and end dates for the search of each database. This information lets readers assess the currency of the review, which is important because the publication time-lag outdates the results of some reviews.64 This information should also make updating more efficient.65 Authors should also report who developed and conducted the search.66In addition to searching databases, authors should report the use of supplementary approaches to identify studies, such as hand searching of journals, checking reference lists, searching trials registries or regula-tory agency websites,67 contacting manufacturers, or contacting authors. 
Authors should also report if they attempted to acquire any missing information (such as on study methods or results) from investigators or spon-sors; it is useful to describe briefly who was contacted and what unpublished information was obtained.Item 8: SearchPresent the full electronic search strategy for at least one major database, including any limits used, such that it could be repeated.Examples In text: “W e used the following search terms to search all trials registers and databases: immu-noglobulin*; IVIG; sepsis; septic shock; septicaemia; and septicemia…”68CRF or receiving dialysis (haemodialysis or peritoneal dialysis) were considered. CRF was defined as serum creatinine greater than 200 µmol/L for a period of more than six months or individuals receiving dialysis (haemodialysis or peritoneal dialysis)…Renal trans-plant patients were excluded from this review as these individuals are immunosuppressed and are receiving immunosuppressant agents to prevent rejection of their transplanted organs, and they have essentially normal renal function...”T ypes of intervention: “T rials comparing the benefi-cial and harmful effects of hepatitis B vaccines with adjuvant or cytokine co-interventions [and] trials com-paring the beneficial and harmful effects of immu-noglobulin prophylaxis. This review was limited to studies looking at active immunisation. Hepatitis B vaccines (plasma or recombinant (yeast) derived) of all types, dose, and regimens versus placebo, control vac-cine, or no vaccine…”T ypes of outcome measures: “Primary outcome measures: Seroconversion, ie, proportion of patients with adequate anti-HBs response (>10 IU/L or Sample Ratio Units). Hepatitis B infections (as measured by hepatitis B core antigen (HBcAg) positivity or persistent HBsAg positivity), both acute and chronic. Acute (pri-mary) HBV [hepatitis B virus] infections were defined as seroconversion to HBsAg positivity or development of IgM anti-HBc. Chronic HBV infections were defined as the persistence of HBsAg for more than six months or HBsAg positivity and liver biopsy compatible with a diagnosis or chronic hepatitis B. Secondary outcome measures: Adverse events of hepatitis B vaccinations…[and]…mortality.”50Explanation Knowledge of the eligibility criteria is essential in appraising the validity, applicability, and comprehensiveness of a review. Thus, authors should unambiguously specify eligibility criteria used in the review. Carefully defined eligibility criteria inform various steps of the review methodology. They influ-ence the development of the search strategy and serve to ensure that studies are selected in a systematic and unbiased manner.A study may be described in multiple reports, and one report may describe multiple studies. Therefore, we separate eligibility criteria into the following two com-ponents: study characteristics and report characteristics. Both need to be reported. Study eligibility criteria are likely to include the populations, interventions, compa-rators, outcomes, and study designs of interest (PICOS, see box 2), as well as other study-specific elements, such as specifying a minimum length of follow-up. 
Authors should state whether studies will be excluded because they do not include (or report) specific outcomes to help readers ascertain whether the systematic review may be biased as a consequence of selective reporting.42 43Report eligibility criteria are likely to include lan-guage of publication, publication status (such as inclu-sion of unpublished material and abstracts), and year of publication. Inclusion or not of non-English lan-guage literature,51-55 unpublished data, or older data can influence the effect estimates in meta-analyses.56-59。
How Much Do Expected Stock Returns Vary Over Time?

WAYNE E. FERSON, Boston College and NBER
ANDREA HEUSON, University of Miami
TIE SU, University of Miami

First draft: January 30, 1998; this revision: December 11, 2002

This paper makes indirect inference about the time-variation in expected stock returns by comparing unconditional sample variances to estimates of expected conditional variances. The evidence reveals more predictability as more information is used, more reliable predictability in indexes than in large common stocks, and no evidence that predictability has diminished over time. A "strong-form" analysis using options suggests that time-variation in market discount rates is economically important.

© 1998-2002 by Wayne E. Ferson, Andrea Heuson, and Tie Su. Ferson is the Collins Chair in Finance at Boston College, 140 Commonwealth Avenue, Chestnut Hill, MA 02467, ph (617) 552-6431, fax: 552-0431, email: wayne.ferson@, /~fersonwa. Heuson and Su are Associate Professors of Finance at the University of Miami, 5250 University Drive, Coral Gables, FL 33124. Heuson may be reached at (305) 284-1866, fax 284-4800, aheuson@. Su may be reached at (305) 284-1885, fax 284-4800, tie@. We are grateful to Gurdip Bakshi, Hendrick Bessembinder, Charles Cao, John Cochrane, Pat Fishe, Bruce Grundy, Ravi Jagannathan, Herb Johnson, Avi Kamara, Terence Lim, Stewart Mayhew, Simon Pak, Mark Rubinstein, Robert Stambaugh, William Ziemba, and anonymous referees for helpful discussions, comments, or other assistance. This paper was presented at the following universities and conferences: UC Riverside, Dartmouth, Emory, HEC School of Management, Houston, INSEAD, New South Wales, Oregon, the Stockholm School of Economics, Vanderbilt, Washington University in St. Louis, the University of Washington, the 1998 Maryland Symposium, the 2000 Midwestern Finance Association, the 1999 Utah Winter Finance Conference, the Spring 1999 NBER Asset Pricing Meetings, the 1999 Western Finance Association, the 1999 European Finance Association, and the 2000 American Finance Association meetings. Parts of this work were completed while Ferson was the Pigott-PACCAR Professor of Finance at the University of Washington, a Visiting Scholar at the University of Miami, and a Visiting Scholar at Arizona State University. Su acknowledges financial support from the Research Council at the University of Miami.

I. Introduction

The empirical evidence for predictability in common stock returns remains ambiguous, even after many years of research. This paper makes indirect inference about the time-variation in expected stock returns by comparing unconditional sample variances of return to estimates of expected conditional variances. The key to our approach is a sum-of-squares decomposition:

Var{R} = E{ Var(R|O) } + Var{ E(R|O) },   (1)

where R is the rate of return of a stock and O is the public information set. E(.|O) and Var(.|O) are the conditional mean and variance, and Var{.} and E{.}, without the conditioning notation, are the unconditional moments. We are interested in the term Var{ E(R|O) }; that is, the amount of variation through time in expected stock returns. We infer this quantity by subtracting estimates of the expected conditional variance from estimates of the unconditional variance. We focus on the predictability in monthly stock returns. This is motivated by the empirical literature on asset pricing, which most commonly studies monthly returns.

We use three approaches to estimating the average conditional variances.
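Before describing the three approaches, a small simulation may help fix ideas. The sketch below is our own illustration, not code from the paper: it generates returns whose conditional mean shifts each period and recovers the variance of the expected return by subtracting the average conditional variance from the unconditional variance, exactly as in equation (1). All parameter values are arbitrary.

```python
import numpy as np

# Illustrative only: simulate returns with a time-varying conditional mean and
# verify the decomposition Var(R) = E[Var(R|O)] + Var(E[R|O]) from equation (1).
rng = np.random.default_rng(0)
n_months = 1_000_000
mu_m = 0.006 + 0.004 * rng.standard_normal(n_months)  # conditional means E(R|O)
r = mu_m + 0.04 * rng.standard_normal(n_months)       # realized monthly returns

var_total = r.var()                      # unconditional variance Var(R)
avg_cond_var = 0.04**2                   # E[Var(R|O)], known here by construction
var_of_mean = var_total - avg_cond_var   # inferred Var(E[R|O]), as in equation (1)

print(f"true Var(E[R|O]):     {0.004**2:.2e}")
print(f"inferred Var(E[R|O]): {var_of_mean:.2e}")  # matches up to sampling error
```

In practice E{ Var(R|O) } must be estimated; the three approaches differ in the information used to do so.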
The three approaches correspond to the classical description of increasing market information sets described by Fama (1970). Weak-form information considers only the information contained in past stock prices. This analysis, summarized in Table 1, builds on a comparison of daily and monthly sample variances, and is related to the variance ratios studied by Lo and MacKinlay (1988) and others. Semi-strong form information relates to lagged variables that are clearly publicly available. Our analysis uses regressions of individual stock returns on lagged firm-specific characteristics. These results are reported in Table 2. Strong form refers to all relevant information that may be reflected in asset market prices. In this case, we use the implied volatilities from stock options to proxy for the expected conditional variances. These results are reported in Table 3.

Studies of predictability in stock index returns typically report regressions with small R-squares, as the fraction of the variance in returns that can be predicted with lagged variables is small. The R-squares are larger for longer-horizon returns, because expected returns are considered to be more persistent than returns themselves.1 However, because stock returns are very volatile, small R-squares can mask economically important variation in the expected return. To illustrate, consider the simple Gordon (1962) constant-growth model for a stock price: P = kE/(r-g), where P is the stock price, E is the earnings per share, k is the dividend payout ratio, g is the future growth rate of earnings, and r is the discount rate. The discount rate is the required or expected return of the stock. Stocks are long "duration" assets, so a small change in the expected return can lead to a large fluctuation in the asset value. Consider an example where the price/earnings ratio, P/E = 15, the payout ratio, k = 0.6, and the expected growth rate, g = 3%. The expected return is 7%. Suppose there is a shock to the expected return, ceteris paribus. In this example a change of one percent in r leads to approximately a 20% change in the asset value.

Of course, it is unrealistic to hold everything else fixed, but the example suggests that small changes in expected returns can produce large and economically significant changes in asset values. Consistent with this argument, studies such as Kandel and Stambaugh (1996), Campbell and Viceira (2001), and Fleming, Kirby, and Ostdiek (2001) show that optimal portfolio decisions can be affected to an economically significant degree by predictability, even when the amount of predictability in returns, as measured by R-squared, is small. Generalizing the Gordon model to allow for changes in growth rates, Campbell (1991) estimates that changes in expected returns through time may account for about half of the variance of equity index values.

Our estimates of the amount of time-variation in the expected returns of stocks are increasing with finer information. Weak-form tests find no reliable evidence of predictability in modern data. Semi-strong form tests find small but economically significant predictability. In contrast to recent studies that rely on aggregate predictor variables, we find no evidence that the predictability has diminished over time. Strong-form tests using option-implied expected conditional variances suggest that the variation in ex ante equity discount rates is highly economically significant for individual stocks, but the results for the index are ambiguous.

1 Thus, the variance of the expected return accumulates with longer horizons faster than the variance of the return, and the R-squared increases (see, e.g., Fama and French, 1988, 1989).
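As a quick check on the Gordon-model arithmetic above, the following sketch (our worked example; only k = 0.6, g = 3%, and P/E = 15 come from the text) computes the price response to discount-rate shocks, including the two-standard-deviation jump revisited in Section III.B:

```python
# Illustrative only: price sensitivity in the Gordon model, P/E = k / (r - g).
k, g = 0.6, 0.03        # payout ratio and growth rate from the text's example
r0 = k / 15 + g         # P/E = 15 implies r = 0.6/15 + 0.03 = 0.07

def pe(r):
    """Price/earnings ratio implied by the Gordon constant-growth model."""
    return k / (r - g)

for r1 in (0.08, 0.11):  # a one-point shock, and the 7% -> 11% jump used later
    change = pe(r1) / pe(r0) - 1
    print(f"r: {r0:.0%} -> {r1:.0%}, price change: {change:+.0%}")
# Output: a 7% -> 8% shock cuts the price by 20%; 7% -> 11% halves it.
```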
The rest of the paper is organized as follows. Section II discusses our three approaches to measuring the expected conditional variances of stock returns. Section III presents the empirical data and results. Conclusions are offered in Section IV. An appendix discusses data, estimation issues, and technical details.

II. Measuring Average Conditional Variances

A. Weak Form Information

In order to use Equation (1) we need to estimate the average variance of the return around its conditional mean, E{Var(R|O)} = E{[R - E(R|O)]^2}. The problem is that we don't know the conditional mean. Our approach in this section follows Merton (1980), who showed that while the mean of a stock return is hard to estimate, it is nearly irrelevant for estimating the conditional variance when the time between observations is short. We use high frequency returns to estimate the conditional variance, subtract its average from the monthly unconditional variance, and the difference is the variance of the monthly mean.

Nelson (1990, 1992) further develops Merton's idea. Suppose that the stock price can be approximated by a continuous process formed as a step function, with time intervals of length h between the steps. Take the interval [T-h, T), chop it into D pieces, and consider the average of the D squared price changes as an estimator for the conditional variance of the return over the interval. Nelson proves the estimator is consistent, in the sense that it approaches the conditional variance in the "continuous record" limit, as h approaches zero and D becomes infinite. The intuition is that for small h, the conditional mean is effectively constant, so the sample variance approaches the conditional variance as D grows. By similar logic, Nelson (1992) shows that misspecification of the conditional mean, which arises due to the inability to measure the information O, washes out as h gets small.

Evidence from Nelson (1991) supports the idea that for monthly stock returns, chopping the month into days should work well. He finds that daily returns measured with versus without dividends, or with a simple adjustment for risk-free interest rates, produce virtually the same estimates of conditional variances. Similarly, Schwert (1990) finds that different dividend series have almost no effect on the daily variances of a long historical stock return series that we use in our analysis.

We estimate E{Var(R|O)} by the time series average of daily variances for each month. Using monthly returns data, we estimate the unconditional variance, Var(R). Then, we infer the variance of the conditional expected return by Equation (1). To fix ideas, let the return for month m be R_m = ln(V_m/V_{m-1}) = Σ_{j∈m} δ_j, where V_m is the value of the stock at time m and δ_j is the daily log value change for day j. Assume that the conditional mean for month m is µ_m = E(Σ_{j∈m} δ_j), with E(δ_j | j∈m) = µ_m/D, D being the number of days in the month. The unconditional mean monthly return is E(µ_m) = µ, and we are interested in Var(µ_m), the variance of the monthly expected return. Define the average daily variance, ADV = E{ E[(δ_j - µ_m/D)^2 | j∈m] }, and the unconditional monthly variance, MV = E{(R_m - µ)^2}. Simple calculations show that Var(µ_m) = MV - D(ADV).
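A minimal sketch of this weak-form estimator follows; it is our illustration, assuming 21-day months, and it omits the serial-dependence adjustment discussed next and the finite-sample bias corrections of Appendix B:

```python
import numpy as np

def weak_form_predictability(daily_by_month):
    """Estimate Var(mu_m) = MV - D*ADV from daily log returns grouped by month.

    daily_by_month: list of 1-D arrays, one array of daily log returns per month.
    Returns the implied variance of the monthly conditional expected return.
    """
    monthly = np.array([d.sum() for d in daily_by_month])   # R_m = sum of daily changes
    mv = monthly.var(ddof=1)                                # unconditional monthly variance MV
    adv = np.mean([d.var(ddof=1) for d in daily_by_month])  # average daily variance ADV
    d_bar = np.mean([len(d) for d in daily_by_month])       # average days per month, D
    return mv - d_bar * adv

# Toy data: months whose conditional mean shifts with a 1% monthly standard deviation.
rng = np.random.default_rng(1)
months = [rng.normal(mu / 21, 0.01, size=21)
          for mu in 0.006 + 0.01 * rng.standard_normal(50_000)]
print(f"estimated Var(mu_m): {weak_form_predictability(months):.2e}")  # ~ 1e-4
```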
The model outlined above uses the approximation that the means shift monthly, while daily returns fluctuate independently around the conditional means. However, there is weak serial dependence in daily stock returns, which would influence the sample estimates. The question is whether or not to attribute this serial dependence to changes in the conditional expected return.

On the one hand, much of the literature on predictability allows that serial dependence may reflect changing conditional means. Fama and French (1988) use rate-of-return autoregressions to study predictability. Studies such as Lo and MacKinlay (1988) and Conrad and Kaul (1988) model expected returns within the month as autoregressive processes. On the other hand, serial dependence in daily returns can arise from end-of-day price quotes that fluctuate between bid and ask (Roll, 1984) or from nonsynchronous trading of the stocks in an index. These effects should not be attributed to time-variation in the expected discount rate for stocks. We estimate Var(µ_m) with and without adjustments for serial dependence. To illustrate the adjustment, let ρ = E{ E[(δ_j - µ_m/D)(δ_j' - µ_m/D) | j, j'∈m, |j-j'| = 1] }. Assuming that the first order daily serial dependence reflects market microstructure effects unrelated to discount rates, we estimate Var(µ_m) = MV - D(ADV + 2ρ).

In Appendix B we describe how the calculations are adjusted to obtain unbiased estimators in finite samples. Biases in the sample variances and autocovariances arise due to estimation error in the sample means. In addition, there is a "finite record" bias, which arises because h>0 and D<∞. To address these biases we use Monte Carlo simulations.

B. Semi-strong Form Information: Using Individual Stock Regressions

Much of the empirical literature on asset-return predictability uses regressions of stock-index returns on lagged, market-wide information variables. This approach raises two types of concerns. First, there are statistical problems associated with such regressions, especially when the data are heteroskedastic, the right-hand side variables are highly persistent, or the left-hand side variables are overlapping in time.2 Second is the issue of data mining. If the lagged instruments result from many researchers sifting through the same data sets, there is a risk of spurious predictability (Lo and MacKinlay 1990; Foster, Smith, and Whaley 1997).

We use time-series regressions for individual stocks to estimate the sum-of-squares decomposition in Equation (1), focusing on the aggregate predictability. The individual-stock regressions use firm-specific variables. From basic portfolio theory, individual-stock expected returns teach us about index predictability only to the extent that they covary. Consider the N x N covariance matrix of the conditional mean returns for N stocks, Cov{E(R|Z)}, where Z stands for the lagged, public information regressors. Letting 1 be an N-vector of ones, the variance of the conditional expected return on an equally-weighted portfolio, R_p = (1/N)1'R, is Var{E(R_p|Z)} = (1/N^2)1'Cov{E(R|Z)}1. Since there are N(N-1) covariance terms, but only N variances in the quadratic form, the expected return variance for the portfolio approaches the average of the firms' covariances, while the individual stock predictability vanishes as N gets large.2

2 Boudoukh and Richardson (1994) provide an overview of the statistical issues. Stambaugh (1999) and Ferson, Sarkissian, and Simin (2002) provide more recent analyses and references.
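The quadratic form is straightforward to compute from fitted regression values. The sketch below is our illustration (the variable names and toy data are hypothetical); it shows how the portfolio-level predictability comes to be dominated by the off-diagonal covariances as stocks are added:

```python
import numpy as np

def portfolio_predictability(fitted, include_diagonal=True):
    """Var{E(R_p|Z)} = (1/N^2) 1' Cov{E(R|Z)} 1 for an equally weighted portfolio.

    fitted: T x N array of fitted values E(R_it|Z_t) from per-stock regressions.
    With include_diagonal=False only the N(N-1) covariance terms are kept,
    mimicking the paper's check on what happens as similar stocks are added.
    """
    cov = np.cov(fitted, rowvar=False)   # N x N covariance matrix of conditional means
    n = cov.shape[0]
    if not include_diagonal:
        cov = cov - np.diag(np.diag(cov))
    return cov.sum() / n**2

# Toy example: fitted values that share a common time-varying component.
rng = np.random.default_rng(3)
common = rng.normal(0.0, 0.02, size=(360, 1))                    # co-moving part of E(R|Z)
fitted = 0.008 + common + rng.normal(0.0, 0.01, size=(360, 20))  # 20 stocks, 360 months
print(portfolio_predictability(fitted))                          # ~ 4.05e-4
print(portfolio_predictability(fitted, include_diagonal=False))  # ~ 3.8e-4: covariances dominate
```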
There is some correlation between our firm-specific variables and the instruments selected in studies of aggregate predictability, so we are not immune to data mining bias. However, such biases should be mitigated to some extent by our approach. Our measure does not rely on the direct index predictability that so many previous studies have explored. The number of studies that examine individual-stock return predictability with time-series regressions is still relatively small. We use Monte Carlo methods to handle the statistical issues, as described in Appendix B. Using only firm-specific instruments we probably understate the correlations among the expected returns, and therefore understate the aggregate predictability. If we use market-wide instruments for individual stocks, we probably overstate the correlations. Comparing the two cases we estimate a range of plausible values.

C. Strong Form: Using Implied Variances from the Options Markets

Empiricists cannot observe all of the information that economic agents might possess, and finer information is likely to reveal a greater degree of predictability. Our strong-form tests use the variance implied by option prices at time t, s_t, for an option that matures at time t+1, and assume that the average implied variance is the average conditional variance of returns: E{s} = E{Var(R_{t+1}|O_t)}. We define the option-implied predictability using this assumption and equation (1):

Implied predictability = Var{E(R|O)} = Var(R) - E(s).   (2)

Since the option-implied predictability derives directly from asset-market prices, the measure reflects the otherwise unobserved "market" information set, O.

The option-implied predictability uses insights from Lo and Wang (1995) and Grundy (1991), who emphasize that while standard option pricing formulae are invariant to the conditional mean of the stock, option prices should still reflect the predictability in asset returns. This works because option prices derive from stock returns "net" of their predictability. (For example, options may be priced using a "risk neutralized" distribution.) By comparing the implied volatility with the unconditional variance of the return, we draw inferences about the amount of predictability. The intuition is that, holding the unconditional variance constant, a situation with more predictability should imply a smaller conditional volatility, lower option prices, and thus lower implied volatilities.

The key assumption of equation (2) is that the average implied variance from stock options is the average conditional variance of the stock return. This is consistent with empirical studies that find implied volatilities to be informative instruments for conditional variance (e.g. Day and Lewis 1992; Lamoureaux and Lastrapes 1993; Fleming 1998). If our assumption introduces less error than the error in explicit models for expected returns, we should obtain more reliable inferences about the predictability using our approach. Since the implied volatility from any particular option pricing model can be a biased proxy for the average conditional variance, we use several option pricing models in our analysis and we explore the potential biases.
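A minimal sketch of the computation in equation (2) follows, assuming a series of monthly returns and matching option-implied variances is already in hand (both inputs here are made up for illustration):

```python
import numpy as np

def implied_predictability(monthly_returns, implied_vars):
    """Equation (2): Var{E(R|O)} = Var(R) - E(s).

    monthly_returns: realized monthly returns R_t.
    implied_vars: option-implied variances s_t for one-month horizons,
    expressed per month so the units match Var(R).
    """
    return np.var(monthly_returns, ddof=1) - np.mean(implied_vars)

# Made-up inputs: 20 years of monthly returns and a flat 20% annualized implied vol.
rng = np.random.default_rng(2)
r = rng.normal(0.006, 0.06, size=240)
s = np.full(240, 0.20**2 / 12)
print(f"implied predictability: {implied_predictability(r, s):.2e} per month")
```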
D. Caveats about Implied Volatility

In the option pricing model of Black and Scholes (BS, 1973), the implied volatility is the fixed diffusion coefficient of the log-price process of the stock, representing the conditional variance of infinitesimal holding period returns. If the drift of the process is constant, the implied variance is also the conditional variance per period of discrete holding period returns, and the implied predictability is zero. Merton (1973) shows that if the diffusion coefficient in a lognormal diffusion varies deterministically over time, the BS model holds and the implied variance is the average of the time-varying conditional variance. Lo and Wang (1995) show that the relation between the diffusion coefficient in a continuous-time model and the conditional variances of holding period returns generally depends on the specification of the drift of the process. If we knew the correct continuous-time model we could compute the theoretical mapping between the option-implied variance and the conditional variance of return. In this case, we could also compute the implied predictability directly from the process.

It is easy to compute option-implied variances from several models, as we do, but it is not easy to agree on the "true" continuous time model. In fact, we doubt that such a model exists. We know that asset prices cannot literally follow continuous-time processes. Markets are not always open. Even when they are, assets tend to trade discretely and nonsynchronously.3 Approaching continuous time, we run into microstructural effects at high frequencies, such as bid-ask spreads and tick sizes. Options cannot be hedged continuously (Leland 1985; Figlewski 1989), although continuous-time models typically assume that they are. It is therefore unlikely that option-implied volatilities correspond exactly to the predictions of any continuous time model. Without the "true" stochastic process, we can't infer the errors in our empirical proxy from theory. We therefore conduct a series of experiments to evaluate the potential biases.

The simple step function model behind our weak-form tests can motivate the assumption that option-implied variances are conditional variances. Nelson (1992) provides conditions under which the step function process becomes a diffusion process in the continuous-record limit. The diffusion coefficient of the limiting process is the conditional variance of return. Therefore, if the step function model is a reasonable approximation to our data, the option-implied variances should be a reasonable proxy for the conditional variances.

There are many reasons to think that option-implied variances contain measurement error. However, the implied predictability does not require a precise conditional variance at each date, because only the average value of s over time is used. To the extent that measurement errors average to zero, they do not bias the option-implied predictability. Errors in the implied variances will only create a problem if their sample means differ from zero.

3 Even in our sample of heavily traded stocks, CRSP records up to 15% of the daily returns as zeros, with more than 10% in six of 26 cases. See Lesmond, Ogden and Trzcinka (1999) for an analysis of transactions costs based on the frequency of zero returns.
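The measurement-error argument above can be stated compactly; the following is our formalization, not notation from the paper. Write the implied variance as the conditional variance plus an error:

\[
s_t = \operatorname{Var}(R_{t+1}\mid O_t) + e_t
\quad\Longrightarrow\quad
\operatorname{Var}(R) - E\{s\} \;=\; \operatorname{Var}\{E(R\mid O)\} - E\{e\},
\]

so period-by-period noise in s_t is harmless, and only a nonzero mean error E{e} biases the implied predictability (downward when implied variances are too high on average, upward when they are too low).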
The implied volatilities from any given model can be systematically biased for a variety of reasons. Some biases can produce implied predictability that appears too high, while other biases may work in the opposite direction. Still other factors have ambiguous effects on the implied volatility, or may bias both the implied and unconditional sample variances in the same direction, so the net effect on the implied predictability is an empirical issue. We conduct a series of experiments to assess the magnitudes of various sources of potential bias. The details are available by request to the authors, and Appendix C contains an abbreviated discussion.

III. Empirical Results

A. Results using Weak-form Information

Table 1 presents estimates of the variation in monthly conditional expected returns based on the comparison of monthly and daily return variances. Appendix A describes the data. The first three columns give the name of the return series and the starting and ending dates. The rows present results for the Standard and Poors index over different subsamples, followed by a summary of the individual stocks. The fourth column is the unconditional standard deviation of the monthly returns, expressed as an annual percent (the monthly variance is multiplied by 12, then the square root of this result is multiplied by 100; all of the numbers in the tables are annualized this way). Two estimators of predictability are shown. The estimator denoted by s(µ_m) ignores daily serial dependence, while the estimator denoted by s(µ_m)* adjusts for autocorrelation in daily returns, taking the view that daily serial correlation reflects microstructure issues unrelated to changes in discount rates. (The autocorrelation parameter, ρ, is positive for the Standard and Poors 500 index, and negative for 16 of the 26 firms.)

The estimates of predictability for the stock index are 6.2% and 4.8%, respectively, using Schwert's (1990) data for the 1885-1962 period; and 5.4% and 1.9% over the 1885-2001 period. However, over the 1962-2001 period where the CRSP daily data are used, there is little evidence of predictability. The estimated volatility of the expected returns is 2.0% if the serial dependence is allowed in the estimate, but -6.2% if serial dependence is assumed to be unrelated to the ex ante discount rate. Over the most recent 120 months of the sample the two estimates are -8.3% and -8.9%, respectively.4 For individual stocks the average point estimates of predictability are negative, and the individual estimates are negative in all but four cases, but none are significantly different from zero. Thus, no weak-form evidence for predictability is found at the firm level.

We estimate the finite sample bias in the predictability using Monte Carlo simulations.5 The adjusted estimates, after subtracting the expected bias, are shown in columns 7 and 8 of Table 1. They tell a similar story. The estimates for the index are 8.9% and 7.5% for the 1885-1962 period. However, using CRSP data for 1962-2001, one estimate is 4.2% while the other is -4.1%. Over the last 120 months the adjusted estimates are -6.6% and -7.3%, respectively. The values for the average individual stocks are typically negative, and insignificantly different from zero.6 Thus, our weak-form tests find only weak evidence of time-varying expected returns on the market index, concentrated in the pre-1962, pre-CRSP data, but no evidence for predictability at the firm level.7

B. Semi-Strong form Tests Using Individual Stock Regressions

Table 2 presents our estimates of predictability based on the covariances of individual stock regressions on the lagged variables. The regressions use monthly data from 1969 through 2001 and twenty large common stocks. We estimate predictability for an equally weighted portfolio. The first column shows the results when each regression uses both firm-specific and index characteristics as the lagged regressors, in which case the covariances among the expected returns are probably overestimated. The estimated standard deviations of monthly expected returns are between 6% and 7% annualized, depending on whether or not we exclude the diagonals from the calculation. (Excluding the diagonals provides some information on what would be expected to happen as the number of similar stocks used in the calculation is increased.) Using only firm-specific variables in the regressions reduces the estimated predictability to the 2.4% to 3.3% range, as shown in the second column.

The estimates of predictability in Table 2 are similar whether we use the full sample or concentrate on the last 120 months. This is interesting in view of recent empirical finance studies that find predictability, measured using lagged variables like aggregate dividend yields and bond yield spreads, has weakened in recent samples. It may be that the predictability was "real" when first publicized, but diminished as traders attempted to exploit it.8 It may also be that the predictability was spurious in the first place. If the predictability is spurious we would expect instruments to appear in the empirical literature, then fail to work with fresh data (e.g., Ferson, Sarkissian, and Simin, 2002). In this case, we should be suspicious of any "stylized facts" based on the aggregate instruments. The aggregate dividend yield rose to prominence in the 1980s, but fails to work in post-1990 data. The book-to-market ratio seems also to have weakened in recent data (see, e.g., Bossaerts and Hillion, 1999; Goyal and Welch, 1999; and Schwert, 2002). But Table 2 presents no evidence that predictability is weaker in the recent ten-year period. As the measures in Table 2 do not rely on index predictability using aggregate predictor variables, the table provides interesting evidence that the underlying predictability has not diminished.

4 Our procedure does not constrain the estimated variance of the expected return to be positive. This could be done, following studies such as Boudoukh, Richardson, and Whitelaw (1993), and should produce more precise estimates.
5 We resample from the actual data for a given stock or index, randomly with replacement. For each simulation trial we generate an artificial time series with the same number of daily observations as the original data series. The artificial data satisfy the null hypothesis that the expected return is constant. We compute the estimators on the artificial sample in exactly the same way as on the original samples. We repeat this for 1,000 trials. The average across the trials is the expected finite sample bias. We use the distribution of the simulated estimates to generate empirical p-values. These are the fraction of the simulations where the variance estimates are larger than the sample values. A small p-value means that the sample estimate is unlikely to occur if expected returns are constant.
6 We also run simulations for the estimators without adjustment for finite sample biases, relying on the simulations to control the biases. The results are similar.
7 Nelson and Startz (1993) also find that weak-form evidence for stock index predictability is thin in post World War II data.
8 Lo and MacKinlay (1999) present weak form tests with less evidence for predictability in more recent data, and suggest this may be related to "statistical arbitrage" trading by Wall Street firms.
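The resampling scheme in footnote 5 is simple to reproduce. The sketch below is our reconstruction for illustration (it follows the footnote's description, not the authors' code; the estimator argument is any function of a daily return series):

```python
import numpy as np

def simulate_bias_and_pvalue(daily, estimator, n_trials=1000, seed=0):
    """Monte Carlo bias adjustment in the spirit of footnote 5.

    daily: array of daily returns. estimator: function mapping a daily return
    series to a predictability estimate. Resampling with replacement destroys
    any time-variation in the mean, so the null of constant expected returns
    holds in every artificial sample.
    """
    rng = np.random.default_rng(seed)
    sample_stat = estimator(daily)
    sims = np.array([estimator(rng.choice(daily, size=daily.size, replace=True))
                     for _ in range(n_trials)])
    bias = sims.mean()                      # expected finite-sample bias under the null
    p_value = (sims >= sample_stat).mean()  # fraction of trials at or above the sample value
    return sample_stat - bias, p_value

# Usage: adjusted, p = simulate_bias_and_pvalue(daily_returns, my_estimator)
# (daily_returns and my_estimator are placeholders for the user's own inputs).
```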
As the measures in Table 2 do not rely on index predictability using aggregate predictor variables, the table provides interesting evidence that the underlying predictability has not diminished.The regressions behind Table 2 are subject to statistical biases, which we control via simulation as discussed in Appendix B. The bias-adjusted estimates are reported in the fourth column. Using only firm-specific lagged variables the estimates are 1.6% to 2.3% annualized, which is statistically significant according to the empirical p-values. Again, the values are similar for the last 120 months of the sample.Our semi-strong form estimates of predictability provide more reliable evidence of time-variation in monthly stock returns than our weak-form tests, which is expected if returns are more easily predicted using more information. But is 2% on an annual basis an economically significant effect? The simple Gordon model example from the introduction provides an illustration. Consider a month in which the required expected return jumps by two standard deviations, from 7% to 11%. Other things held fixed, the stock price would fall to half of its former value in response.C. Strong-form Answers from the Options MarketsWe estimate the option implied predictability using the Generalized Method of Moments (GMM, Hansen 1982). The GMM estimates are obtained by replacing the population moments in (2) by their sample analogues. The standard errors and test statistics then follow from standard results in Hansen (1982).9 The key input is the9 Given monthly returns R t and implied variances s, the system of moment conditions is:。