A Knowledge-Based Methodology for Tuning Analytical Models
2009 College Entrance Exam English Intensive Reading (2): Method of Scientific Inquiry

Why the inductive and mathematical sciences, after their first rapid development at the culmination of Greek civilization, advanced so slowly for two thousand years—and why in the following two hundred years a knowledge of natural and mathematical science has accumulated, which so vastly exceeds all that was previously known that these sciences may be justly regarded as the products of our own times—are questions which have interested the modern philosopher not less than the objects with which these sciences are more immediately conversant. Was it the employment of a new method of research, or in the exercise of greater virtue in the use of the old methods, that this singular modern phenomenon had its origin? Was the long period one of arrested development, and is the modern era one of normal growth? Or should we ascribe the characteristics of both periods to so-called historical accidents—to the influence of conjunctions in circumstances of which no explanation is possible, save in the omnipotence and wisdom of a guiding Providence?

The explanation which has become commonplace, that the ancients employed deduction chiefly in their scientific inquiries, while the moderns employ induction, proves to be too narrow, and fails upon close examination to point with sufficient distinctness the contrast that is evident between ancient and modern scientific doctrines and inquiries. For all knowledge is founded on observation, and proceeds from this by analysis, by synthesis and analysis, by induction and deduction, and if possible by verification, or by new appeals to observation under the guidance of deduction—by steps which are indeed correlative parts of one method; and the ancient sciences afford examples of every one of these methods, or parts of one method, which have been generalized from the examples of science.

A failure to employ or to employ adequately any one of these partial methods, an imperfection in the arts and resources of observation and experiment, carelessness in observation, neglect of relevant facts, by appeal to experiment and observation—these are the faults which cause all failures to ascertain truth, whether among the ancients or the moderns; but this statement does not explain why the modern is possessed of a greater virtue, and by what means he attained his superiority. Much less does it explain the sudden growth of science in recent times.

The attempt to discover the explanation of this phenomenon in the antithesis of "facts" and "theories" or "facts" and "ideas"—in the neglect among the ancients of the former, and their too exclusive attention to the latter—proves also to be too narrow, as well as open to the charge of vagueness. For in the first place, the antithesis is not complete. Facts and theories are not coordinate species. Theories, if true, are facts—a particular class of facts indeed, generally complex, and if a logical connection subsists between their constituents, have all the positive attributes of theories.

Nevertheless, this distinction, however inadequate it may be to explain the source of true method in science, is well founded, and connotes an important character in true method. A fact is a proposition of simple verification. A theory, on the other hand, if true, has all the characteristics of a fact, except that its verification is possible only by indirect, remote, and difficult means. To convert theories into facts is to add simple verification, and the theory thus acquires the full characteristics of a fact.

1. The title that best expresses the ideas of this passage is
[A]. Philosophy of mathematics. [B]. The Recent Growth in Science. [C]. The Verification of Facts. [D]. Methods of Scientific Inquiry.

2. According to the author, one possible reason for the growth of science during the days of the ancient Greeks and in modern times is
[A]. the similarity between the two periods. [B]. that it was an act of God. [C]. that both tried to develop the inductive method. [D]. due to the decline of the deductive method.

3. The difference between "fact" and "theory"
[A]. is that the latter needs confirmation. [B]. rests on the simplicity of the former. [C]. is the difference between the modern scientists and the ancient Greeks. [D]. helps us to understand the deductive method.

4. According to the author, mathematics is
[A]. an inductive science. [B]. in need of simple verification. [C]. a deductive science. [D]. based on fact and theory.

5. The statement "Theories are facts" may be called
[A]. a metaphor. [B]. a paradox. [C]. an appraisal of the inductive and deductive methods. [D]. a pun.

Vocabulary
1. inductive adj.; induction n. (the inductive method)
2. deductive adj.; deduction n. (the deductive method)
TOEFL Physics Vocabulary: Methodology

Methodology: English phrases and example sentences.

1) methodology, BrE /ˌmeθəˈdɒlədʒi/, AmE /ˌmeθəˈdɑːlədʒi/
1. Theoretical System and Methodology of Coal Structural Chemistry
2. Discussion on the Research Methodology of the Law of Evidence
3. Thinking on Systems Biology and Its Methodology

English phrases and example sentences:
1. On Innovation from the Methodology of Law and Economics to Traditional Legal Methodology
2. On the Methods and Methodology of Feminist Research in Education
3. A Methodological Conjecture: the Evolution of the "Rule of Law" Gene
4. The Scientific Development Theory and the Ecological Trend of Jurisprudence Methodology
5. Methodology of Law and Economics: A General Review
6. On the Object of Proof in Litigation: Insights from Legal Methodology
7. On the Systems Analysis-Synthesis Method and Its Methodological Implications
8. The Object of Existentialism: Ontology, Theory of Existence and Generative Methodology
9. Considering Poetry Is Just Like Considering Chan: An Analysis of Yan Yu's Methodology of "Considering Poetry as Chan"
10. Epistemological and Methodological Significance of Pound's Theory of Translation
11. On Marx's "Leap-over" Theory and Its Methodological Significance
12. On Jiang Zemin's Theory of Innovation and Its Methodological Meaning
13. On Epistemological and Methodological Problems in Probability Theory
14. Housing Price Forecasting Method Based on the TEI@I Methodology
15. An Analysis of the Necessity of Artistic Methodology in Education Research Methodology
16. Exploration of Teaching Practice for "Middle School Mathematics Methodology"
17. Methods, Theory and Methodology of Crime Reconstruction in the USA
18. On the Source of Law: from the Perspective of Legal and Jurisprudential Methodologies

Related phrases and examples: method, BrE /ˈmeθəd/, AmE /ˈmeθəd/
1. Methodological Research on the Selection and Evaluation of Key Technologies
2. On the Methodological Characteristics of Jiang Zemin's Thought and Theory
3. The article elaborates the four levels and significance of industrial design methodology, then, combining them with practice, analyzes the design and development process of the world automobile brand BMW and its fifth-generation product, the BMW 5 Series sedan, chiefly by applying innovative design, form-composition and design-management methods.
Part 1

I. Opening Remarks and Self-Introduction

1. Question:
- Good morning/afternoon, Professor [Last Name]. Thank you for joining us today. To start, could you please introduce yourself and provide a brief overview of your academic background?
Sample answer:
- Good morning/afternoon, everyone. It's a pleasure to be here today. My name is [Your Name], and I am currently an Associate Professor at [Your University/Institution]. I specialize in [Your Field of Study], and my research focuses on [Brief Description of Your Research Focus]. I have been teaching at [Your University/Institution] for [Number of Years] years, and during this time, I have developed a strong passion for both teaching and research.

2. Question:
- Can you tell us about your most significant academic achievement or publication?
Sample answer:
- One of my most significant academic achievements is the publication of my book, "Title of the Book," which was released in [Year]. This book explores [Topic of the Book] and has been well received by both academics and the general public. It has been cited in several research papers and has contributed to the advancement of knowledge in my field.

II. Expertise and Research

3. Question:
- Can you describe a recent research project that you have been involved in? What was the goal of the project, and what were the key findings?
Sample answer:
- I recently led a research project titled "Project Title," which aimed to investigate [Objective of the Project]. The project involved [Description of Methods Used], and the key findings were [Summary of Findings]. Our research has provided new insights into [Area of Study] and has the potential to influence [Relevant Field or Practice].

4. Question:
- How do you incorporate the latest developments in your field into your teaching and research?
Sample answer:
- I stay updated with the latest developments in my field through continuous reading, attending conferences, and collaborating with other scholars. In my teaching, I ensure that my courses are current and that I incorporate recent research findings and case studies. This not only keeps my students engaged but also equips them with the most up-to-date knowledge in their field.

III. Teaching Experience and Methods

5. Question:
- Can you describe a teaching method that you have found particularly effective in your classroom?
Sample answer:
- One teaching method that I have found particularly effective is the use of problem-based learning (PBL). In PBL, students are presented with real-world problems that they must work on in groups. This approach encourages critical thinking, teamwork, and the application of theoretical knowledge to practical situations. It has been very successful in engaging students and promoting deeper understanding of the subject matter.

6. Question:
- How do you assess student performance in your courses?
Sample answer:
- I use a variety of assessment methods to evaluate student performance, including written exams, presentations, research papers, and practical assignments. These assessments are designed to test a range of skills, from theoretical knowledge to practical application. I also provide feedback on student work to help them understand their strengths and areas for improvement.

IV. Academic Exchange and Collaboration

7. Question:
- Can you tell us about a collaborative project you have been involved in with another academic or institution?
Sample answer:
- I have been involved in a collaborative project with Professor [Collaborator's Name] from [Collaborator's Institution] titled "Project Name." This project aimed to [Objective of the Project]. We worked together to [Description of Collaboration], and the outcomes have been [Summary of Outcomes].
This collaboration has been mutually beneficial and has enhanced both our research and teaching efforts.

8. Question:
- How do you stay motivated to continue producing high-quality research and teaching?
Sample answer:
- Staying motivated is crucial in both research and teaching. I find inspiration in the potential impact of my work, both on the academic community and in the broader context. The passion for my field, the enthusiasm of my students, and the support of my colleagues and institution all contribute to my motivation. Additionally, I set clear goals and timelines for my research and teaching activities, which helps me stay focused and productive.

V. Future Plans and Contributions

9. Question:
- What are your future research plans, and how do you envision contributing to your field?
Sample answer:
- My future research plans include [Description of Future Research Directions]. I aim to [Goals of Future Research]. I believe that by addressing these questions, I can contribute to the advancement of [Your Field of Study] and provide practical solutions to [Specific Challenges or Issues]. I am also committed to mentoring young scholars and passing on my knowledge and experience.

10. Question:
- How do you think you will contribute to the academic community at our institution?
Sample answer:
- I am excited about the opportunity to contribute to the academic community at your institution. I plan to actively engage in interdisciplinary research, collaborate with faculty members across departments, and contribute to the development of new courses and programs. I also intend to mentor graduate students and junior faculty, helping them to grow professionally and academically.

VI. Wrap-Up

11. Question:
- Is there anything else you would like to add that we haven't discussed yet?
Sample answer:
- Yes, I would like to emphasize my enthusiasm for joining your institution and contributing to its academic excellence. I am particularly interested in the opportunities for collaboration and the supportive environment that I have observed here. I am confident that my research and teaching experiences will be valuable assets to your institution, and I am looking forward to making a positive impact.

This concludes the interview. Thank you for your time and consideration.

---

The above response templates are designed to provide a comprehensive guide for an Associate Professor's English interview. The actual content should be tailored to the individual's specific experiences, research, and teaching philosophy.

Part 2

Introduction:
The following set of questions is designed to assess the candidate's qualifications, expertise, teaching philosophy, research interests, and ability to contribute to the academic community. The questions are categorized into different sections to provide a comprehensive evaluation of the candidate's suitability for the position of Associate Professor in English Language and Literature.

Section 1: Background and Qualifications

1. Academic Background:
- Can you describe your academic journey from undergraduate to doctoral studies?
- What inspired you to pursue a career in English Language and Literature?

2. Research Experience:
- What are your primary research interests within English Language and Literature?
- Can you provide an overview of your research methodology and findings?
- How have your research projects contributed to the field?

3. Teaching Experience:
- What teaching roles have you held in your academic career?
- Describe your teaching philosophy and approach to student engagement.
- How do you assess student learning and provide feedback?
4. Publications and Presentations:
- Can you list your recent publications and the impact they have had?
- What role do conferences and workshops play in your academic development?
- How do you stay updated with the latest trends and developments in your field?

Section 2: Teaching and Pedagogical Skills

5. Course Development:
- What courses have you taught or are you interested in teaching at this institution?
- How do you approach the development of new courses or the revision of existing ones?

6. Student Engagement:
- How do you create an inclusive and supportive learning environment for diverse student populations?
- Can you provide an example of a successful student engagement strategy you have used?

7. Technology in Teaching:
- How do you incorporate technology into your teaching practices?
- What are your thoughts on the use of online platforms and virtual learning in higher education?

8. Assessment and Evaluation:
- What methods do you use to assess student performance in your courses?
- How do you ensure that assessments are fair, valid, and reliable?

Section 3: Research and Scholarship

9. Current Research Projects:
- What are you currently working on in terms of research?
- How do you plan to continue your research at this institution?

10. Collaboration and Mentorship:
- How do you collaborate with colleagues on research projects?
- What is your approach to mentoring graduate students and postdoctoral researchers?

11. Research Impact:
- How do you measure the impact of your research on the field and beyond?
- What strategies do you employ to disseminate your research findings?

Section 4: Contribution to the Academic Community

12. Service to the University:
- How have you contributed to the academic community within your current institution?
- What roles have you played in university governance and committees?

13. Community Engagement:
- How do you engage with the local community through your academic work?
- Can you provide an example of a community-based project or initiative you have led or participated in?

14. Professional Development:
- What professional development activities do you engage in to enhance your teaching and research?
- How do you stay connected with the broader academic community?

Section 5: Future Plans and Vision

15. Long-Term Goals:
- What are your long-term career goals as an Associate Professor?
- How do you envision your research and teaching evolving over the next decade?

16. Institutional Fit:
- Why are you interested in joining this particular institution?
- How do you see your work contributing to the mission and values of the university?

17. Closing Questions:
- Is there anything else you would like to share with us about your qualifications or experiences that we have not covered?
- How do you see yourself contributing to the English Language and Literature department at this institution?

Conclusion:
This comprehensive set of interview questions is designed to provide a thorough assessment of the candidate's suitability for the position of Associate Professor in English Language and Literature. It aims to evaluate their academic background, teaching and research skills, contributions to the academic community, and their vision for the future. The candidate's responses will be carefully considered to determine their potential to excel in this role and to further the academic excellence of the institution.

Part 3

I. Self-Introduction
1. Please briefly introduce your personal information, including your name, age, place of origin, and educational background.
Selected English Thesis Samples: Sample 1

Chapter One INTRODUCTION
1.1 Research Background
High proficiency in writing is a key to success in a wide variety of situations and professions; meanwhile, it is of critical importance for students applying for promising jobs. Writing skills for university students are among the overwhelming indicators of success in academic work during their freshman year of college (Geiser & Studley, 2001). Writing skills for professionals are critical for their daily work and essential for application and promotion within their disciplines (Light, 2008). Writing induces the capability of constructing logic, articulating ideas, debating opinions, and sharpening multiple perspectives. As a result, effective writing is conducive to associating convincingly with communication targets, including teachers, peers, colleagues, coworkers, and the community at large (Crowhurst, 1990). No wonder writing skill is an indispensable part of every test at home and abroad, such as TOEFL, IELTS, GRE, BEC, CET4, CET6, TEM4, TEM8 and so on.

Notwithstanding such manifestation of the significance of writing, the 2002 National Assessment of Educational Progress (NAEP) report in the U.S.A. found that less than a third of students in Grade 4 (28%), Grade 8 (31%), and Grade 12 (21%) scored at or above proficient levels, and only 2% wrote at advanced levels for all three samples. Moreover, only 9% of Grade 12 Black students and only 28% of Grade 12 White students were able to write at a proficient level (National Center for Educational Statistics, 2003).
……
1.2 Significance of the Research
Based on the CET4 and CET6 compositions extracted from the CLEC, the study aims to reveal the relationship between linguistic features and writing quality by means of advanced software, namely the Lexical Frequency Profile, Coh-Metrix 3.0 and the L2 Syntactic Complexity Analyzer, for the analysis of vocabulary, syntax and textual cohesion. This study will be of great value mainly in the following two respects.

Firstly, theoretically speaking, the study offers guidance and reference for the teaching methodology of L2 writing. The study reveals the contributions of lexical diversity, syntactic complexity and textual cohesion to writing quality, identifies the most decisive factor in writing quality, and analyzes the mutual relationships between lexical diversity and quality of writing, syntactic complexity and quality of writing, as well as textual cohesion and quality of writing. Hopefully, this research will shed some light on the instruction of CET4 and CET6 writing and provide practical advice.

Secondly, practically speaking, the study demonstrates a new direction for the development of automatic assessment of writing. The study is carried out both by software and by manual work to comprehensively examine more than 28 variables that might have an impact on writing quality and to build a relation model between these variables and writing scores.
……
Chapter Two LITERATURE REVIEW
2.1 Lexical Features and Quality of Writing
In the process of L2 writing, students are always perplexed by vocabulary. Leki & Carson (1994) surveyed 128 L2 learners about their feelings on the course English for Academic Purposes (EAP). It was discovered that students' strongest zeal is to improve their language proficiency, especially lexical proficiency.
Jordan (1997) obtained a similar conclusion in his study of Chinese students in the UK applying for their master's degrees, 62% of whom regarded vocabulary as their biggest problem in the process of English writing. Over the past two decades, researchers have attached more and more importance to L2 vocabulary studies. As an important element of language proficiency, lexical proficiency is defined from different perspectives and evaluated by a series of measurements. Meanwhile, lexical proficiency is, to a large extent, embodied in lexical features. As a matter of fact, studies on lexical features have received more and more attention from researchers at home and abroad, mainly focusing on total words, lexical diversity (LD) or lexical richness (LR), and lexical complexity (LC), among which lexical diversity or lexical richness has gained more popularity for lexical proficiency study.
……
2.2 Syntactic Features and Quality of Writing
Syntactic complexity (also called syntactic maturity, or linguistic complexity) is important in the prediction of the quality of student writing. Wolfe-Quintero et al. (1998) pointed out that a syntactically complex writer uses a wide variety of both basic and sophisticated structures, while a syntactically simple writer uses only a narrow range of basic structures. In the past half century, researchers adopted many different indices to study syntactic complexity and attempted to find the relationships among scores, grades, ages and writing quality. Syntactic complexity is defined as "the range of forms that surface in language production and the degree of sophistication of such forms" (Ortega, 2003). It is an important factor in the second language assessment construct as described in Bachman's (1990) conceptual model of language ability, and is therefore often used as an index of the language proficiency and development status of L2 learners. Various studies have proposed and investigated measures of syntactic complexity and examined their predictiveness for language proficiency, in both L2 writing and speaking settings, which will be reviewed respectively. Syntactic complexity is also called syntactic maturity, referring to the range of language production forms and the degree of form complexity. Therefore, the length of the production unit, the amount of sentence embeddedness and the range of structure types are all subjects of syntactic complexity (Ortega 2003: 492).
………
CHAPTER THREE METHODOLOGY
3.1 Composition Collection
3.2 Tools
3.3 Variables
3.3.1 Dependent variables
3.3.2 Independent variables
3.4 Data Analysis
CHAPTER FOUR DATA ANALYSIS AND RESULTS
4.1 Quantitative Differences in High- and Low-Proficiency Writings
4.2 Comparison between Quantitative Features of CET4
4.3 Impacts of Quantitative Features on Writing Quality
CHAPTER FIVE DISCUSSION
5.1 Lexical Diversity and Writing Quality
5.2 Syntactic Complexity and Writing Quality
5.3 Textual Cohesion and Writing Quality

Chapter Five DISCUSSION
5.1 Lexical Diversity and Writing Quality
The U index assessing lexical diversity has shown a significant difference between high- and low-proficiency writing in both CET4 and CET6. This may suggest that high-proficiency writings display more diverse vocabulary, which differs from the study of Wang (2004). In his study, the target students had similar lexical diversity.
Among the indices assessing lexical diversity in his study, no index showed a significant difference between high- and low-proficiency writings or correlated with writing scores. He explained the possible reason for such a result as the significant difference in average word counts. However, this result is probably attributable to his measurement of lexical diversity. In his study, TTR was employed as the index of lexical diversity, but, as mentioned above, TTR is reliable only when texts have the same length. In Wang's study, texts vary in length; thus longer texts tend to have lower TTR. That is why the relationship between lexical diversity and writing quality is blurred. But in this study, we adopted the U index to measure lexical diversity in CET compositions, for the U index can avoid the weakness of TTR and eliminate the influence of text length (a minimal illustrative sketch of TTR versus a length-robust index appears after these thesis excerpts). Besides, Liu (2003) studied 57 second-year college students in two natural classes and found that vocabulary size had no immediate effect on writing scores. However, this study's finding that lexical diversity has a positive impact on the quality of writing is in accordance with the study of McNamara et al. (2001).
……
Conclusion
This study aims to explore the relationship between lexical features and L2 writing quality with the help of the Lexical Frequency Profile, the relationship between syntactic features and L2 writing quality through the computational tool L2 Syntactic Complexity Analyzer, and the relationship between cohesive features and second-language writing quality with the help of the computational tool Coh-Metrix 3.0. Meanwhile, the study gives us information about the textual representation of different writing proficiencies along multiple textual measurements. This section summarizes the major findings of the study and presents theoretical, methodological and pedagogical implications for L2 writing research. Limitations of the present study and suggestions for further studies are raised at the end.
……
Reference (omitted)

Selected English Thesis Samples: Sample 2

Chapter One Introduction
1.1 Background of the Research
English writing is an important way of communication, which can enhance the ability of language acquisition in the process of second language learning. As one of the language skills, English writing is very difficult to master. After many years, students still find their writing unsatisfactory and full of problems. It is widely acknowledged that much attention should be paid to English writing. At present, our college English writing teaching is time-consuming and of low effectiveness: teachers spend a lot of time and energy reading and correcting students' compositions, but the efficiency is not high; at the same time, students spend a lot of time writing, and the results are not satisfactory.

The following conspicuous problems tend to exist in English writing. First, when given a topic, students tend to think in Chinese and do a translation job. Second, students spend too much time avoiding grammatical errors in the process of writing, which leads to neglect of the overall organization of the composition. Third, enriching the content during the writing process is difficult for students, for they fail to support their viewpoints with appropriate examples and strong arguments. English writing is the weakest part of English learning, especially for Chinese vocational college students.
According to the Basic Teaching Requirements for Vocational College English Courses, developing students' comprehensive abilities to use the English language is the teaching aim of vocational college English. In terms of writing, students should master basic writing skills and accomplish writing tasks of different types, including narration, description, argumentation and practical writing such as business emails or announcements. Besides, their writing should have clear organization and proper coherence; at the same time, students should be able to write or describe something with adequate content and proper form in different situations, such as business settings.
……
1.2 Purpose and Significance of the Research
As we can see, most English classes in vocational colleges are big classes containing at least sixty students, and in class students may not receive feedback from the teacher immediately, although offering feedback is one of the teacher's essential tasks. It is helpful and efficient for teachers if students themselves can check others' writing and give comments. So these two kinds of feedback each have their own role in revision. Considering vocational college education, examining the practice of teacher feedback and peer feedback on EFL writing is of great importance and necessity. This study aims to discuss the effects of teacher feedback and peer feedback in the English class in order to provide useful English writing teaching methods and study approaches for vocational college education. This is not only consistent with the spirit of the new curriculum but also reflects the "student-centered" teaching philosophy.
……
Chapter Two Literature Review
2.1 Feedback Theory
Feedback is widely seen in education as crucial for both encouraging and consolidating learning (Anderson, 1982; Brophy, 1981; Vygotsky, 1978), and its importance has also been acknowledged in the field of English writing.

In language learning, feedback means evaluative remarks which are available to language learners concerning their language proficiency or linguistic performance (Larsen-Freeman, 2005). In the field of teaching and learning, feedback is referred to by many terms, such as response, review, correction, evaluation or comment. Whatever the term, it can be defined as "comments or information learners receive on the success of a learning task, either from the teacher or from other learners" (Richards et al., 1998). A more detailed description of feedback in terms of writing is that feedback is "input from a reader to a writer with the effect of providing information to the writer for revision" (Keh, 1990). Feedback ranges from the presentation of general grammatical explanation to specific error correction. Its purpose is to improve students' writing ability through the description and correction of errors. The role of feedback is to let writers learn where they have misled or confused the reader by supplying insufficient information, illogical organization, lack of development of ideas, or something like inappropriate word choice or tense (Keh, 1990).
……
2.2 Theoretical Foundations of Feedback
Collaborative learning, also called cooperative learning, is the second theoretical basis backing the application of feedback in writing class. It is feasible for students to communicate actively with each other in the classroom. There is a clear difference between student-centered and traditional teacher-led classrooms.
Students' enthusiasm for participating in group discussion strengthens when students are completely absorbed in collaborative learning in the student-centered class. When students get together to work out a problem, ideas are conveyed among them and immediate feedback is received from their group members.

Collaborative learning emphasizes that both students and instructors participate and interact actively (Hiltz, 1997). Collaborative learning is viewed from both behavioral and humanistic perspectives (Slavin, 1987). The behavioral perspective stresses that students are encouraged to study in a cooperative situation and rewarded as a group rather than individually. As for the humanistic perspective, more understanding and better performance are gained from interaction among peers. So it is obvious that collaborative learning pays more attention to the influence of peers, which is different from previous English writing teaching theories (Johnson and Johnson, 1986). Collaborative learning makes students work and learn together to maximize their own and each other's study.
……
Chapter Three Research Methodology
3.1 Research Questions
3.2 Subjects
3.3 Instruments
3.3.1 Writing Tasks
3.3.2 Questionnaires
3.3.3 Pre-test and Post-test
3.4 Research Design
3.5 Data Collection
Chapter Four Results Presentation and Discussion
4.1 Students' Changed Writing Proficiency
4.2 Students' Changed Interest in English Learning and Writing
Chapter Five Conclusion
5.1 Major Findings
5.2 Pedagogical Implications and Suggestions
5.3 Limitations of the Study
5.4 Suggestions for Further Study

Chapter Four Results Presentation and Discussion
4.1 Students' Changed Writing Proficiency
The data from the pre-test and post-test of the EC and CC were all collected and analyzed with SPSS 13.0 to investigate the difference before and after the adoption of teacher feedback and peer feedback in the English writing class. As Table 4-1 shows, the mean score of the control class (11.43) is rather similar to that of the experimental class (11.56). Moreover, the standard deviation of the experimental class (9.357) is also rather similar to that of the control class (9.421). The mean score of the experimental group is a little higher than that of the control group (11.56 > 11.43), but the disparity is only 0.13, and the lowest and highest scores of the two groups are quite close to each other.

On the basis of the group statistics of the pre-test, the author carried out an independent-samples t-test to further compare the mean scores of the pre-test between CC and EC. Table 4-2 shows the Sig. is 0.624, higher than 0.05, showing that the writing proficiency of the two groups has no significant difference. Thereby, the statistics in the row of "Equal variances assumed" should be observed. The Mean Difference is merely 0.338, and the Standard Error Difference is only 2.086. In addition, Sig. (2-tailed) is 0.836 (>.05), which indicates that students from both EC and CC share almost the same level of English writing proficiency before the study.
……
Conclusion
Feedback plays a key role and is quite effective in enhancing students' writing proficiency. The comparison of mean scores in the pre-test and post-test indicates that both the EG and CG make more progress in their writing after this feedback-initiated writing instruction.
Teacher feedback and peer feedback can both lead to achievements in students' writing, which means that the two kinds of feedback are both helpful and effective for promoting students' writing competence to some degree, and there is no definite answer to the research question of whether teacher feedback or peer feedback is the more effective method for enhancing students' writing ability. Teacher and peer feedback play different roles in improving students' writing. When given teacher feedback, students in the control class made greater progress in organization and content, which was different from the experimental class. The results and discussion of students' focus on the five language aspects were presented in the previous chapter. Those deep-level language aspects, like content and organization, are the weakest points for most students, especially vocational students, so the teacher is better able to point out such mistakes in depth. As for peer feedback, students may have difficulty recognizing errors in those deep-level aspects, so they pay more attention to grammar and vocabulary.
……
Reference (omitted)

Selected English Thesis Samples: Sample 3

Chapter I Introduction
1.1 The Theoretical Analytical Tool of the Thesis
Aiming to analyze the features of English advertisements, the author picks English advertisements which closely relate to people's daily life and rank first on the list of commercial advertisements as the study material, and applies thematic structure and thematic progression patterns as the theoretical tool of analysis.

Now, quite a large number of linguists have studied theme and rheme, using thematic structure and thematic progression patterns to conduct studies on particular discourses, such as novels, sports news and students' theses. Taking thematic structure and thematic progression patterns as the analytical tool can help to explore how texts are developed. Halliday, a great linguist who has made many contributions to linguistics, describes thematic structure as the "basic form of the organization of the clause as message" (Halliday 1985: 34). Each clause can be divided into a theme part and a rheme part. The relation between the themes and rhemes of a text can reveal how the text is constructed, which is known as thematic progression. Through thematic progression, the coherence of the text can be established.
……
1.2 Purpose of the Study
Through the perspective of Systemic-Functional Grammar, 42 written texts of English advertisements are taken as the corpus and their thematic structures and thematic progression patterns are analyzed one by one. The author will analyze the distribution of different themes and explore the use of the four basic thematic progression patterns in this type of advertisement, trying to answer three questions:
(1) What are the features of the usage of different themes in English advertisements?
(2) Which thematic progression is used most often, and why?
(3) What pragmatic effects do these four thematic progressions have in English advertisements?
In the whole thesis, these three questions will be answered through analysis of the particular English advertisements. Halliday's (1994) theory of thematic structure and Xu Shenghuan's (1982) four basic thematic progression patterns will be adopted as the analytical framework, the reason for which will be explained later in Chapter 2.
……
Chapter II Literature Review
2.1 Studies on Thematic Structure
The theme-rheme distinction was first described by V. Mathesius in 1939 (Hu Zhuanglin 1994: 137).
In his mother tongue, Czech, he tries to analyze sentences from the perspective of communication and function and to show how the information in a sentence is expressed. Firbas translates Mathesius' definition of theme as: "[the theme] is that which is known or at least obvious in the given situation and from which the speaker proceeds." (Martin 1992: 434) Therefore, according to him, the theme is the starting point of the message, which is known or given in the utterance and from which the speaker proceeds, while the rheme plays the role of new information, which is what the speaker says about the theme and represents the very important information that the speaker wants to convey to the hearer. In his opinion, a clause is divided into three parts: theme, rheme and transition. Of course, it is obvious that Mathesius does not use the exact expressions "theme" and "rheme".

Though Mathesius' point of view has some deficiencies, it influences Prague scholars greatly. One of his well-known followers, Firbas, proposes a view to improve the thematic theories. He believes that the theme is the part that has a lower degree of communicative dynamism in a certain context, while the rheme has a higher one. Different from Mathesius in dividing a clause into three parts (Hu Zhuanglin et al. 1989), Firbas (1992) merges the concept of transition into rheme and divides a clause into two.

Following their opinions, two groups emerged that differ from each other. One group thinks that theme is equal to "given", while the other, the Systemic School, accepts a "separating approach" which disentangles the two. The Systemic School argues that there are differences between information structure (given-new) and thematic structure (theme-rheme).
……
2.2 Studies on Thematic Progression Patterns
In discourse analysis, a sentence is understood as a message, conveying information from the speaker to the listener. It can be separated into two segments: theme and rheme. Mathesius' (1976) concept of theme and rheme led to a surge of interest in discourse analysis operated at the level of the clause. The different choices and orders of discourse themes, the mutual connection and hierarchy between themes and rhemes, as well as their relationship to the hyperthemes of the superior discourse (such as the paragraph, chapter, etc.), to the whole text or to the situation, all influence the internal structure of the text. Halliday (1985: 227) subscribes to that opinion too, stating that "the success of a text does not lie in the grammatical correctness of its individual sentences, but in the multiple relationships established among them". Therefore, thematic progression performs an important role in discourse analysis. Scholars both abroad and at home have made great contributions to the study of thematic structure together with thematic progression.
……
Chapter III Analytical Framework of the Study and Research Design
3.1 Analytical framework of the study
3.1.1 Analytical framework of thematic structure
3.1.2 Analytical framework of thematic progression patterns
3.2 Research design
3.2.1 Consideration on selecting data used in the analysis
3.2.2 Analytical procedures
3.3 Summary
Chapter IV Analysis of Thematic Structure
4.1 Some rules of identifying and counting themes
4.2 Simple theme, multiple theme and zero theme
4.2.1 Distribution of simple theme, multiple theme and zero theme
4.2.2 Data analysis
4.3 Textual theme, interpersonal theme and experiential theme
4.3.1 Distribution of three functional themes
4.3.2 Data analysis
4.4 Summary
Chapter V Analysis of Thematic Progression Patterns
5.1 Distribution of thematic progression patterns
5.2 Data analysis
5.3 Summary

Chapter V Analysis of Thematic Progression Patterns
5.1 Distribution of Thematic Progression Patterns
Before discussing the distribution of thematic progression patterns, an advertisement sample will be taken as an example, which is selected from Michelin.

Example 3:
GE (T1) is building the world by providing capital, expertise and infrastructure for a global economy (R1). GE Capital (T2) has provided billions in financing so businesses can build and grow their operations and consumers can build their financial futures (R2). We (T3) build appliances, lighting, power systems and other products that help millions of homes, offices, factories and retail facilities around the world work better (R3).

In the example given above, themes and rhemes have already been marked for convenience: T1 refers to the theme of the first clause, R1 to its rheme, and so on. The three sentences in this piece of advertisement are all concerned with the GE enterprise, although there is a slight difference among them. According to Zhu Yongsheng (1985), these themes can be seen as one and the same, and these clauses share the same theme (a toy sketch of this classification follows these excerpts).
……
Conclusion
This thesis focuses on the thematic structure and thematic progression patterns of English advertisements, aiming to find their features and favored patterns. A literature review on thematic structure, thematic progression patterns and English advertisements is made before the detailed analysis; it finds that few studies have examined advertisements from the perspective of thematic organization or through a case study of one specific kind of advertisement. Therefore, the author conducts a study of English advertisements by setting up a theoretical framework comprising Halliday's theory of thematic structure and Xu Shenghuan's classification of thematic progression patterns. With these methods, the research is carried out by investigating the statistics, and the results are given below: English advertisements prefer simpler themes, to convey information quickly and directly. Multiple themes and clauses with omitted themes are used less often and differ little from each other in number because of the unique characteristics of advertisements.
……
Reference (omitted)

Selected English Thesis Samples: Sample 4

Chapter One Introduction
1.1 Research Background
Traditional classroom English teaching can no longer meet ever-rising demands for English learning, whereas networked online English learning systems provide large amounts of continuously updated resources, break through the limits of region and time, and give students and teachers an online learning platform for use in and outside class.
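Before leaving these excerpts, the theme-rheme machinery of Sample 3 can be made concrete with a toy example. The sketch below is illustrative only: it assumes the analyst has already reduced each clause to a (theme, rheme) pair of referent labels, as in Example 3 above, and the pattern names follow common textbook usage, since the excerpt cites but does not spell out Xu Shenghuan's (1982) four patterns.

```python
# Toy sketch: classify the thematic progression between adjacent clauses
# that an analyst has already annotated as (theme, rheme) pairs.
# Pattern names are the common textbook ones, not necessarily
# Xu Shenghuan's exact taxonomy.
def progression(prev, curr):
    """prev, curr: (theme, rheme) pairs, each side a set of referent labels."""
    prev_theme, prev_rheme = prev
    curr_theme, _ = curr
    if curr_theme & prev_theme:
        return "constant theme"  # T(n+1) picks up T(n)
    if curr_theme & prev_rheme:
        return "simple linear"   # T(n+1) picks up R(n)
    return "other/derived"

# Hand-annotated referents for the three GE clauses in Example 3 above;
# per Zhu Yongsheng (1985), "GE", "GE Capital" and "We" realize one theme.
clauses = [
    ({"GE"}, {"capital", "expertise", "infrastructure"}),
    ({"GE"}, {"financing"}),               # theme "GE Capital" -> GE
    ({"GE"}, {"appliances", "lighting"}),  # theme "We" -> GE
]

for prev, curr in zip(clauses, clauses[1:]):
    print(progression(prev, curr))  # prints "constant theme" twice
```

Run as is, it prints "constant theme" twice, matching the excerpt's reading of the GE advertisement; counting such labels over a corpus of annotated clauses is exactly the distribution analysis reported in its Chapter V.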
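Likewise, Sample 1's argument about lexical diversity (raw TTR falls as texts get longer, so only a length-robust index such as the U index supports cross-text comparison) is easy to demonstrate. This is a minimal sketch, not the study's tooling, which used the Lexical Frequency Profile: the tokenizer is naive, and the "U index" is assumed here to be the Uber index, U = (log N)² / (log N − log V) for N tokens and V types, a formula the excerpt itself never states.

```python
# Minimal sketch of TTR vs. a length-robust diversity index.
# Assumption: "U index" = Uber index U = (log N)^2 / (log N - log V).
import math
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def ttr(text):
    """Type-token ratio V/N; only comparable across equal-length texts."""
    toks = tokenize(text)
    return len(set(toks)) / len(toks)

def uber(text):
    """Uber index; far less sensitive to text length than raw TTR."""
    toks = tokenize(text)
    n, v = len(toks), len(set(toks))
    if n == v:  # every token unique: denominator would be zero
        return float("inf")
    return math.log(n) ** 2 / (math.log(n) - math.log(v))

short = "the quick brown fox jumps over the lazy dog"
long_text = (short + " while the slow red fox naps under the old oak tree ") * 10

print(round(ttr(short), 3), round(ttr(long_text), 3))    # ~0.889 vs ~0.080
print(round(uber(short), 1), round(uber(long_text), 1))  # ~41.0 vs ~11.1
```

On this deliberately repetitive toy text, TTR collapses by a factor of about eleven while the Uber index falls by under four; on real compositions, which keep introducing new word types as they grow, the U index is steadier still, which is the excerpt's reason for preferring it to the TTR that blurred Wang's (2004) results.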
Interpreting Philosophy in English: 50 Questions

1. The statement "All is flux" was proposed by _____.
A. Plato  B. Aristotle  C. Heraclitus  D. Socrates
Answer: C. This question tests knowledge of the views of the ancient Greek philosophers. Heraclitus proposed that "all is flux." Option A: Plato emphasized the theory of Forms; option B: Aristotle focused on substance and form; option D: Socrates sought truth through dialogue and reflection.
2. "Know thyself" is a famous saying from _____.A. ThalesB. PythagorasC. DemocritusD. Socrates答案:D。
此题考查古希腊哲学家的名言。
“认识你自己”是苏格拉底的名言。
选项A 泰勒斯主要研究自然哲学;选项B 毕达哥拉斯以数学和神秘主义著称;选项C 德谟克利特提出了原子论。
3. Which philosopher believed that the world is composed of water?
A. Anaximenes  B. Anaximander  C. Thales  D. Heraclitus
Answer: C. This question tests the ancient Greek philosophers' views on what the world is made of. Thales held that the world is composed of water. Option A: Anaximenes held that it is air; option B: Anaximander held that it is the boundless (apeiron); option D: Heraclitus proposed that all is flux.
4. The idea of the "Forms" was put forward by _____.
A. Plato  B. Aristotle  C. Epicurus  D. Stoics
Answer: A. This question tests a concept in ancient Greek philosophy. Plato put forward the theory of Ideas, that is, the "Forms." Option B: Aristotle criticized and developed it; option C: Epicurus advocated hedonism; option D: the Stoics emphasized virtue and fate.
5. Who claimed that "The unexamined life is not worth living"?
A. Plato  B. Aristotle  C. Socrates  D. Epicurus
Answer: C.
"The Scientific Method of Inquiry": a high-school English composition

Title: The Scientific Method of Inquiry
The scientific method is a systematic approach used to investigate natural phenomena and acquire knowledge. It is a crucial part of scientific education, fostering critical thinking and intellectual curiosity among students. The process typically involves making observations, formulating questions, conducting experiments, analyzing data, and drawing conclusions.

The first step in the scientific method is making careful observations. This involves paying close attention to the surrounding environment and noticing any patterns or anomalies. Observations help to identify potential research questions and provide a foundation for further investigation.

Once observations have been made, the next step is to formulate a research question. This question should be specific, measurable, and relevant to the observations made. Formulating a clear research question helps to focus the investigation and provides a clear direction for the experiment.

After formulating a research question, the next step is to design and conduct experiments. This involves identifying variables, developing a hypothesis, and selecting appropriate equipment and materials. Experiments should be designed to test the hypothesis and provide data to support or refute the proposed explanation.

Once the experiments have been conducted, the next step is to analyze the data collected. This involves organizing and interpreting the data to identify patterns or trends. Data analysis helps to determine whether the hypothesis was supported by the evidence and whether any conclusions can be drawn.

The final step in the scientific method is drawing conclusions. Based on the data analysis, students should evaluate the hypothesis and determine whether it was supported or refuted. Conclusions should be drawn carefully, considering any limitations of the study and the potential for further research.

It is important to note that the scientific method is an iterative process. This means that the steps may need to be repeated or revised based on new observations or findings. The scientific method encourages students to question, explore, and think critically about the natural world, fostering a lifelong love of learning and discovery.

In conclusion, the scientific method is a systematic approach to inquiry that promotes critical thinking and intellectual curiosity. By making observations, formulating questions, conducting experiments, analyzing data, and drawing conclusions, students can acquire knowledge and gain a deeper understanding of the natural world. The scientific method is an essential tool for scientific education, empowering students to explore, discover, and make meaningful contributions to the field of science.
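The "analyze data" and "draw conclusions" steps above are where statistics enters the picture, and they connect back to the SPSS t-test quoted in the thesis excerpts earlier (Sig. (2-tailed) = 0.836 > .05, hence no significant difference). Below is a hedged illustration of that kind of test: the scores are invented for the example, and scipy's `ttest_ind` with `equal_var=True` is the pooled-variance test corresponding to SPSS's "Equal variances assumed" row.

```python
# Toy illustration of the "analyze data" step: an independent-samples
# t-test on two groups' scores. All numbers are invented.
from scipy import stats

control = [10, 12, 11, 13, 12, 10, 11, 12]       # hypothetical scores
experimental = [11, 13, 12, 12, 14, 11, 12, 13]  # hypothetical scores

# equal_var=True -> pooled-variance t-test, i.e. the
# "Equal variances assumed" row in SPSS output.
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=True)

print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# Decision rule at alpha = .05: a p-value above .05 means the group
# means are not significantly different -- the same reading the thesis
# excerpt gives to its Sig. (2-tailed) = 0.836.
print("significant" if p_value < 0.05 else "not significant at alpha = .05")
```

Whether the hypothesis survives such a test then feeds the "draw conclusions" step, and, since the method is iterative, possibly a new round of observation.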
Fujian Province, Putian No. 25 Middle School, 2024-2025 Academic Year, Senior Three First-Semester Midterm English Exam

I. Reading Comprehension

The Best Caves in the World

Hang Son Doong, Vietnam
Natural caves don't come much larger than Hang Son Doong, close to the border between Laos and Vietnam. This cave possesses the largest cross-section of any known cave on the planet, a vast area that is difficult to describe. Supposedly, a Boeing 747 could fly through without damaging its wings, but that doesn't really do justice to the vastness of Hang Son Doong. The stalactites here are pretty massive too, with some reaching up to 80 metres.

Waitomo Caves, New Zealand
Glowworms are there, as far as the eye can see. Okay, not literally, but the Waitomo Cave system on New Zealand's North Island is best known for the fluorescent fauna that light up the walls, giving it the not-particularly-creative but completely acceptable "Glowworm Caves" nickname. They are more accessible than other caves on this list, with rafting and adventure tours available to those looking for something a little more thrilling.

Mammoth Cave, the USA
If you have certain expectations from somewhere called "Mammoth Cave", that is entirely understandable. Mammoth Cave in Kentucky is the world's longest known cave system, an incredible 420 miles of underground wonder. That's twice as long as the next longest, by the way, although it isn't unusual for the USA to go all out on such things.

Reed Flute Cave, China
Named after the reeds that grow outside, which are used to make flutes, obviously, the Reed Flute Cave's walls are covered with inscriptions from centuries gone by—if evidence was needed that people have been paying attention to this place for a long old time. The inside of the cave is also lit up by multicoloured lights, giving it a real otherworldly theme that adds weight to the nickname.

1. What is special about Hang Son Doong?
A. It was once a base of a factory.
B. It is the deepest cave in the world.
C. It has the highest stalactites in the world.
D. It owns the largest cross-section in the world.

2. Which of the following can be much easier to enter?
A. Hang Son Doong.  B. Waitomo Caves.  C. Mammoth Cave.  D. Reed Flute Cave.

3. Which country probably has the longest cave system in the world?
A. Vietnam.  B. New Zealand.  C. The USA.  D. China.

Food vlogger Julia Pacheco undertook an ambitious challenge to see if she could sustain herself on just $10 for an entire week, covering all meals from breakfast to dinner. The aim was not only to test the feasibility of such a tight budget but also to explore how it would affect her overall well-being by the end of the week.

Julia's journey began with a strategic shopping trip to Walmart, where she carefully selected key ingredients to maximize both her budget and nutritional intake. This initial phase set the tone for the week, showcasing the importance of planning in budget-friendly eating.

Julia's grocery list included affordable staples like pasta, brown rice, mixed vegetables, bread, lentils, pinto beans, and some fresh produce such as apples, tomatoes, onions, and garlic. These items were chosen for their versatility and nutritional value. The total cost of these groceries perfectly hit the $10 mark, setting the stage for a week of simple yet thoughtful meal planning. This careful selection was crucial, as it laid the foundation for her entire week's living.

For the first five days, the mum's breakfast routine consisted of oatmeal flavoured with apple. This choice was not only cost-effective but also provided a warm, hearty start to her day.
On the last two days, Julia switched things up by having boiled eggs on toast, adding variety within budget. These breakfast options demonstrated that even on a tight budget, one could enjoy a wholesome and satisfying start to the day.

By cooking large portions and storing them for later, she minimized waste and ensured she always had a meal ready, reducing the desire to snack unnecessarily. This approach also highlighted the importance of planning and preparation when working with a limited budget.

Throughout the week, she felt full and content, proving that it's possible to maintain a healthy diet even on a tight budget. Julia's experience showed the potential to eat well with limited financial resources.

4. Why did Julia undertake the $10 challenge?
A. To see if she could survive on a strict budget.
B. To develop budget-friendly eating habits.
C. To test the quality of food at Walmart.
D. To promote a new way of living.

5. What can we know about the grocery list that Julia chose?
A. It fitted her budget and nutrition.
B. It was full of her favourite staples.
C. It was too complicated.
D. It was out of a random choice.

6. How did Julia manage to reduce waste during her challenge?
A. By skipping breakfast.
B. By preparing fewer staples.
C. By cooking more food each time.
D. By snacking unnecessarily.

7. How was Julia's overall experience during the challenge?
A. She struggled with her budget.
B. She found it tough to continue.
C. She ate well and felt satisfied.
D. She suffered hunger sometimes.

We all notice bright colors. People who choose to go eye-catching, whether they express themselves through clothes or accessories, hear everything from "No one is going to miss you at the party" to "I would never have the courage to wear that." But according to research, those comments may be both accurate and expected.

Adam D. Pazda and Christopher A. Thorstenson (2019) examined how we perceive, at first impression, people who wear bright colors. They specifically examined the effect of chroma. They found that targets, both male and female, who were wearing or surrounded by high-chroma colors were perceived as more open and outgoing than in a low-chroma setting. They concluded that chroma is a variable of perception that can influence first impressions of personality.

Drilling down further, they found that high-chroma colors strengthened viewer perceptions of openness and extraversion, but not of other personality traits. These observations are important because some job responsibilities capitalize on some of the personality traits inferred from bright colors.

Pazda and Thorstenson recognize what job seekers no doubt consider as they look for a career to match their personal nature: in some occupations, success is fueled by possessing certain personality qualities. They give examples of industries such as sales and marketing as well as customer service as fields where extraverts thrive. Accordingly, applicants for these positions may be viewed more favorably and judged as more competent if they wear highly chromatic clothing.

Regarding the generality of their results, Pazda and Thorstenson note that one of the limitations of their study was their use of participants living in the United States, which means their findings may not predict results in other cultures. They note the possibility that chroma may influence the perception of personality differently in non-Western countries, and that high-chroma clothing may be perceived as at odds with social norms in other cultures.
The practical takeaway, at least in the United States, appears to be that bright colors, like the peacock's tail, will get you noticed. But depending on your goals, consider tailoring your chroma to the circumstances, personally and professionally.

8. What is the focus of the study mentioned in the passage?
A. The cultural implications of high-chroma colors.
B. The influence of clothing on viewer perceptions.
C. The connection between clothing and job suitability.
D. The impact of high-chroma colors on first impressions.

9. What does the underlined phrase "capitalize on" in paragraph 4 probably mean?
A. Draw on.  B. Approve of.  C. Subscribe to.  D. Dig up.

10. Which might be a limitation of the study?
A. The culturally specific findings.
B. The unmonitored research process.
C. The outdated data analysis methods.
D. The relatively insufficient theoretical basis.

11. What is the practical advice given by the author in the last paragraph?
A. Reserve bright colors for social events.
B. Always wear bright colors to be noticed.
C. Avoid bright colors in professional settings.
D. Use bright colors strategically based on your goals.

In recent decades, experiments have begun to catch up with what people who work closely with animals have always known—that animals have an inner life, and consciousness isn't uniquely human.

Consciousness is a concept that is extremely difficult to define. There have been many attempts: is it awareness, or awareness of that awareness, or self-awareness instead? But a useful working definition might be that it is any kind of subjective experience, ranging from how we perceive the external world to our inner thoughts and emotions. Because you can never be inside another living being's head, questions of consciousness are both hard to answer and open to bias.

Findings of experiments inspired a group of scientists in April to write The New York Declaration on Animal Consciousness, which now has over 300 supporters. It states that there is "strong scientific support for conscious experience" in mammals and birds and "at least a realistic possibility of conscious experience" in fish and other species.

That animals have some form of inner life must surely be self-evident to many people who live or work with them, just as I would guess that most carers of newborn babies don't see these infants as senseless automatic machines. The experiences of people with thorough knowledge of either have, historically, been viewed as subjective and biased, as emotional connection tends to influence logical reasoning. Our consciousness leads us to over-empathize with others we cannot truly know, the argument goes.

But, as the biologist Marc Bekoff wrote, if we humans have something, then other animals are likely to have it too. I personally feel that attempts to divorce emotion, feeling and experience from how we see animals can be just as unscientific. For too long, we assumed that humans are unique and animals don't feel pain or emotions the way that we do, a convenient but cruel null hypothesis, when we could have started from the position that perhaps they do instead.
12. Which is a key characteristic of consciousness according to the passage?
A. It means any emotional experience of humans.
B. It refers to an individual's subjective experience.
C. It is all about how we perceive the external world.
D. It refers to a common quality shared by all animals.

13. Why are the carers of newborn babies mentioned?
A. To show that animals are just as conscious as human babies.
B. To help readers understand why animals possess consciousness.
C. To argue against the view of people living or working with animals.
D. To explain why animal carers would assume animals have an inner life.

14. What might be the author's attitude towards Marc Bekoff's assumption about animal emotions?
A. Doubtful.
B. Objective.
C. Supportive.
D. Uncertain.

15. Which can be the best title of the passage?
A. Consciousness Improved Through Practice
B. Questions of Human Consciousness Answered
C. Factors Affecting Animal Consciousness Discovered
D. Conscious Experience Found in Certain Animal Species

Rejection creates an emotional roller coaster. We feel the sadness, anger, and loss of the person who rejects us, and also our self-respect is hurt. 16 While there is no quick fix, the following suggestions can help you overcome the confusion and ease the pain of rejection.

17 People tend to avoid their pain. There is no healing without feeling your feelings. There is no healing without pain. You must go through the feelings to heal. Let yourself feel the loss and the pain and express your difficult emotions. You should know that you are strong enough and can face pain with courage to heal and grow.

Take it one moment at a time. It takes time to heal and for the hurt to lessen. In the abyss (深渊), it is hard to think too far, so be patient during this difficult period. Unfortunately, today, we live in the fast lane. We don't have patience and we want things to happen immediately. Patience is a lost art. 18

Develop gratitude. Gratitude is a cure for pain. Pain is about what we are missing. Gratitude is about what we are having. If we focus on the negative, we will see the negatives. 19 Those who show gratitude, develop a positive attitude, and appreciate their situation minimize their pain and create more happy moments in their lives.

Getting over a rejection is hard. 20 It is there to make you grow. Everyone heals from rejection at their own pace in their own way. We must sympathize with ourselves and accept that reality. It's important to take your time to deal with rejection and to use the practices suggested to heal and move on to the new opportunities life presents us. It won't be easy, but it will be worth it.

A. Feel it to heal it.
B. Don't take it too personally.
C. However, your pain won't last forever.
D. So, even if you are broken, keep busy.
E. With time, the sense of loss and hurt will ease.
F. This is why getting over a rejection is challenging.
G. If we focus on the positive, we will see the positives.

Part II: Cloze

A young woman, swept out to sea while swimming at a beach in China, has been rescued 37 hours later, having drifted (漂流) more than 50 miles in the Pacific Ocean. 21 as an Australian national in her 20s, the woman's 22 ended thanks to the combined efforts of China's coast guard and two ships.

The dramatic rescue began after the woman's friend reported her 23 Monday night while they were swimming at the beach. It is 24 that she was swept out to sea by a powerful 25 . China's coast guard launched an extensive search operation, searching the waters for any sign of her.

Noticeably, about 36 hours later, a cargo ship 26 the woman adrift in the ocean.
The ship immediately 27 a passing LPG tanker (油轮), the He Ping No. 8, for assistance. Demonstrating 28 bravery, two crew members of the tanker jumped into the rough 6.5-foot waves to rescue her.

The crew members recalled 29 encouragement to the woman, urging her not to give up as she struggled to stay afloat. They tied a 30 around her, and with the help of their fellow crew members, successfully pulled her to safety aboard the tanker. She was then 31 by a coast guard helicopter to shore.

Despite her hardship, the woman was found to be in good health, though 32 dehydrated (脱水). Experts noted that she was extraordinarily 33 to survive, considering the 34 dangers of heat stroke, hypothermia (低体温) at night, or even being struck by a ship in the dark.

Zhang Siqi, a senior member of the Society of Water Rescue and Survival Research, described the woman's 35 as "a miracle" during a televised interview.

21. A. Identified B. Considered C. Introduced D. Employed
22. A. ambitions B. memories C. sufferings D. discoveries
23. A. excitement B. assistance C. disappearance D. entertainment
24. A. believed B. expected C. remembered D. proved
25. A. ship B. hand C. foot D. current
26. A. spotted B. heard C. felt D. watched
27. A. ran into B. called upon C. stuck to D. learned from
28. A. unsung B. unnecessary C. incredible D. intentional
29. A. referring B. advising C. spreading D. shouting
30. A. chain B. rope C. ring D. scarf
31. A. targeted B. airlifted C. driven D. caught
32. A. truly B. usually C. surely D. slightly
33. A. safe B. likely C. brave D. fortunate
34. A. preventable B. influential C. potential D. occasional
35. A. effort B. survival C. struggle D. spirit

Part III: Grammar Fill-in
Read the passage below and fill each blank with one appropriate word or the correct form of the word given in brackets.
Mining the Web for Answers to Natural Language Questions

Dragomir R. Radev, Hong Qi, Zhiping Zheng, Sasha Blair-Goldensohn, Zhu Zhang, Weiguo Fan, John Prager
School of Information, University of Michigan, Ann Arbor, MI 48109
Department of EECS, University of Michigan, Ann Arbor, MI 48109
School of Business, University of Michigan, Ann Arbor, MI 48109
IBM TJ Watson Research Center, Hawthorne, NY 10529
radev, hqi, zzheng, sashabg, zhuzhang, wfan@ jprager@

ABSTRACT
The web is now becoming one of the largest information and knowledge repositories. Many large-scale search engines (Google, Fast, Northern Light, etc.) have emerged to help users find information. In this paper, we study how we can effectively use these existing search engines to mine the Web and discover the "correct" answers to factual natural language questions. We propose a probabilistic algorithm called QASM (Question Answering using Statistical Models) that learns the best query paraphrase of a natural language question. We validate our approach for both local and web search engines using questions from the TREC evaluation. We also show how this algorithm can be combined with another algorithm (AnSel) to produce precise answers to natural language questions.

1. INTRODUCTION
The web is now becoming one of the largest information and knowledge repositories. Many large-scale search engines (Google, Fast, Northern Light, etc.) have emerged to help users find information. An analysis of the Excite corpus [9] of 2,477,283 actual user queries shows that around 8.4% of the queries are in the form of natural language questions. The breakdown of these questions is as follows: 43.9% can be counted as factual questions (e.g., "What is the country code for Belgium") while the rest are either procedural ("How do I ...") or "other" (e.g., syntactically incorrect). A significant portion of the 91.6% that are not in the form of natural language questions were still generated by the user with a question in mind.

Traditional information retrieval systems (including modern Web-based search engines as mentioned above) operate as follows: a user types in a query and the IR system returns a set of documents ordered by their expected relevance to the user query and, by extension, to the user's information need. This framework suffers from two problems: first, users are expected to follow a specific engine-dependent syntax to formulate their information need in the form of a query, and second, only a small portion of each document may be relevant to the user query. Moreover, a study of search engine capabilities to return relevant documents [22] when the query is in the form of a natural language question shows that search engines provide some limited form of processing of the question, namely removing stop words such as "who" and "where". Unfortunately, in factual question answering, such words can indicate the type of the answer sought, and therefore simple removal may lead to lower accuracy of the search results.

We address these two problems in the context of factual, natural language question answering. In our scenario, when a user types in factual natural language questions such as "When did the Neanderthal man live?" or "Which Frenchman declined the Nobel Prize for Literature for ideological reasons?", he/she will expect to get back the precise answer to these questions rather than a set of documents that simply contain the same keywords as the questions.
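Before the formal treatment below, the following toy sketch (ours, not the paper's implementation) illustrates the kind of question-to-query conversion at stake. The operator behavior and the "Who wrote King Lear" example mirror ones discussed later in the paper; the WH_HINTS lookup table is a hypothetical simplification.

    # Illustrative sketch (not the authors' code): turning a factual NL question
    # into a keyword query by applying simple, engine-independent operators.
    # WH_HINTS is an assumed mapping from question words to answer-type hints.

    WH_HINTS = {"who": "author", "where": "location", "when": "date"}

    def question_to_query(question: str) -> str:
        words = question.rstrip("?").split()
        wh = words[0].lower()
        query_terms = []
        # Preserve the answer-type hint before deleting the wh-word
        # (an INSERT followed by a DELETE, in the paper's terminology).
        if wh in WH_HINTS:
            query_terms.append(WH_HINTS[wh])
            words = words[1:]
        # BRACKET: quote consecutive capitalized words as an immutable phrase.
        phrase = []
        for w in words:
            if w[:1].isupper():
                phrase.append(w)
            else:
                if phrase:
                    query_terms.append('"' + " ".join(phrase) + '"')
                    phrase = []
                query_terms.append(w)
        if phrase:
            query_terms.append('"' + " ".join(phrase) + '"')
        return " ".join(query_terms)

    print(question_to_query("Who wrote King Lear?"))  # -> author wrote "King Lear"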
We make two assumptions here: one, users should not have to learn idiosyncratic search engine query syntax to ask a factual question; two, a document returned by a search engine may contain only a small portion that is relevant to the user question. It is therefore important to allow users to type questions as they see fit and only get the answer to their questions rather than the full document that contains it.

We introduce a probabilistic algorithm for domain-independent natural language factual question answering, QASM, which converts a natural language question into a search engine specific query. In our approach, we view question answering and the related problem of natural language document retrieval as instances of the noisy channel problem [4]. We assume that there exists a single best query Q that achieves high precision and recall given a particular information need, a particular search engine, and a particular document collection. The query Q is then transformed into a grammatical natural language question N through a noisy channel by the new proposed algorithm. Our goal is, given N, to recover the original query Q from the space of all possible queries that can be generated from N using a limited sequence of linguistically-justified transformation operators.

QASM is based on expectation maximization (EM) and learns which paraphrase from that space achieves the highest score on a standardized benchmark. That paraphrase is then assumed to be the closest approximation to the original query Q. The algorithm makes use of a moderate number of labeled question-answer pairs for bootstrapping. Its generalization ability is based on the use of several classes of linguistic (lexico-semantic and collocational) knowledge. We have incorporated QASM in a system which is used to answer domain-independent natural language questions using both a local corpus and the Web as knowledge sources.

1.1 Question answering
The problem of factual question answering in the context of the TREC evaluation is best described in [24]. The goal is to return the most likely answers to a given question that can be located in a predefined, locally accessible corpus of news documents. To that end, most systems need to perform the following steps: query analysis (to determine, for example, that a "who" question is looking for a person), document retrieval (to establish which documents are likely to contain the answer to the question), document analysis (to determine which portions of the document are relevant), and answer selection (return a short string containing the most likely correct answer from all retrieved documents). Note also that the questions used in the TREC evaluation are guaranteed to have at least one answer in the corpus, while in general no question is guaranteed to have an answer on the Web.

The SMU system [11] makes significant use of knowledge representation techniques such as semantic unification and abduction to retrieve relevant answers. For example, it is able to answer the question "Who was the first Russian astronaut to walk in space" by combining knowledge from the fact that the first person to walk in space was Leonov and from the fact that Leonov is Russian.

The IBM project [21, 23] requires that the corpus be tagged with semantic tokens called QA-tokens. For example, "In the Rocky Mountains" is tagged as PLACE$, while "The US Post Office" is tagged as ORG$. When a question is processed by the search engine, the QA tokens are passed along with the words in the query in order to enhance retrieval performance. This process requires that the entire corpus be pre-processed with QA-token information.
We claim that it is unrealistic to rely on semantic tagging or deep semantic understanding of both question and source documents when the QA problem is moved from a pre-defined, locally accessible corpus to the Web. It can be argued that a search engine may indeed annotate semantically all pages in its index; however, that process is quite expensive in terms of processing time and storage requirements, and it is clearly preferable not to have to rely on it. In other words, we are decoupling the problem of natural language document retrieval from the problem of answer selection once the documents have been found. In that case, only the top-ranked retrieved documents need to be annotated. We need to note here that the service AskJeeves is not a QA system. It does process natural language queries but it returns sites that may be relevant to them without trying to identify the precise answers.

In this paper we will describe an algorithm for domain-independent factual question answering that does not rely on a locally-accessible corpus or on large-scale annotation. A system based on this algorithm thus provides the documents that contain the answers and needs to be connected to a component like AnSel to actually extract these answers. Note that AnSel only needs to annotate a small number of documents retrieved by QASM, and thus the problem of annotating the whole Web disappears. Figure 1 shows some sample questions that our system can handle. [Figure 1: sample questions and answers.]

In the noisy channel framework, a source string S is converted into a target string T, and the goal becomes to find the value of S that maximizes P(S|T). The string S can also be thought of as the original message that got somehow scrambled in the noisy channel and converted into the target string T. Then the goal of translation modeling becomes that of recovering S. Since the space of all possible strings S is infinite, different heuristic techniques are normally used. We will discuss some of these techniques below.

Statistical translation models originate in speech processing (see [12] for an overview), where they are used to estimate the probability of an utterance given its phonetic representation. They have also been successfully used in part-of-speech tagging [7], machine translation [3, 5], information retrieval [4, 20], transliteration [13], and text summarization [14]. The reader can refer to [15] for a detailed description of statistical translation models in various applications.

In statistical machine translation (SMT), there exist many techniques to convert a string from the source language to the target language. For example, IBM's model three [6] makes use of the translation, fertility, and swap operators. Translation probabilistically produces a word in the target language that is among all possible translations of a given word in the source language. For example, the probability distribution for the English word "the" may concentrate most of its mass on the French articles "le" and "la", while the sum of all other probabilities from "the" is equal to .14. Fertility produces a variable number of words, e.g., the English word "play" may produce one word in French ("jouer") or three words ("pièce de théâtre"). Swap produces words in different positions from the positions where they appear in the source language. The goal of SMT is then to find, out of the infinite number of possible transformations, the one most likely to have been generated from the source string. SMT makes use of an intermediate (hidden) structure, the so-called alignment between the source and target string. Given that there may be many ways of producing the same target string from a given source string, the alignment specifies the exact transformations that were undertaken to produce the target string.
For example, "le petit garçon" may have been produced from "the little boy" by translating "le" into "the" and "petit" into "little", or by translating "le" into "little" and "petit" into "the" and then swapping the two words. The alignment is hidden because the system can only see a pair of French and English sentences that are translations of one another and has no information about the particular alignment. Obviously, all possible alignments of two given strings are not equally likely; however, it is practically more accurate to think about the translation process in such a way (see [3, 6] for more details) and then employ parameter estimation techniques to determine the probabilities of applying given operators on particular input strings.

In question answering, the source string is the query Q that produces the best results, while the target string is the natural language question N. Obviously, there is an infinite number of queries that can be generated from a natural language question. These paraphrases are typically produced through a series of transformations. For example, the natural language question "Who wrote King Lear" can be converted into the query (wrote author) "king lear" by applying the following operators: (1) bracket "King" and "Lear" together to form an immutable phrase, (2) insert the word "author" as an alternative to "wrote", and (3) remove the question word "who" (note that before removing that word, the information that it conveys, namely the fact that the expected answer to the question is a person, is preserved through the use of the "insert" operator, which adds a person word "author" to the query). The problem of question answering using a noisy channel model is reduced essentially to the problem of finding the best query which may have been generated from N. We will discuss in the next section what "best" means.

We have identified the following differences between statistical machine translation and question answering (QA):

1. In QA, the swap operator is not particularly needed, as typical search engines give the same hits regardless of the order of the query terms.
2. Since swaps are not allowed, alignments are simpler in QA and the process of parameter estimation is simpler.
3. In QA, the source and target language are essentially the same language. The only difference is the addition of logical operators, parentheses, and double quotes in the queries.
4. The generation of queries in QA is much more robust than translation, in the sense that the performance of a query typically degrades gracefully when an operator is applied to it, while a correct translation can immediately become a horrible one when two words are swapped.
5. Since queries don't need to be grammatical English sentences, there is no need to use a language model (e.g., a bigram model) to determine the correctness of a query. On the other hand, this places an additional burden on the translation model, given that in SMT, the language model prevents a high-probability but ungrammatical translation from being produced.
6. SMT is trained on a parallel corpus of aligned sentences with hidden alignments. QA needs a parallel corpus of questions and answers, while the actual queries that can produce documents containing the answers are hidden.

2. THE QASM ALGORITHM
The QASM algorithm (Question Answering using Language Modeling) is based on the premise that it is possible to select the best operator to apply on a particular natural language question (or query).
That operator will produce a new query which is "better" than the one from which it was generated, according to some objective function such as precision or recall. We will later define some empirically justified objective functions to compare query paraphrases. In a sense, the problem of finding the best operator given a query paraphrase is a classification problem. The class of a paraphrase is the same as the class of all equivalent paraphrases and is the operator that would most improve the value of the objective function when applied to the paraphrase. For example, the class of the paraphrase "who wrote King Lear" may be "delete-wh-word" if the resulting query "wrote King Lear" is the best of all possible queries generated by applying a single operator to "who wrote King Lear". Note that we make a distinction between the composite operator that converts a question to a query through a series of steps and the atomic operators that compose each step. For practical reasons, it is more feasible to deal with atomic operators when training the system.

In other words, we need to build a classifier which decides what operator is best applied on a given question. In practice, we need to decompose the problem into a series of smaller problems and to produce a sequence of paraphrases q0, q1, ..., qn with the following properties:

1. The first paraphrase q0 is the same as the natural language question N.
2. Each subsequent paraphrase q(i+1) is generated from qi using a single atomic operator. Note that operators have to be unambiguous given an arbitrary query. For example, "bracket" is ambiguous, because in a general query, many different subsequences can be bracketed together. On the other hand, "bracket the leftmost noun phrase in the query" is unambiguous.
3. If F is the objective function that determines how good a paraphrase is (we will call this function the fitness function, borrowing a term from the evolutionary computation literature), then we want q(i+1) to be chosen from among all possible paraphrases p of qi generated using a single operator, so that F(q(i+1)) >= F(p) for every such paraphrase p.
4. The sequence of operators is interrupted when the chosen operator is the identity operator. The identity operator has the following property: applying it leaves the paraphrase, and hence its fitness, unchanged. In other words, when no atomic operator can improve F, the process stops. Note that other stopping conditions (e.g., stability of the probability matrix) are also possible.

Two problems need to be resolved at this stage. First, it is obvious that the sequence of operators depends on the initial query q0. Since there is an infinite number of natural language questions, it is necessary to perform some sort of smoothing.
In other words, each question (and by extension, each query) has to be converted to a representation that preserves some subset of the properties of the question. The probability of applying a given operator on a question will depend on its representation, not on the question itself. This way, we avoid maintaining an operator probability distribution for each natural language question. In the following section, we will discuss a particular solution to the representation problem.

Second, we need to learn from a set of examples the optimal operator to apply given a particular paraphrase. The decomposition of transformation operators into atomic operators such as "insert" or "bracket" significantly reduces the complexity of finding the right operator to apply on a given question.

Since it is very expensive to produce a large training corpus of pairs of questions and their best paraphrases, we have to resort to an algorithm that is stable with regard to missing data. Such an algorithm is the expectation maximization (EM) algorithm.

2.1 The EM algorithm
The EM algorithm [8] is an iterative algorithm for maximum likelihood estimation. It is used when certain values of the training data are missing. In our case, the missing values are the paraphrases that produce the best answers for given natural language questions. We only have question-answer pairs but no paraphrases. In other words, the known variables are the scores for each operator; the hidden variables are the probabilities for picking each operator.

The EM algorithm uses all the data available to estimate the values of the missing parameters. Then it uses the estimated values to improve its current model. In other words, the EM algorithm works as follows: first, it seeds the parameters of the model with some reasonable values (e.g., according to the uniform distribution). Then it performs the following two steps repeatedly until a local maximum has been reached.

E-step: use the best available current classifier to classify some data points.
M-step: modify the classifier based on the classes produced by the E-step.

The theory behind EM [8] shows that such an algorithm is guaranteed to produce increasingly better models and eventually reach a local maximum.

2.2 Generic operators
We now need operators that satisfy the following criteria: they must be easy to implement, they must be unambiguous, they must be empirically justified, and they must be implemented by a large number of search engines. A list of such generic operators follows (see also Figure 2). We call them generic because they are not written with any particular search engine in mind. In the following section, we will discuss how some of these operators can be operationalized in a real system.

1. INSERT, add a word or phrase to the query (similar to the fertility operator in SMT),
2. DELETE, remove a word or phrase ("infertility" operator),
3. DISJUNCT, add a set of words or phrases in the form of a disjunction,
4. REPLACE, replace a word or phrase with another,
5. BRACKET, add double quotes around a phrase,
6. REQUIRE, insist that a word or a phrase should appear (most search engines provide such an operator to explicitly include a word in the query),
7. IGNORE, insist that a word or phrase should not appear; for example, the query cleveland -ohio will return documents about President Cleveland and not about the city of Cleveland, Ohio,
8. SWAP, change the order of words or phrases,
9. STOP, this is the identity operator,
10. REPEAT, add another copy of the same word or phrase.

2.3 The probabilistic generation model
We will now turn to the need for question and query representations. The space of questions is infinite, and
therefore any classification algorithm must use a compact representation of the questions. An empirical analysis [22] shows that certain features of questions interact with the scores obtained from the search engines and that questions with the same features tend to be treated similarly. In the next section we will discuss the particular features that we have implemented. At this moment, we only want to specify that all questions with the same feature representation will be treated in the same way by our algorithm. We will call the state of all questions with the same values for all features the context of the query.

Figure 2: Sample operators.

The model contains the probabilities of applying a given operator in a given context, represented in a two-dimensional probability matrix M. Given a state (context), we can determine the most likely operator. Later in this section we will discuss our algorithm for learning the values of the matrix M. Note that M has one row per state and one column per operator, and the operator probabilities in each row (state) sum to 1.

2.4 Generating query paraphrases
In order to learn the matrix M, we need to be able to produce different paraphrases of a given natural language question. We use the question N as the initial seed query q0, and then apply operators to produce subsequent paraphrases from it.

2.5 Evaluating paraphrase strength
At each iteration, we must determine which operator is the best. We need to define a fitness function. In general, such a fitness function can be one of a large number of information-theoretic measures such as entropy or perplexity. It can also be a metric from information retrieval such as precision (accuracy) or recall (coverage). In the next section, we will introduce TRDR, a metric that we found particularly useful.

2.6 The QASM algorithm
We will now introduce the QASM algorithm. It is shown in Algorithm 1. Some notational clarifications: the state represents the context of a question; M is the probabilistic model that determines the probability of applying a given operator on a question that is represented in a particular state. The initialization step shown in the figure can be replaced with one that uses an appropriate prior distribution other than the uniform distribution. After each iteration of the EM algorithm, the query is modified using an operator generated probabilistically from the current distribution for the state where the current paraphrase belongs. The global stop criterion is a function of the change in the probability matrix.

2.7 Decoding algorithm
Once the probabilistic model has been trained, it can be used to process unseen questions. The decoding algorithm (to borrow another term from SMT) is shown in Algorithm 2:

    initialize q0 with N; set i = 0
    repeat
        extract documents that match qi
        compute paraphrase fitness for all candidate operators
        if the fitness cannot be improved then
            next iteration
        end if
        pick an operator according to M
        recompute the context
        rerank operators based on fitness
        readjust/normalize M based on the reranking
        apply the chosen operator on qi to produce q(i+1)
        increment i
    until the stop criterion is met

Algorithm 2: Decoding algorithm

3. IMPLEMENTATION AND EXPERIMENTS
We now describe the operationalization of the QASM algorithm in our system. We will address the following issues: choice of operators, training and test corpora of questions and answers, choice of document corpora and search engine, question preprocessing, etc.

3.1 Specific operators
We note here that the operators that we discussed in the previous section are tricky to implement in practice because they are ambiguous. For example, to "insert" a word in a query, we have to know what word to insert and where to insert it. Another observation is that there exist many "bases" of atomic operators that can be used to generate
arbitrary (but reasonable, that is, likely to produce superior queries at least occasionally) paraphrases. We should note here that, traditionally, the idea of adding words to a query is called "query expansion" and is done either manually or automatically (in which case it is called "pseudo relevance feedback" [18]).

We have chosen a "basis" composed of 15 operators, grouped into four categories: DELETE, REPLACE, DISJUNCT, and OTHER. We use five DELETE operators (delete all prepositions, delete all wh-words, delete all articles, delete all auxiliaries, and delete all other stop words based on a list of 163 stop words). We also use four REPLACE operators (replace the first noun phrase with another, replace the second noun phrase, replace the third noun phrase, and replace the first verb phrase in the question). Note that by specifying which exact terms we will replace, we are addressing the "where" question. The four DISJUNCT operators are similar to the REPLACE operators, except that the new words and phrases are disjoined using OR statements. Finally, the IDENTITY operator doesn't modify the query.

When deciding what word to insert with the REPLACE and DISJUNCT operators, we use two alternative approaches. In the first case, we look at the first three synonyms and the closest hypernym of the first sense of the word (in the correct part of speech: verb or noun) based on the WordNet database [17]. The second case uses distributional clusters of fixed size of the words [19]. We don't have room here to describe the implementation of the distributional algorithm, but here is the basic idea: if two words appear in the same context (looking at words in a window) in a large corpus, they are considered to be distributionally similar. As an example, "professor" and "student" are not distributionally similar because they tend to appear together, while "book" and "essay" are, since they are in complementary distribution in text. Figures 3 and 4 show up to five words or phrases similar to the word in the left column.

Figure 3: Related words produced by WordNet.
    computer: machine, expert, calculator, reckoner, figurer
    politician: leader, schemer

Figure 4: Related words produced by distributional similarity.
    book: autobiography, essay, biography, memoirs, novels
    fruit: leafy, canned, fruits, flowers, grapes
    newspaper: daily, globe, newspapers, newsday, paper

Note that at this point, our implementation doesn't include the following operators: IGNORE, REPEAT, and SWAP. We implemented BRACKET in the preprocessing stage.

3.2 Fitness functions
Instead of precision and recall, we use total reciprocal document rank (TRDR). For each paraphrase, the value of TRDR is the sum of the reciprocal ranks of all correct documents among the top 40 extracted by the system: TRDR = Σ 1/ri, where ri is the rank of the i-th correct document.
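As a concrete reading of this definition, here is a small sketch (ours, not the authors' code) of the TRDR computation; relevant_ranks holds the 1-based ranks of the retrieved documents that contain a correct answer.

    def trdr(relevant_ranks, cutoff=40):
        """Total reciprocal document rank: the sum of 1/rank over all correct
        documents appearing at or above the cutoff (ranks are 1-based)."""
        return sum(1.0 / r for r in relevant_ranks if r <= cutoff)

    # The worked example that follows: correct answers at ranks 2, 8, and 10.
    print(trdr([2, 8, 10]))  # 0.725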
For example, if the system has retrieved 10 documents, of which three (the second, eighth, and tenth) contain the correct answer, the TRDR for that given paraphrase is 1/2 + 1/8 + 1/10 = 0.725.

Figure 5: Example of the operation of QASM.
    0. (initial) What country is the "biggest producer" of tungsten
    1. DEL: country is the "biggest producer" of tungsten
    2. DEL: What country the "biggest producer" of tungsten
    3. DEL: What country is "biggest producer" of tungsten
    4. DEL: What country is the "biggest producer" tungsten
    5. DEL: What country is the "biggest producer" of tungsten
    6. REPL: What ("administrative district" OR "political unit" OR "rural area") is the "biggest producer" of tungsten
    7. REPL: What country is the "biggest producer" of ("

The next paraphrase is generated according to the new probability distribution. Obviously, at this stage, all operators still have a chance of being selected, as no fitness score was zero. In this example, operator 0 (IDENTITY) was selected. The new state is the same as the previous one: ("LOCATION", 8, 0). The probability distribution has now changed. Since the fitness values in the second iteration will be the same as in the first, the probabilities will be readjusted once more in the same proportion as after the first iteration. After the second EM iteration, the probabilities for state ("LOCATION", 8, 0) are as follows: 0.0374, 0.1643, 0.0932, 0.1100, 0.1100, 0.0374, 0.0745, and 0.1074. After five iterations in the same state they become 0.0085, 0.3421, 0.0830, 0.1255, 0.1255, 0.0085, 0.0473, and 0.1184. After 23 iterations in the same state, the probability of applying operator 2 is 0.9673, while all 14 other probabilities tend to 0.

If we allow QASM to pick each subsequent operator according to M, we observe that the sequence that achieves the highest score for the state ("LOCATION", 8, 0) is to apply operator 2 followed by operator 4. Note that the state of the paraphrase changes after operator 2 is applied, as one word has been removed.

After all probabilities are learned from the training corpus of 2,224 data points, QASM can proceed to unseen questions. For lack of space, we omit a complete illustration of the decoding process. We will just take a look at all questions that fall in the same class as the one in the example. There are 18 such questions in our test corpus, and for 14 of them (77.8%), as predicted, the sequence of operators 2 and 4 is the one that achieves the highest score. In two other cases, this sequence is second best, and in the last two cases it is still within 20% of the performance of the best sequence. For the 18 questions, the performance over the baseline (sending the natural language question directly to the search engine) goes from 1.31 to 1.86 (an increase of 42.0%).

4. RELATED WORK
There has been a lot of effort in applying the notion of language modeling and its variations to other problems. For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. They argue that much of the difficulty for IR lies in the lack of an adequate indexing model. Instead of making prior parametric assumptions about the similarity of documents, they propose a non-parametric approach to retrieval based on probabilistic language modeling. Empirically, their approach significantly outperforms traditional tf*idf weighting on two different collections and query sets.

Berger and Lafferty [4] suggest a similar probabilistic approach to information retrieval based on the ideas and methods of statistical machine translation. The central ingredient in their approach is a noisy-channel model of how a user might "translate" a given document into a query. To assess the relevance of a document to a user's query, they estimate the probability that the
query would have been generated as a translation of the document, and factor in the user's general preferences in the form of a prior distribution over documents. They propose a simple, well-motivated model of the document-to-query translation process, and describe the EM algorithm for learning the parameters of this model in an unsupervised manner from a collection of documents.

Brown et al. [5] lay out the mathematical foundation of statistical machine translation, while Berger et al. [3] present an overview of Candide, a system that uses probabilistic methods to automatically translate French text into English.

Text summarization is another field where the language modeling approach seems to work for at least some problems. While most research in this area focuses on sentence extraction, one could argue that when humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given the large collections of text/abstract pairs now available online, it is possible to envision algorithms that are trained to mimic this process. Knight and Marcu [14] take the first step in this direction by addressing sentence compression. They devise both noisy-channel and decision-tree approaches to the problem. Similarly, [2] employ a probabilistic model to generate headlines of news articles.

Glover et al. [10] address the issue of query modification when searching for specific types of Web pages such as personal pages, conference calls for papers, and product announcements. They employ support vector machines to learn engine-specific words and phrases that can be added to a query to locate particular types of pages. Some of the words are quite unobvious; for example, adding "w" as a word to a query improves the performance of their system, Inquirus2, when it is set to search for personal pages.

Dempster, Laird, and Rubin [8] are the first ones to formalize the EM algorithm as an estimation technique for problems with incomplete data.
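To make the EM-flavored reweighting described in Section 2 concrete, here is a toy sketch of one plausible rerank/renormalize step. This is our illustration only, not the authors' exact update rule: the state encoding, fitness values, and multiplicative update are invented for exposition.

    # Toy sketch of QASM-style operator reweighting (assumed update rule):
    # operators whose paraphrases score higher fitness in a given question
    # state receive proportionally more probability mass over iterations.

    def reweight(probs, fitness):
        """probs: current operator probabilities for one state (sums to 1).
        fitness: TRDR-like score observed for each operator's paraphrase."""
        raw = [p * (f + 1e-9) for p, f in zip(probs, fitness)]  # evidence-weighted
        total = sum(raw)
        return [r / total for r in raw]

    # One state, four operators (e.g. IDENTITY, DELETE, REPLACE, DISJUNCT),
    # seeded uniformly as in the paper, then updated over a few iterations.
    probs = [0.25, 0.25, 0.25, 0.25]
    fitness = [0.40, 1.31, 0.72, 0.55]  # hypothetical fitness scores
    for _ in range(5):
        probs = reweight(probs, fitness)
    print([round(p, 4) for p in probs])  # mass concentrates on the fittest operator

This mirrors the qualitative behavior reported above: with fixed fitness scores, repeated renormalization drives the probability of the best operator toward 1 while the others tend to 0.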
TOEFL Practice Questions and Answers

Part I: Listening
1. Question: What is the main topic of the lecture?
   Answer: The main topic of the lecture is the impact of industrialization on the environment.
2. Question: According to the professor, what is the primary cause of air pollution?
   Answer: The primary cause of air pollution, according to the professor, is the burning of fossil fuels.
3. Question: What is the student's suggestion to reduce pollution?
   Answer: The student suggests using renewable energy sources to reduce pollution.

Part II: Reading
1. Question: What does the author argue about the role of technology in education?
   Answer: The author argues that technology has the potential to enhance learning experiences but also emphasizes the importance of its proper integration into the curriculum.
2. Question: What evidence does the author provide to support the benefits of technology in education?
   Answer: The author provides evidence such as increased student engagement, access to a wider range of resources, and the ability to personalize learning.
3. Question: What is the author's view on the challenges of integrating technology into education?
   Answer: The author believes that challenges include the need for teacher training, the digital divide, and the risk of distraction.

Part III: Speaking
1. Question: Describe a memorable event from your childhood.
   Answer: One memorable event from my childhood was my first visit to a zoo, where I was amazed by the variety of animals and learned about their habitats.
2. Question: Why do you think it is important to learn a second language?
   Answer: Learning a second language is important because it opens up opportunities for communication, broadens cultural understanding, and enhances cognitive abilities.
3. Question: What are some ways to improve your English speaking skills?
   Answer: Some ways to improve English speaking skills include practicing with native speakers, joining language exchange groups, and using language learning apps.

Part IV: Writing
1. Question: Do you agree or disagree with the following statement? University education should be free for all students.
   Answer: [Your response should be a well-organized essay that includes an introduction, body paragraphs with supporting arguments, and a conclusion.]
2. Question: Some people believe that the government should spend more on art and culture, while others think that this money should be used for other public services. Discuss both views and give your opinion.
   Answer: [Your response should be a well-organized essay that presents the arguments for both views, provides your own opinion, and includes a conclusion.]
3. Question: Describe a person who has had a significant influence on your life and explain why this person is important to you.
   Answer: [Your response should be a descriptive essay that outlines the person's characteristics, the impact they have had on you, and the reasons for their significance.]
Science & Technology Vision, 2012, No. 28 (cumulative No. 43)

0 Introduction
Knowledge representation is a way of describing knowledge—in effect, a set of conventions—and the search for new knowledge representation methods has long been one of the important topics in artificial intelligence research. Many knowledge representation methods already exist, for example predicate logic, semantic networks, production rules, frames, and conceptual dependency. These methods are adequate for describing problem solving in specific domains and have been widely applied. However, no one has ever claimed that these methods achieve the final goal, so knowledge representation has remained a central topic of AI research for a long time and still calls for considerably deeper study. The emergence of conceptual structure theory has brought a new line of thinking to knowledge representation research. Starting from this perspective, and building on the basic concepts and classification of knowledge in AI and on traditional knowledge representation methods, this paper focuses on the basic theory and techniques of conceptual graph knowledge representation, and uses examples to illustrate its advantages and its application in practical engineering.

1 Knowledge Representation
Knowledge representation is an important topic in AI research: whatever problem AI techniques are applied to, the first issue encountered is how to represent the various kinds of knowledge involved. The main purpose of studying knowledge representation is to provide users with a representation that facilitates logical inference, can fully express the knowledge of a domain, and supports efficient program design. A well-chosen representation makes problem solving easier and more efficient. A good knowledge representation method should have the following properties:
1.1 Expressive adequacy: it can correctly and effectively express the knowledge needed for problem solving.
1.2 Inferential effectiveness: it can be closely combined with efficient inference mechanisms and support the system's control strategy.
1.3 Operability and maintainability: it facilitates modularization, knowledge updates, and knowledge base maintenance.
1.4 Transparency: the representation is easy for humans to read and understand, which facilitates knowledge acquisition.
1.5 Accessibility: the knowledge can be readily accessed, and the accessed knowledge can be used effectively.

2 Traditional Knowledge Representation Methods
Measured against the properties described above, the traditional knowledge representation methods in common use today mainly include logic-based representation, rule-based production systems, semantic networks, frame representation, and script (scenario) representation.
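As a minimal illustration of two of these traditional methods side by side (our sketch; the facts and the rule are invented examples, not drawn from the article), a semantic network can be stored as relation triples and a production rule as a condition-action pair over those triples:

    # Tiny illustrative sketch of two traditional representations.
    # The facts and rule are invented examples for exposition only.

    # Semantic network as (node, relation, node) triples.
    semantic_net = [
        ("canary", "is-a", "bird"),
        ("bird", "has-part", "wings"),
        ("bird", "can", "fly"),
    ]

    # Production rule as a condition -> action over working memory.
    def rule_can_fly(memory):
        """IF x is-a bird THEN assert (x can fly)."""
        derived = set()
        for (a, rel, b) in memory:
            if rel == "is-a" and b == "bird":
                derived.add((a, "can", "fly"))
        return derived

    print(rule_can_fly(set(semantic_net)))  # {('canary', 'can', 'fly')}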
A KNOWLEDGE-BASED METHODOLOGY FOR TUNING ANALYTICAL MODELS

R.S. Freedman*
G.R. Stuzin**

ABSTRACT
Many computer-based analytical models for decision-making and forecasting have been developed in recent years, particularly in the areas of economics and finance. Analytic models have an important limitation which has restricted their use: a model cannot anticipate every factor that may be important in making a decision. Some analysts attempt to compensate for this limitation by making heuristic adjustments to the model in order to "tune" the results. Tuning produces a model forecast that is consistent with intuitive expectations, and maintains the detail and structure of the analytic model. This is a very difficult task unless the user has expert knowledge of the model and the task domain. This paper describes a new methodology, called knowledge-based tuning, that allows a human analyst and a knowledge-based system to collaborate in adjusting an analytic model. Such a methodology makes the model more acceptable to a decision-maker, and offers the potential of improving the decisions that either an analyst or a model can make alone. In knowledge-based tuning, subjective judgments about missing factors are specified by the analyst in terms of linguistic variables. These linguistic variables and knowledge of the model error history are used by the tuning system to infer a specific model adjustment. A logic programming system was developed that illustrates the tuning methodology for a macroeconometric forecasting model and empirically demonstrates how the predictability of the model can be improved.

*Department of Computer Science, Polytechnic University
**Department of Computer Science, St. John's University

To appear in IEEE Transactions on Systems, Man, and Cybernetics, Volume 21, Number 2, March 1991.

Biosketch of Authors
Roy S. Freedman received his BS and MS in Mathematics in 1975, his MS in Electrical Engineering in 1978, and his PhD in Mathematics in 1979, all from Polytechnic University. He joined the Hazeltine Corporation Research Laboratories in 1979, and was responsible for the software engineering of several well-known AI systems. In 1985 he returned to Polytechnic University as an Associate Professor. As a consultant, Dr. Freedman has done extensive work for a broad range of clients in the financial services and telecommunication industries, including the New York Stock Exchange, Equitable Life, MCI, Chemical Bank, Intelligent Technology Inc. (Tokyo), Grumman Aerospace, and Hazeltine Corporation. In 1989, he founded Inductive Solutions, Inc., a firm that produces tools for building embedded knowledge-based systems.

Dr. Freedman has published over thirty papers in artificial intelligence and software engineering as well as two books, Programming Concepts with the Ada Language and Programming with APSE Software Tools, both now published by McGraw-Hill. He has been an Associate Editor of the journal IEEE Expert for the past four years, and is a member of the IEEE Computer Society, AAAI, ACM, and SIAM.

Dr. Freedman can be reached at the Department of Electrical Engineering and Computer Science at Polytechnic University, 333 Jay Street, Brooklyn, NY 11201, or at Inductive Solutions, Inc., 380 Rector Place, Suite 4A, New York, New York 10280.

Gerald J. Stuzin is an Associate Professor of Computer Science at St. John's University. He obtained his Ph.D. from Polytechnic University in 1989, and an MBA in Economics from New York University in 1972.

Since 1969, Dr.
Stuzin has worked as an economic consultant at American Can Company, CBS, Inc., Union Carbide, and the International Forecast Group. He is presently a consultant at Business Planning Systems, an economics consulting firm. His current interests are in developing intelligent analytic tools.

Dr. Stuzin can be reached at the Department of Computer Science at St. John's University, Jamaica, New York 11471.

I. Introduction
Many analytic models for decision-making have been developed in recent years. For our purposes, a model is a dynamical system

    x(n) = A(n-1)x(n-1) + B(n-1)u(n-1)
    y(n) = C(n)x(n) + D(n)u(n)                                   (1)

where, at period n, x(n) is a vector in R^m of state variables, u(n) is a vector in R^k of control variables, and y(n) is a vector in R^p of observed model variables. Here, A is an m x m matrix, B is an m x k matrix, C is a p x m matrix, and D is a p x k matrix. These matrices are assumed to be known. Usually, they are estimated by statistical methods. The solution to (1) can be computed recursively given an initial state x(i), i = n-1, from

    y(n) = C(n)A(n-1)x(n-1) + [C(n)B(n-1)u(n-1) + D(n)u(n)]
         = C(n)A(n-1)x(n-1) + c(n)                               (2)

The expression c(n) in (2) that is independent of the model variables is called the constant term. The constant term is also usually estimated by statistical methods. Intuitively, this term relates the averaged effect of "excluded" model variables (called factors) to model variables. Formally, factors correspond to the effect of the control variables u(n) on the model variables. As the model evolves, factors may be formally introduced as model variables y(n) in a new model.

Model (1) approximates the behavior of a more complex "real-world" system

    x*(n) = F*(x*(n-1), u*(n-1), n-1)
    y*(n) = G*(x*(n), n)                                         (3)

where the functions F* and G* are not known. Given a metric d, the model error at time n is defined to be

    e(n) = d(y*(n), y(n))                                        (4)

Model quality and sensitivity analysis is performed by evaluating the convergence and ergodic properties of the error term. Sensitivity analysis is evaluated by determining the effects of changing the initial conditions of the state variables.

Tuning is a process that is concerned with creating a new model y_t(n+1) of y*(n+1) in terms of some function f of the historical errors e(n) and the predicted value of the old model:

    y_t(n+1) = y(n+1) + f(e(n))                                  (5)

Tuning can be considered to create a model adjustment of one or more components of the constant term c(n) in Equation (2). The function f depends on the subjective judgments of the model users and model experts, and on the metric d that is used to define the historical error. Because of variable interdependencies, this model adjustment process results in a new set of equations to be solved. For example, after an assessment that the value of y6 is "too high," an analyst may decide in a heuristic way to change the constant term for y1 from 0.445 to 0.449 and the constant term for y5 from -0.988 to -0.776. The justification for these changes can be based on the analyst's judgment regarding the effects of particular factors that are excluded as model variables. These adjusted values would then be propagated through the model and the new values would be assessed again.
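A compact numerical sketch of Equations (1) through (5) follows. This is our illustration, not the paper's calibration: the matrices describe a made-up 2-state, 1-control, 1-output system, and the choice of f as an average of recent errors follows the heuristic quoted below from Evans ("incorporating as a guideline the average residuals of the previous period").

    import numpy as np

    # Assumed time-invariant matrices for a toy instance of Equation (1).
    A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition (m x m)
    B = np.array([[0.5], [0.2]])             # control matrix (m x k)
    C = np.array([[1.0, 0.5]])               # observation matrix (p x m)
    D = np.array([[0.1]])                    # feed-through matrix (p x k)

    def step(x, u):
        """Equation (1): advance the state and observe the model output."""
        x_next = A @ x + B @ u
        y_next = C @ x_next + D @ u
        return x_next, y_next

    def tuned_forecast(y_model, errors):
        """Equation (5): y_t(n+1) = y(n+1) + f(e(n)), with f taken here
        to be the mean of the recent error history (an assumption)."""
        return y_model + np.mean(errors, axis=0)

    x = np.array([1.0, 0.5])
    u = np.array([1.0])
    x, y = step(x, u)
    errors = [np.array([0.12]), np.array([0.08])]   # hypothetical e(n) history
    print(tuned_forecast(y, errors))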
The tuning process can be iterated, and stops when the model values are consistent with the analyst's judgment and intuition.

The rationale for tuning an analytic model is that judgment may serve to compensate for the following unavoidable deficiencies in the model:
• inadequate theory due to missing model variables and relationships
• short-term disturbances
• data revisions

Another motive for tuning a model is to produce a forecast of y*(n) that is consistent with intuitive expectations, while maintaining the detail and at least some of the structure that an analytic model has to offer. As Pindyck and Rubinfeld [1] observe:

    There is a ... method that is often used to make minor adjustments in ... models, particularly those that are used for forecasting purposes. This method is called 'tuning' and consists of making small changes in some of the model's coefficients, ... so as to improve the ability of the model to forecast.

    [Tuning has] come to be used in large ... forecasting models, particularly those constructed for commercial or business applications (often they are adjusted to keep the forecast "in line" with intuitive forecasts - thus to some extent negating the predictions of the model). Needless to say, [these adjustments] can easily be misused (and often are).

Evans [2] comments on how common the practice is in econometrics:

    ... I am sure it is no secret that virtually everyone who uses an econometric model for forecasting does so only after he has adjusted the constant terms in some or even all of the stochastic equations. ... Adjust the constant terms, incorporating as a guideline the average residuals of the previous period, but using judgment to adjust the residuals further.

This is typical in many other applications of algorithmic models: even though some models often outperform human decision-makers, the model outputs are not generally well-accepted. Many analysts will usually reject the model's conclusions, especially if they are presented with a binary choice between the total acceptance of the model's decision and total rejection. Some reasons for this are:
• Even the best models may on occasion produce decisions much worse than a human analyst would, because some important factors have not been included.
• The models utilize uncertain theory as well as uncertain data.
• The analyst's risk preference in dealing with uncertain outcomes may differ from that of the model.
• The analyst's role is trivialized if decisions are solely provided by the model.
• Models provide precision at the expense of intuition and common sense.

What is needed is a methodology that allows a human decision maker and a knowledge-based system to collaborate in adjusting the model by explicitly producing model adjustments. Instead of explicitly changing constants and coefficients directly in the model, an analyst can have the following kind of interaction [3]:

Analyst: I think the value for consumption is too low.
Computer: I was told that consumption depends on the factors consumer debt and consumer confidence. The specific rules are:
    1. Consumer debt is extremely important to consumption.
    2. Consumer confidence is very important to consumption.
    Would you like to enter your assessment about these factors?
Analyst: In this forecast for consumption, I think consumer confidence has a strong, positive impact. I think consumer debt has a very strong, positive impact.
Computer: Are there any other factors that you consider significant in this forecast?
Analyst: Yes. I think another factor you should consider is the stock market.
The recent stock market crash will have a significant negative impact on consumption.
Computer: The percent change in the forecast for consumption based on a new forecast is -17%. Shall we continue?

The advantage of such an approach is that it provides documented decisions consistent with user intuition. Ad hoc tuning can be replaced by an integrated analytic model/knowledge-based system that can explain and justify its model adjustments. Such a methodology makes the model more acceptable to an analyst, and offers the potential of improving the decisions that either an analyst or a model can make alone. Moreover, such a system can be used to elicit new domain knowledge for model evaluation and improvement [4]. A data-flow diagram illustrating this process is shown in Figure 1.

Figure 1. Data Flow for the Tuning Process

We have developed a knowledge-based tuning methodology that integrates a mathematical model, a knowledge-based system and a human evaluator. The mathematical model provides computational power and the underlying theory. The knowledge base represents expert domain knowledge on managing and making subjective model adjustments to the model computations. The evaluator provides the domain knowledge to ensure that the final result has practical usefulness.

In our research, we have used the models associated with economic forecasting as a case study. Economic forecasting is one of the four example "analytic languages" that was discussed in the context of judgment and analytic knowledge elicitation in [4], and is also the analytic language with which we are most familiar (one of the authors was an economic consultant for twenty years). The specific model that we chose to illustrate our methodology is the macroeconometric model described in [1].

Requirements for the knowledge representations for tuning are described in Section II. The knowledge-based tuning methodology is described and demonstrated in Section III, where we also discuss the relationship between tuning and sensitivity analysis. In Section IV, we show an example that demonstrates how knowledge-based tuning can be used to help analysts improve the predictability of an econometric model.

II. Knowledge Representations for Tuning
A. Previous Approaches to Tuning
The desirability of developing techniques by which humans and computers collaborate in making decisions, rather than the decision being made by one or the other, has been recognized for some time. In 1961, Yntema and Torgerson [5] questioned how to combine the analytical speed of the computer with the "good sense" of the human user, without sacrificing too much of either. They proposed to let the machine make the decisions according to simple rules, but require the analyst to monitor the result and change the machine's answer if the analyst finds the results too foolish.

Computer models have become much more complex since 1961. However, it is still the case that all abstract models are only approximations and that optimization achieved with respect to the model is not the same as optimization with respect to the real world. This is particularly true in the case of forecasting. Ultimately, only a human can judge if the discrepancy between the real world and the model is large or small.

In 1964, Shepard [6] proposed an approach similar to that of Yntema and Torgerson. He noted the possibility of achieving subjective optimality by decomposing a decision process into a human effort and a computer effort.
The human would be responsible for a set of elementary comparisons with respect to the underlying subjective variables. The computer would deal with the algorithmic process of combining the judgments. Similar division-of-labor ideas were also proposed in 1968 in the context of a Probabilistic Information Processing System [7], where humans would be used to estimate likelihood ratios and the computer would be used to compute payoff matrices.

More recently, Zimmer [8] discussed the possibilities of man/computer collaboration in forecasting. He suggested eliciting expert rules for qualitative predictions and combining inferences based on these rules with the results of quantitative forecasts. Rules that resulted in subjective predictions would have to be formally described, and algorithms would have to be developed to translate qualitative judgments into analytic parameters. His conclusion was that a system that integrates qualitative and quantitative techniques in this way would also increase the acceptability of forecasts.

The possibility of combining analytic techniques with ideas from artificial intelligence leads to a new kind of intelligent analytic tool. Such tools are not so much intelligent assistants as they are collaborators.

From the perspective of knowledge-based systems, an intelligent analytic collaborator should relieve an analyst of routine computation and data handling. A collaborator should also explain its reasoning. In this regard, collaborators are similar to apprentices [9] and tutors [10] in that they compare user behavior to expert behavior and attempt to minimize the difference by negotiation. In our methodology, tuning heuristics determine what constitutes "minimum difference." The objective of the negotiating process is to influence the user's behavior, making it as rational as possible from the perspective of the domain expert knowledge.

A general program for an "artificial laboratory" of such tools was also proposed in [11], where the assistance provided to analysts was classified into three components: model developers and representers; model testers; and model refiners. Such tools may be used as collaborators in scientific discovery (model creation) as well as collaborators in model utilization. In this context, our approach to knowledge-based tuning provides a new example of an intelligent computational tool that assists in the refinement of the model.

Examples of intelligent computational tools that assist in the development and representation of models are discussed in [12]. These systems collaborate with analysts in understanding and displaying some qualitative characteristic (like stability or periodicity of the solutions of differential equations).

Examples of intelligent computational tools that assist in the testing and maintenance of the model have also been demonstrated. Expert system methods and spreadsheet-based algebraic models were integrated to provide (model verification) advice on sensitivity analysis for financial problems [13]. In another example, [14] shows how a rule-based system can maintain the correspondence between model knowledge and semantic constraints that are important to the problem, but are not represented in the model.

B. Tuning Analytic Models Used in Decision Support Systems
Decision support systems might be described either as man-machine problem-solving systems, or as interactive computer systems that assist a person in making decisions.
B. Tuning Analytic Models Used in Decision Support Systems

Decision support systems might be described either as man-machine problem-solving systems, or as interactive computer systems that assist a person in making decisions. These systems are most valuable in problems that are complex and quantitative enough to make computers useful, but that still require a considerable amount of human judgment. Typically, in these problems what constitutes an "optimum" solution is ultimately a subjective determination. The concept of "satisficing" is applicable here [15]: the decision-making exercise ends when the analyst is satisfied with the decision.

Several mathematical models are currently being used to advantage in decision support systems. These include linear and non-linear programming, game theory, decision analysis, utility theory, queuing theory and time series analysis. Typical applications of the models include inventory policy, production scheduling, facility location, capital allocation and forecasting. The discussion below focuses on issues relevant to the application of decision support systems that are based on tuned analytic models.

A typical mathematical model for decision-making accepts parameter values as inputs and computes outputs which constitute the "decision." Experience to date shows that such models can perform better in some domains than knowledge-based systems can. However, most model-based decision support systems lack the desirable features of a knowledge-based system, namely:

• the ability to accept linguistic input.
• the ability to add or delete chunks of knowledge.
• the ability to provide explanations and guidance on proceeding to the user's goal.

Tuning such a system can only be justified when users of the model have knowledge that bears on the decision but is not an input to the model and is not part of the model computations. It is this extra-model information which drives the development of the tuning system. The representation of tuning knowledge has three major steps:

1. The determination of what type of user knowledge must be represented.
2. The representation of methods for incorporating the user knowledge into the decision process in order to adjust the model computations.
3. The development of an interactive architecture for man-model collaboration.

The first step is an exercise in knowledge engineering. Expert decision-makers in the specific problem domain, particularly those with experience using the model, will be the best source of the type of knowledge the tuning system should accept as input. However, since novice users will have the greatest need for guidance in using the model, an understanding of how they work must also be reflected in the tuning system's knowledge base.

In the second step, an expert in the particular mathematical model must specify a method for modifying the model computations to reflect the information specified during the first step. This amounts to specifying the error metric d and the error evaluation function f (Equations 4 and 5 in Section I). We believe that the tuning heuristics specified in Section III can be applied to any model, as long as certain domain-specific parameters are also incorporated in the representation.

The design of an appropriate interactive system goes to the heart of the tuning process. Tuning a model is essentially a trial-and-evaluate activity. The model user does not know his "goal" in advance. This is to be expected: when a person must decide among a number of alternatives, each involving many factors, he cannot fully anticipate the consequences of his choices. However, when he sees the results of two such choices, he may very well be able to say which he prefers. The process is one of approximation with feedback correction. The decision-maker continues the process until he reaches a "satisfactory" outcome.
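The loop below sketches this trial-and-evaluate process. The quadratic error metric d and the pass-through evaluation function f stand in for Equations 4 and 5 of Section I, which are not reproduced here; the toy model, adjustment rule, and tolerance are all invented for illustration.

```python
# A minimal sketch of the trial-and-evaluate tuning loop. The quadratic d and
# the identity f below are placeholders for Equations 4 and 5 of Section I,
# not the paper's definitions; the toy model and adjustment rule are invented.
def d(predicted, actual):
    """Placeholder error metric: sum of squared deviations."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual))

def satisficing_loop(model, adjust, f, actual, tolerance, max_trials=50):
    adjustments = {}
    for _ in range(max_trials):
        error = d(model(adjustments), actual)
        if f(error) <= tolerance:                 # the analyst is "satisfied"
            break
        adjustments = adjust(adjustments, error)  # propose the next tune
    return adjustments

actual = [1.0, 1.2, 1.5]
model = lambda adj: [x + adj.get("shift", 0.0) for x in [0.8, 1.0, 1.3]]
adjust = lambda adj, err: {"shift": adj.get("shift", 0.0) + 0.05}
print(satisficing_loop(model, adjust, lambda e: e, actual, tolerance=0.01))
# converges near {'shift': 0.15}
```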
Probably the most difficult task in constructing a tuning system is the development of the knowledge for interacting with the user in order to ensure an adequate feedback loop. The system must be controlled by the margin of error (as perceived by the user) with reference to an external goal, but the goal may be changing. Figure 2 illustrates the structure of this feedback representation. The analyst acts as evaluator, where error is defined in Equation (4) of Section I. The model errors are obtained by comparing the model variable predictions with the actual variable measurements. The historical error performance of the model is used by the analyst to create model adjustments which, according to the analyst, reflect a "better" forecast. Tuning is thus viewed as a model refinement: if the model were perfect, there would be no need for tuning.

Usually, the probability distributions of the model forecast errors are assumed to be normal with zero mean. Since the errors are not biased, the errors themselves cannot be used to improve the forecast. Only "extra-model" information (such as that acquired through tuning) can be used to adjust for the errors. Consequently, tuning (as an example of model refinement) can be viewed as the first step in the development of new models and new model representations, based on the results of the old model, the historical model errors, and the new parameters derived from the tuning process.

Figure 2. Tuning as a Feedback System

In designing the feedback loop, it must be remembered that as the tuning proceeds, the analyst may change goals and either retract old information or add information in a non-monotonic manner. Consequently, the tuning system must be stable in the feedback sense [16] and converge to a satisfactory outcome. Since the model itself is usually not designed to be stable when tuned, the burden for maintaining stability and validity falls to the rules of the tuning system and the user. The factors that impact stability correspond to the different types of knowledge in the system. The tuning system will very likely be so complex that ensuring stability by purely analytic methods will not be possible. If desired, stability behavior can be demonstrated by simulation experiments. In fact, it will probably be necessary to tune the tuning knowledge base itself, particularly in regard to the domain-specific parameters for f, in light of simulation experiments.

C. Difficulties in Representing Subjective Tuning Knowledge

The design of a knowledge-based tuning system is predicated on the assumption that although the user's subjective input is indispensable, it is also quite fallible and should be used selectively. The assumption of the fallibility of human judgment in decision-making is based on numerous studies. Among the psychological tendencies which have been reported are:

Anchoring. This is the tendency not to stray from an initial judgment even when confronted with conflicting evidence. Experiments have shown [17] that the amount of probability revision made by subjects, as indicated by the difference between posterior and prior probabilities, is consistently smaller than would be prescribed by Bayes' theorem. In other words, the maximum information that could be derived from experience is greater than what is actually learned: subjects are reluctant to revise their opinion in light of experience. This may be related to what psychologists call "cognitive dissonance" [18], a theory explaining the tendency to come down excessively heavily on one side or the other when confronted with conflicting evidence.
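A small numeric illustration of this conservatism is sketched below; the prior and likelihoods are invented for illustration, not data from [17].

```python
# Conservatism, numerically: the revision Bayes' theorem prescribes versus a
# smaller, "anchored" revision. Prior and likelihoods are invented values.
def bayes_posterior(prior, lik_h, lik_not_h):
    """P(H|E) from the prior P(H) and the likelihoods P(E|H), P(E|~H)."""
    return prior * lik_h / (prior * lik_h + (1 - prior) * lik_not_h)

prior = 0.50
prescribed = bayes_posterior(prior, lik_h=0.8, lik_not_h=0.2)   # 0.80
anchored = 0.65   # a typical subject stops well short of the prescription
print(f"prescribed revision: {prescribed - prior:+.2f}, "
      f"observed revision: {anchored - prior:+.2f}")
# prescribed revision: +0.30, observed revision: +0.15
```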
Inconsistency. Given quantities A, B, and C, consistent behavior would require a subject to treat them as though they satisfied the following two properties:

1. Exclusivity of comparison: either A > B, or A < B, or A = B.
2. Transitivity of comparison: if A > B and B > C, then A > C.

However, violations of both properties have been observed. If a pair of alternatives is presented to a subject many times, successive presentations being well separated by other choices, a given subject does not necessarily choose the same alternative each time [19]; sometimes the subject claimed that A > B and at other times that B > A. Shanteau [20] described a classic experiment in which "experts" were asked to judge samples of produce; when judged a second time, the experts frequently made different assessments. Edwards [21] reviews experiments in which subjects violated the transitive property in making choices and suggests that such violations arise from the inability of people to focus on the dimension in question, i.e., they are distracted by some other dimension. (A small consistency check that a tuning system might run over recorded judgments is sketched after this list.)

Selectivity. This refers to using only a portion of the information available. Commonly, people use only those pieces of information that come readily to mind. People make poor decisions when they must take into account a number of attributes simultaneously: decision-makers may be aware of many different factors, but it is seldom more than one or two that they consider at any one time [6]. Similarly, it has been observed that expert judgments are based on little information [20]. One reason for this is that experts are often influenced by irrelevant information.

Fallacy. This refers to the improper use of probabilistic reasoning. Common errors include conservatism (the failure to revise prior probabilities sufficiently based on new information [17]) and poor calibration (a discrepancy between subjective probability and objective probability).

Representativeness. This refers to focusing on how closely a hypothesis matches the most recent information, to the exclusion of generally available information [22].

Other issues concerned with measuring the accuracy of knowledge are discussed in [23].
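As promised above, the following is a minimal sketch of such a consistency check. Representing judgments as (preferred, rejected) pairs is our own illustrative choice, not a prescription from [19]-[21].

```python
# A minimal sketch of a consistency check over a user's recorded pairwise
# judgments; the (preferred, rejected) pair representation is illustrative.
from itertools import permutations

def transitivity_violations(prefs):
    """Return orderings (a, b, c) with a > b and b > c, and yet c > a."""
    items = sorted({x for pair in prefs for x in pair})
    return [(a, b, c) for a, b, c in permutations(items, 3)
            if (a, b) in prefs and (b, c) in prefs and (c, a) in prefs]

judgments = {("A", "B"), ("B", "C"), ("C", "A")}   # an intransitive cycle
print(transitivity_violations(judgments))
# [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]  (one cycle, 3 rotations)
```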
D. Representing Expert Knowledge

Knowledge-based tuning utilizes domain-specific knowledge in a way that is somewhat different from the utilization of domain-specific knowledge in expert systems. The important differences between the tuning and expert system utilizations of domain-specific knowledge are:

Non-monotonicity. In an expert system it is assumed that the rules are correct, at least to some specified degree of probability or confidence, and that there is enough knowledge to produce a satisfactory solution to the problem. In a tuning system the rules are tentative and assumed to be incomplete. It is expected that the user will supplement and revise the knowledge.

Integration. Expert systems are typically "stand-alone" systems, solving problems that are important in their own right. A tuning system is embedded in a larger system which incorporates a mathematical model. It is the problem solved by the larger system which is of primary interest.

Autonomy vs. Collaboration. Expert systems are designed to perform a task ordinarily performed by a human expert. The objective is to emulate expert knowledge and inference procedures autonomously. Applied to decision-making situations, a classical expert system would obtain data from the system user, determine an optimum decision and explain its reasoning. A physician using a medical expert system, for example, enters information about a patient in response to questions and obtains a therapy recommendation and explanation.

Decision-makers do not wish to turn over control of a decision entirely to a computer. Just as a decision-maker is disinclined to surrender control of a decision to a mathematical model, he would not wish to surrender control to an expert system that tunes the model.