Improvement of Thériault Algorithm of Index Calculus for Jacobian of Hyperelliptic Curves of Small Genus
Koh-ichi Nagao, Dept. of Engineering, Kanto-Gakuin Univ. (nagao@kanto-gakuin.ac.jp)

May 20, 2004

Abstract

Gaudry presented a variation of the index calculus attack for solving the DLP in the Jacobian of hyperelliptic curves. Harley and Thériault improved this kind of algorithm. Here, we present a further variation of this kind of algorithm, which is faster than the previous ones.

Keywords: Index calculus attack, Jacobian, Hyperelliptic curve, DLP

1 Introduction

Gaudry [3] first presented a variation of the index calculus attack for hyperelliptic curves that could solve the DLP on the Jacobian of a hyperelliptic curve of small genus. Later, Harley (cf. [2]) and Thériault [1] improved this algorithm. In [1], these algorithms are shown to work in time $O(q^{2-\frac{2}{g+1}+\epsilon})$ and $O(q^{2-\frac{4}{2g+1}+\epsilon})$ respectively. Thériault's algorithm uses almost-smooth divisors $D=\sum D(P_i)$, in which all but one of the $P_i$'s lie in a set $B$ called the factor base. This technique was often used in the number field sieve factorization algorithm, which uses almost-smooth integers $n=\prod p_i$, in which all but one of the $p_i$'s lie in the factor base $B$, the set of small primes. In the factorization algorithm, the cost of factoring an integer is larger than the cost of primality testing. So the cost of factoring an almost-smooth integer is larger than that of a normal integer of the same size, and the number of $p_i \notin B$ must be one. However, in the index calculus for the Jacobian of curves, we first compute a point of the Jacobian and only afterwards check whether it is almost smooth or not. Hence a new algorithm that uses 2-almost smooth divisors, in which all but 2 of the $P_i$'s lie in the set $B$, is useful. For example, if an almost smooth divisor of the form $v_1 = (\text{terms of }B) + D(P_1)$ and 2-almost smooth divisors of the form $v_2 = (\text{terms of }B) + D(P_1) + D(P_2)$ and $v_3 = (\text{terms of }B) + D(P_2) + D(P_3)$ are given, then $v_1 - v_2 = (\text{terms of }B) - D(P_2)$ and $v_1 - v_2 + v_3 = (\text{terms of }B) + D(P_3)$ are further almost smooth divisors. So we can obtain many more almost smooth divisors by gathering 2-almost smooth divisors. From this improvement, we obtain an attack with a running time of $O(q^{2-\frac{2}{g}+\epsilon})$.
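The relation-combination trick above is easy to prototype. The following toy Python sketch is our own illustration, not part of the paper: divisors are stored as dictionaries from point labels to integer coefficients, the labels "B1", "B2", ... stand for factor-base points, "L1", "L2", ... stand for large primes, and the helper names `combine` and `large_primes` are invented for the example.

```python
# Toy illustration (not the paper's algorithm) of the relation-combination idea:
# subtracting a 2-almost-smooth relation from an almost-smooth one cancels a
# shared large prime and yields a new almost-smooth relation.

from collections import Counter

FACTOR_BASE = {"B1", "B2", "B3"}

def combine(u, v, sign=1):
    """Return the formal sum u + sign*v of two divisors given as dicts."""
    out = Counter(u)
    for p, c in v.items():
        out[p] += sign * c
    return {p: c for p, c in out.items() if c != 0}

def large_primes(divisor):
    """Points of the divisor that lie outside the factor base."""
    return {p for p in divisor if p not in FACTOR_BASE}

# v1 is almost smooth (one large prime L1); v2 and v3 are 2-almost smooth.
v1 = {"B1": 2, "B2": -1, "L1": 1}
v2 = {"B1": 1, "B3": 1, "L1": 1, "L2": 1}
v3 = {"B2": 3, "L2": 1, "L3": 1}

w1 = combine(v1, v2, sign=-1)        # cancels L1 -> almost smooth in L2
w2 = combine(w1, v3, sign=+1)        # cancels L2 -> almost smooth in L3
print(large_primes(w1))              # {'L2'}
print(large_primes(w2))              # {'L3'}
```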
2 Jacobian arithmetic

Let $C$ be a hyperelliptic curve of genus $g$ over $\mathbb{F}_q$ of the form $y^2 + h(x)y = f(x)$ with $\deg f = 2g+1$ and $\deg h \le g$. Write $J_q$ for $\mathrm{Jac}_C(\mathbb{F}_q)$. Further, we will assume that $|J_q|$ is an odd prime number, for simplicity.

Definition 1. Given $D_1, D_2 \in J_q$ such that $D_2 \in \langle D_1 \rangle$, the DLP for $(D_1,D_2)$ on $J_q$ is computing $\lambda$ such that $D_2 = \lambda D_1$.

For an element $P=(x,y)$ in $C(\overline{\mathbb{F}}_q)$, put $-P := (x, -h(x)-y)$.

Lemma 1. $C(\mathbb{F}_q)$ can be written as the union of the disjoint sets $\mathcal{P} \cup -\mathcal{P} \cup \{\infty\}$, where $-\mathcal{P} := \{-P \mid P \in \mathcal{P}\}$.

Proof. Since $|J_q|$ is an odd prime, we have $2 \nmid |J_q|$ and there is no point $P \in C(\mathbb{F}_q)$ such that $P = -P$.

Further, we will fix $\mathcal{P}$.

A point of $\mathrm{Jac}_C$ can be represented uniquely by a reduced divisor of the form
$$\sum_{i=1}^k n_i P_i - \Big(\sum_{i=1}^k n_i\Big)\infty, \qquad P_i \in C(\overline{\mathbb{F}}_q),\quad P_i \ne -P_j \text{ for } i \ne j,$$
with $n_i \ge 0$ and $\sum n_i \le g$.

Definition 2. If the reduced divisor of a point of the Jacobian $J_q$ is written in terms of elements of $C(\mathbb{F}_q)$, i.e.
$$\sum_{i=1}^k n_i P_i - \Big(\sum_{i=1}^k n_i\Big)\infty, \qquad P_i \in C(\mathbb{F}_q),$$
then the point is said to be a potentially smooth point.

Let $D(P) := P - \infty$. Note that $P + (-P) \sim 2\infty$. From Lemma 1, a potentially smooth point $v$ of $J_q$ can be represented in the form
$$\sum_{P \in \mathcal{P}} n^{(v)}_P D(P)$$
with $n^{(v)}_P \in \mathbb{Z}$ and $\sum_{P \in \mathcal{P}} |n^{(v)}_P| \le g$. Further, we will use this representation for potentially smooth points.

Definition 3. A subset $B$ of $\mathcal{P}$ used to define smoothness is called the factor base.

Definition 4. A point $P \in \mathcal{P}\setminus B$ is called a large prime.

Definition 5. A divisor $v$ of the form $\sum_{P \in B} n^{(v)}_P D(P)$ is called a smooth divisor.

Definition 6. A divisor $v$ of the form $\sum_{P \in B} n^{(v)}_P D(P) + n^{(v)}_{P'} D(P')$, where $P'$ is a large prime, is called a 1-almost smooth divisor, or simply an almost smooth divisor.

Definition 7. A divisor $v$ of the form $\sum_{P \in B} n^{(v)}_P D(P) + n^{(v)}_{P'} D(P') + n^{(v)}_{P''} D(P'')$, where $P', P''$ are large primes, is called a 2-almost smooth divisor.

Definition 8. An element $J \in J_q$ is called a smooth (resp. almost smooth, resp. 2-almost smooth) point if the reduced divisor representing $J$ is a smooth (resp. almost smooth, resp. 2-almost smooth) divisor.

Further, we will consider the coefficients $n_P$ of a smooth (resp. almost smooth, resp. 2-almost smooth) divisor modulo $|J_q|$. For a smooth (resp. almost smooth, resp. 2-almost smooth) divisor $v$, put
$$l(v) := \#\{P \in B \mid n^{(v)}_P \ne 0\}.$$

Lemma 2. Let $v_1, v_2$ be smooth (resp. almost smooth, resp. 2-almost smooth) divisors and let $r_1, r_2$ be integers modulo $|J_q|$. Then the cost of computing $r_1 v_1 + r_2 v_2$ is $O(g^2(\log q)^2(l(v_1)+l(v_2)))$.

Proof. It requires $l(v_1)+l(v_2)$ products and additions modulo $|J_q|$. Note that $|J_q| \doteq q^g$. Since the cost of one elementary operation modulo $|J_q|$ is $(\log|J_q|)^2 = (g\log q)^2$, we obtain this estimate.

3 Outline of algorithm

In this section, we present the outline of the proposed algorithm. Let $k$ be a real number satisfying $0 < k < 1/2g$. Throughout this paper, we will use $k$ as a parameter of the algorithm. Put
$$r := r(k) = \frac{g-1+k}{g}.$$
We will fix a factor base $B$ with $|B| = q^r$.

Lemma 3. $2r > 1+k > 1 > \dfrac{1+r}{2} = \dfrac{2g+k-1}{2g} > \dfrac{(g-1)+(g+1)k}{g}$.

Proof. Trivial.

The whole algorithm consists of the following 7 parts.

Input: a hyperelliptic curve $C/\mathbb{F}_q$ of small genus $g$, and $D_1, D_2 \in J_q$ such that $D_2 \in \langle D_1 \rangle$.
Output: an integer $\lambda$ modulo $|J_q|$ such that $D_2 = \lambda D_1$.

1. Computing all points of $C(\mathbb{F}_q)$, making $\mathcal{P}$, and fixing $B \subset \mathcal{P}$ with $|B| = q^r$.
2. Gathering 2-almost smooth divisors and almost smooth divisors: computing a set $V_2$ of 2-almost smooth points and a set $V_1$ of almost smooth points of $J_q$, of the form $\alpha D_1 + \beta D_2$, with $|V_1| > 2q^{\frac{(g-1)+(g+1)k}{g}}$ and $|V_2| > q^{1+k}$.
3. Computing a set of almost smooth divisors $H_m$ with $|H_m| > q^{(1+r)/2}$.
4. Computing a set of smooth divisors $H$ with $|H| > q^r$.
5. Solving a linear algebra problem of size $q^r \times q^r$: computing integers $\{r_h\}_{h \in H}$ modulo $|J_q|$ satisfying $\sum_{h \in H} r_h h \equiv 0 \bmod |J_q|$.
6. Computing integers $\{s_v\}_{v \in V_1 \cup V_2}$ modulo $|J_q|$ satisfying $\sum_{v \in V_1 \cup V_2} s_v v \equiv 0 \bmod |J_q|$.
7. Computing $\lambda$.
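To get a concrete feel for these exponents, here is a small numerical sanity check in Python; it is our own illustration, and the values of g, q and k below are arbitrary sample choices, not data from the paper. It prints the exponents governing $|B|$, $|V_1|$, $|V_2|$, $|H_m|$ and the gathering cost, and verifies the inequality chain of Lemma 3 for the chosen parameters.

```python
# Numeric sanity check of the parameter choices in the outline above.

def exponents(g, k):
    r = (g - 1 + k) / g                      # r(k) = (g-1+k)/g, so |B| = q^r
    return {
        "2r (gathering cost exponent)": 2 * r,
        "1+k (|V2| exponent)": 1 + k,
        "(1+r)/2 (|H_m| exponent)": (1 + r) / 2,
        "((g-1)+(g+1)k)/g (|V1| exponent)": ((g - 1) + (g + 1) * k) / g,
        "r (|B|, |H| exponent)": r,
    }

g, q_bits, k = 4, 20, 1 / (4 * g)            # example values; k < 1/(2g)
for name, e in exponents(g, k).items():
    print(f"{name:40s} = {e:.4f}   q^e ~ 2^{e * q_bits:.1f}")

# Lemma 3 claims 2r > 1+k > 1 > (1+r)/2 > ((g-1)+(g+1)k)/g; check it numerically.
r = (g - 1 + k) / g
chain = [2 * r, 1 + k, 1.0, (1 + r) / 2, ((g - 1) + (g + 1) * k) / g]
assert all(a > b for a, b in zip(chain, chain[1:])), chain
print("Lemma 3 exponent chain holds for these (g, k).")
```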
4 Gathering 2-almost smooth points and almost smooth points

Algorithm 1: Gathering the 2-almost smooth points and almost smooth points
Input: a curve $C/\mathbb{F}_q$ of genus $g$, and $D_1, D_2 \in \mathrm{Jac}_C(\mathbb{F}_q)$
Output: $V_1$ a set of almost smooth divisors and $V_2$ a set of 2-almost smooth divisors such that $|V_2| > q^{1+k}$ and $|V_1| > 2q^{\frac{(g-1)+(g+1)k}{g}}$, together with integers $\{(\alpha_v,\beta_v)\}_{v \in V_1 \cup V_2}$ such that $v = \alpha_v D_1 + \beta_v D_2$
1: $V_1 \leftarrow \{\}$, $V_2 \leftarrow \{\}$
2: repeat
3:   Let $\alpha, \beta$ be random numbers modulo $|J_q|$
4:   Compute $v = \alpha D_1 + \beta D_2$
5:   if $v$ is almost smooth then
6:     $V_1 \leftarrow V_1 \cup \{v\}$
7:     $(\alpha_v,\beta_v) \leftarrow (\alpha,\beta)$
8:   end if
9:   if $v$ is 2-almost smooth then
10:    $V_2 \leftarrow V_2 \cup \{v\}$
11:    $(\alpha_v,\beta_v) \leftarrow (\alpha,\beta)$
12:  end if
13: until $|V_2| > q^{1+k}$ and $|V_1| > 2q^{\frac{(g-1)+(g+1)k}{g}}$
14: return $V_1$, $V_2$, $\{(\alpha_v,\beta_v)\}_{v \in V_1 \cup V_2}$

Lemma 4. The probability that a point in $J_q$ is almost smooth is
$$\frac{1}{(g-1)!}\, q^{(-1+r)(g-1)},$$
and the probability that a point is 2-almost smooth is
$$\frac{1}{2(g-2)!}\, q^{(-1+r)(g-2)}.$$

Proof. We obtain this lemma in the same way as Propositions 3, 4 and 5 in [1]. For example, the probability of 2-almost smooth points is roughly estimated by
$$\frac{(2|B|)^{g-2}(2|\mathcal{P}\setminus B|)^2}{2!\,(g-2)!} \div |J_q| \doteq \frac{(q^r)^{g-2}\, q^2}{2!\,(g-2)!\, q^g} = \frac{1}{2(g-2)!}\, q^{(-1+r)(g-2)}.$$

From this lemma, the number of loop iterations needed so that $|V_2| > q^{1+k}$ is estimated by
$$q^{1+k} \cdot 2(g-2)!\, q^{(1-r)(g-2)} = 2(g-2)!\, q^{2r},$$
and the number of loop iterations needed so that $|V_1| > 2q^{\frac{(g-1)+(g+1)k}{g}}$ is estimated by
$$2q^{\frac{(g-1)+(g+1)k}{g}} \cdot (g-1)!\, q^{(1-r)(g-1)} = 2(g-1)!\, q^{2r}.$$

Since the cost of computing the Jacobian element $v = \alpha D_1 + \beta D_2$ is $O(g^2(\log q)^2)$ and the cost of judging whether $v$ is potentially smooth or not is $O(g^2(\log q)^3)$, the total cost of this part is estimated by
$$O(g^2 (g-1)! (\log q)^3\, q^{2r}).$$

Here, we estimate the required storage. Note that the bit-length of one relatively smooth point is $2g\log q$. So the storage for $V_1$, the set of almost smooth divisors, is $O(g\log q\, q^{\frac{(g-1)+(g+1)k}{g}})$ and the storage for $V_2$, the set of 2-almost smooth divisors, is $O(g\log q\, q^{1+k})$. From Lemma 3, we have $g\log q\, q^{1+k} \gg g\log q\, q^{\frac{(g-1)+(g+1)k}{g}}$. So the total required storage can be estimated by
$$O(g\log q\, q^{1+k}).$$
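A schematic Python sketch of this gathering loop is given below. It is only an illustration: the function `reduced_support` is a random stand-in for the support of the reduced divisor of $\alpha D_1 + \beta D_2$ (in a real attack this would come from Cantor's algorithm on $\mathrm{Jac}_C(\mathbb{F}_q)$), and the point labels, set sizes and thresholds are arbitrary toy values rather than the paper's bounds.

```python
# Sketch of Algorithm 1 with the Jacobian arithmetic replaced by a random stand-in.

import random

random.seed(1)
g = 4
POINTS = [f"P{i}" for i in range(1000)]          # stand-in for the point set P
B = set(POINTS[:300])                            # factor base; |B| = q^r in the paper

def reduced_support(a, b):
    """Placeholder for the support of the reduced divisor of a*D1 + b*D2."""
    return random.sample(POINTS, g)

def classify(support):
    """0 = smooth, 1 = almost smooth, 2 = 2-almost smooth, else None."""
    outside = sum(1 for P in support if P not in B)
    return outside if outside <= 2 else None

V1, V2 = [], []
while len(V2) < 50 or len(V1) < 10:              # toy thresholds
    a, b = random.randrange(1, 10**6), random.randrange(1, 10**6)
    sup = reduced_support(a, b)
    kind = classify(sup)
    if kind == 1:
        V1.append(((a, b), sup))
    elif kind == 2:
        V2.append(((a, b), sup))

print(len(V1), "almost-smooth and", len(V2), "2-almost-smooth relations collected")
```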
We now estimate the size of $E\heartsuit F$.

Lemma 6. The size of $E\heartsuit F$ is estimated by
$$|E\heartsuit F|=\begin{cases} |E||F|/q & \text{if } F \text{ is a set of 2-almost smooth divisors},\\ \tfrac{1}{2}|E||F|/q & \text{if } F \text{ is a set of almost smooth divisors}.\end{cases}$$

Proof. Let $e\in E$, $f\in F$ be randomly chosen elements and put $\{P\}:=\mathrm{sup}(e)$. If $F$ is a set of 2-almost smooth divisors (resp. almost smooth divisors), the probability that $P\in \mathrm{sup}(f)$ is $2/|\mathcal{P}\setminus B|\doteq 1/q$ (resp. $1/|\mathcal{P}\setminus B|\doteq 1/(2q)$), and the size is estimated by $\tfrac{1}{q}\times|E||F|$ (resp. $\tfrac{1}{2q}\times|E||F|$). $\square$

In order to compute $E\heartsuit F$, we use the following algorithm.

Algorithm 2 Heartsuit operator
Input: $E$, $F$
Output: $E\heartsuit F$
1: Set $\mathcal{P}\setminus B=\{R_1,R_2,\ldots,R_{|\mathcal{P}\setminus B|}\}$
2: for $i=1,2,\ldots,|\mathcal{P}\setminus B|$ do
3:   $st[i]\leftarrow\{\}$
4: end for
5: for all $e\in E$ do
6:   Let $P$ be such that $\mathrm{sup}(e)=\{P\}$
7:   Compute $i$ s.t. $P=R_i$
8:   $st[i]\leftarrow st[i]\cup\{e\}$
9: end for
10: $V\leftarrow\{\}$
11: for all $f\in F$ do
12:   for all $P\in \mathrm{sup}(f)$ do
13:     Compute $i$ s.t. $P=R_i$
14:     if $st[i]\neq\emptyset$ then
15:       for all $e\in st[i]$ s.t. $e\neq \mathrm{Const}\times f$ do
16:         $V\leftarrow V\cup\{\varphi(e,f,P)\}$
17:       end for
18:     end if
19:   end for
20: end for
21: return $V$

We now estimate the cost and the storage for computing $E\heartsuit F$.

Lemma 7. Put $c_1:=\max\{l(e)\mid e\in E\}$ and $c_2:=\max\{l(f)\mid f\in F\}$, and assume that $|E|\ll q$. Then the cost of computing $E\heartsuit F$ is
$$O\big(c_1(\log q)^2|E|\big)+O\big((\log q)^2|F|\big)+O\big((c_1+c_2)(\log q)^2|E||F|/q\big),$$
and the required storage is
$$O\big(c_1\log q\,|E|\big)+O\big((c_1+c_2)\log q\,|E||F|/q\big).$$

Proof. The required storage for the $st[i]$ is $O(c_1\log q\,|E|)$ and the required storage for $V$ is $O((c_1+c_2)\log q\,|E||F|/q)$, since $|V|\doteq|E||F|/q$ and $\max\{l(v)\mid v\in V\}\le c_1+c_2$ by Lemma 2. Note that the cost of the routine "compute $i$ s.t. $P=R_i$" is $\log q\,\log|\mathcal{P}\setminus B|=O((\log q)^2)$. Also note that $|E\heartsuit F|=O(|E||F|/q)$, and remark that the probability that $st[i]\neq\emptyset$ is very small, since $|E|\ll q$. Thus the cost of the first loop is $O(c_1(\log q)^2|E|)$, the cost of the "compute $i$" part of the second loop is $O((\log q)^2|F|)$, and the cost of computing the elements of $V$ in the second loop is $O((c_1+c_2)(\log q)^2|E||F|/q)$ by Lemma 2. $\square$

6 Computing $H_m$

In this section we construct $H_m$, a set of almost smooth divisors with $|H_m|>2q^{(1+r)/2}$.

Algorithm 3 Computing $H_m$
Input: $V_1$ a set of almost smooth divisors s.t. $|V_1|>2q^{\frac{(g-1)+(g+1)k}{g}}$; $V_2$ a set of 2-almost smooth divisors s.t. $|V_2|>q^{1+k}$
Output: An integer $m>0$ and $H_m$ a set of almost smooth divisors s.t. $|H_m|>2q^{(1+r)/2}$
1: $H_1\leftarrow V_1$
2: $i\leftarrow 1$
3: repeat
4:   $i\leftarrow i+1$
5:   $H_i\leftarrow H_{i-1}\heartsuit V_2$
6: until $|H_i|>2q^{(1+r)/2}$
7: $m\leftarrow i$
8: return $m$, $H_m$

From Lemma 6, the size of $H_i$ is estimated by
$$|H_i|=|H_1|\times(q^{k})^{i-1}=2q^{\frac{(g-1)+(gi+1)k}{g}}.$$
So, solving the equation
$$\frac{(g-1)+(gi+1)k}{g}=\frac{1+r(k)}{2}$$
for $i$, we obtain the following.

Lemma 8. $m$ is estimated by $\dfrac{1-k}{2gk}$.

Further, we will assume $m=O\big(\tfrac{1}{gk}\big)$. Note that $\max\{l(v)\mid v\in\bigcup_{i\le m}H_i\}\le mg$. From Lemma 7, the cost of computing $H_m$ is
$$m\times\Big(O\big((\log q)^2 q^{1+k}\big)+O\big(mg(\log q)^2 q^{(1+r)/2}\big)\Big),$$
and the required storage is
$$O\big(mg\log q\; q^{(1+r)/2}\big).$$
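As an illustration, here is a hypothetical Python sketch of Algorithms 2 and 3 in the same sparse-dictionary setting; it reuses `support`, `phi` and `is_constant_multiple` from the previous sketch, and it omits the bookkeeping that records how each new divisor decomposes over $V_1\cup V_2$ (which the real algorithm needs later). The names `heart_set`, `compute_Hm` and the parameter `target_size` (playing the role of $2q^{(1+r)/2}$) are illustrative assumptions.

```python
from collections import defaultdict

# Assumes support(), phi() and is_constant_multiple() from the previous sketch.

def heart_set(E, F, B, order):
    """E ♥ F along the lines of Algorithm 2: bucket E by its unique large prime,
    then match every large prime of each f in F against the buckets."""
    buckets = defaultdict(list)              # plays the role of st[i]
    for e in E:
        (P,) = support(e, B)                 # e is almost smooth: exactly one large prime
        buckets[P].append(e)
    V = []
    for f in F:
        for P in support(f, B):
            for e in buckets.get(P, []):
                if not is_constant_multiple(e, f, order):
                    V.append(phi(e, f, P, order))
    return V

def compute_Hm(V1, V2, B, order, target_size):
    """Algorithm 3: H_1 = V_1 and H_i = H_{i-1} ♥ V_2 until |H_i| > target_size."""
    H, m = list(V1), 1
    while len(H) <= target_size:
        H = heart_set(H, V2, B, order)
        m += 1
    return m, H
```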
7 Computing $H$

In this section we compute $H$, a set of smooth divisors with $|H|>q^{r}$.

Algorithm 4 Computing $H$
Input: $H_m$ a set of almost smooth divisors s.t. $|H_m|>2q^{(1+r)/2}$
Output: $H$ a set of smooth divisors s.t. $|H|>q^{r}$
1: $H\leftarrow H_m\heartsuit H_m$
2: return $H$

From Lemma 6, the size of $H$ is estimated by
$$|H|=|H_m|^2/(2q)=2q^{r}.$$
Note that $\max\{l(v)\mid v\in H\}\le 2mg$. From Lemma 7, the cost of computing $H$ is
$$O\big((\log q)^2 q^{(1+r)/2}\big)+O\big(mg(\log q)^2 q^{r}\big),$$
and the required storage is
$$O\big(mg\log q\; q^{(1+r)/2}\big).$$

8 Two representations of $h\in H$

An element $h\in H$ is written in the form
$$h=\sum_{P\in B} a^{(h)}_P D(P),$$
since it is a smooth divisor. Moreover, from its construction we easily see that
$$l(h)=\#\{P\in B\mid a^{(h)}_P\neq 0\}\le 2mg.$$
Set $B=\{R_1,R_2,\ldots,R_{|B|}\}$.

Definition 10. Put $\mathrm{vec}(h):=(a^{(h)}_{R_1},a^{(h)}_{R_2},\ldots,a^{(h)}_{R_{|B|}})$. In computations, $h$ (that is, $\mathrm{vec}(h)$) is stored as the set of pairs $\{(a^{(h)}_{R_i},R_i)\}$ over the non-zero $a^{(h)}_{R_i}$. Note that the required storage for one $h$ is $O(mg\log q)$.

On the other hand, from its construction, $h$ is written as a linear sum of at most $2m$ elements of $V_1\cup V_2$, i.e.,
$$h=\sum_{v\in V_1\cup V_2} b^{(h)}_v\, v,\qquad \#\{v\mid b^{(h)}_v\neq 0\}\le 2m.$$

Definition 11. Put $v(h):=\{(b^{(h)}_v,v)\mid b^{(h)}_v\neq 0\}$. Note that the required storage for one $v(h)$ is $O(m\log q)$.

Important remark. By slightly modifying Algorithms 3 and 4, we can obtain both representations $\mathrm{vec}(h)$ and $v(h)$ of $h$. (The order of the cost and the order of the storage for computing $H$ remain essentially the same.) Further, we will assume that the computations of $\mathrm{vec}(h)$ and $v(h)$ have been carried out.

9 Linear algebra

In this section we solve the linear algebra problem, i.e., we find a linear relation among the elements of $H$.

Algorithm 5 Linear algebra
Input: $H$ a set of smooth divisors such that $|H|>q^{r}$
Output: Integers $\{\gamma_h\}_{h\in H}$ modulo $|J_q|$ s.t. $\sum_{h\in H}\gamma_h h\equiv 0 \bmod |J_q|$
1: Set $H=\{h_1,h_2,\ldots,h_{|H|}\}$
2: Set the matrix $M=\big({}^{t}\mathrm{vec}(h_1),{}^{t}\mathrm{vec}(h_2),\ldots,{}^{t}\mathrm{vec}(h_{|H|})\big)$
3: Solve the linear algebra problem for $M$, i.e., compute $(\gamma_1,\gamma_2,\ldots,\gamma_{|H|})$ such that $\sum_{i=1}^{|H|}\gamma_i\,\mathrm{vec}(h_i)=\vec{0}$
4: return $\{\gamma_i\}$

Note that the entries of the matrix are integers modulo $|J_q|\doteq q^{g}$, so the cost of an elementary operation modulo $|J_q|$ is $O(g^2(\log q)^2)$. $M$ is a sparse matrix of size $q^{r}\times q^{r}$, and the number of non-zero entries in one column is at most $2mg$. So, using [4][5], we can compute $\{\gamma_i\}$. Its cost is
$$O\big(g^2(\log q)^2\cdot 2mg\cdot q^{r}\cdot q^{r}\big)=O\big(mg^3(\log q)^2 q^{2r}\big),$$
and the required storage is
$$O\big(\log(q^{g})\,mg\cdot q^{r}\big)=O\big(mg^2\log q\; q^{r}\big).$$
(The required storage for sparse linear algebra is essentially the storage for the non-zero data: the bit-length of an integer modulo $|J_q|$ is $\log(q^{g})$ and the number of non-zero entries in one row is $O(mg)$.)

10 Computing $s_v$

Remember that each element $h\in H$ is of the form $h=\sum_{v\in V_1\cup V_2} b^{(h)}_v v$. In the previous section, we found $\{\gamma_h\}$ such that $\sum_{h\in H}\gamma_h h\equiv 0 \bmod |J_q|$. So, putting
$$s_v:=\sum_{h\in H}\gamma_h b^{(h)}_v \bmod |J_q|\qquad\text{for all } v\in V_1\cup V_2,$$
we have
$$\sum_{v\in V_1\cup V_2} s_v v\equiv 0 \bmod |J_q|.$$

Algorithm 6 Computing $s_v$
Input: $V_1$, $V_2$, $H$, $\{\gamma_h\}_{h\in H}$ s.t. $\sum_{h\in H}\gamma_h h\equiv 0$
Output: $\{s_v\}_{v\in V_1\cup V_2}$
1: for all $v\in V_1\cup V_2$ do
2:   $s_v\leftarrow 0$
3: end for
4: for all $h\in H$ do
5:   for all $v\in V_1\cup V_2$ s.t. $b^{(h)}_v\neq 0$ do
6:     $s_v\leftarrow s_v+\gamma_h b^{(h)}_v$
7:   end for
8: end for
9: return $\{s_v\}$
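A hypothetical Python sketch of Algorithm 6, assuming each $h$ is available through its representation $v(h)$ of Definition 11, encoded as a dictionary from an identifier of $v\in V_1\cup V_2$ to the coefficient $b^{(h)}_v$. The names `compute_sv`, `v_of_h` and `gamma`, and the identifiers used as dictionary keys, are illustrative assumptions.

```python
def compute_sv(v_of_h, gamma, order):
    """Algorithm 6: s_v = sum over h of gamma_h * b_v^(h), reduced mod order = |J_q|.

    v_of_h : dict  h_id -> {v_id: b_v^(h)}   (the sparse v(h) of Definition 11)
    gamma  : dict  h_id -> gamma_h
    """
    s = {}
    for h_id, b in v_of_h.items():
        g_h = gamma[h_id] % order
        for v_id, coeff in b.items():
            s[v_id] = (s.get(v_id, 0) + g_h * coeff) % order
    return s
```

The discrete logarithm is then obtained from $\{s_v\}$ by the single modular division described in the next section.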
The cost of this part is
$$O\big(g\log q\; q^{1+k}\big)+O\big(mg^2(\log q)^2 q^{(1+r)/2}\big),$$
and the storage is
$$O\big(g\log q\; q^{1+k}\big).$$

11 Finding the discrete log

In the previous section we found $\{s_v\}$ such that $\sum_v s_v v\equiv 0 \bmod |J_q|$. In part 2 of the algorithm we computed $(\alpha_v,\beta_v)$ such that $v=\alpha_v D_1+\beta_v D_2$. So we have
$$\sum_{v\in V_1\cup V_2} s_v(\alpha_v D_1+\beta_v D_2)=\Big(\sum_{v\in V_1\cup V_2} s_v\alpha_v\Big)D_1+\Big(\sum_{v\in V_1\cup V_2} s_v\beta_v\Big)D_2\equiv 0 \bmod |J_q|.$$
Hence
$$-\Big(\sum_{v\in V_1\cup V_2} s_v\alpha_v\Big)\Big/\Big(\sum_{v\in V_1\cup V_2} s_v\beta_v\Big) \bmod |J_q|$$
is the required discrete log.

Algorithm 7 Computing $\lambda$
Input: $V_1$, $V_2$, $\{\alpha_v,\beta_v\}$, $\{s_v\}$
Output: Integer $\lambda \bmod |J_q|$ s.t. $D_2=\lambda D_1$
1: return $-\big(\sum_{v\in V_1\cup V_2} s_v\alpha_v\big)/\big(\sum_{v\in V_1\cup V_2} s_v\beta_v\big) \bmod |J_q|$

Note that the cost of this part is $O(g^2(\log q)^2 q^{1+k})$.

12 Cost estimation

In this section we estimate the cost and the required storage of the whole algorithm under the assumption
$$k=\frac{1}{\log q}.$$
First, remember that $m=O\big(\tfrac{1}{gk}\big)=O\big(\tfrac{\log q}{g}\big)$. By a direct computation we have
$$r=r(k)=\frac{g-1+k}{g}=1-\frac{1}{g}+\frac{1}{g\log q},$$
and
$$q^{2r}=q^{2-\frac{2}{g}}\times\exp(2)=O\big(q^{2-\frac{2}{g}}\big).$$
From our cost estimates, the cost of every part except parts 2 and 5 is of the form
$$O\big(g^{a}(\log q)^{b} q^{c}\big),\qquad a,b\le 4,\; c\le 1+k.$$
On the other hand, the costs of parts 2 and 5 are
$$O\big(g^2(g-1)!(\log q)^3 q^{2r}\big)\quad\text{and}\quad O\big(mg^3(\log q)^2 q^{2r}\big).$$
From Lemma 3 we have $1+k<2r$, so the cost of the whole algorithm can be estimated by
$$O\big(g^2(g-1)!(\log q)^3 q^{2r}\big)=O\big(g^2(g-1)!(\log q)^3 q^{2-\frac{2}{g}}\big).$$
Similarly, the required storage (the dominant parts are parts 2 and 7, since $1+k>1>(1+r)/2$ by Lemma 3) is
$$O\big(g\log q\; q^{1+k}\big)=O\big(g\log q\; q\cdot\exp(1)\big)=O\big(g\log q\; q\big).$$

13 Conclusion

In ASIACRYPT 2003, Thériault presented a variant of index calculus for the Jacobian of hyperelliptic curves of small genus, using almost smooth divisors. Here we improve Thériault's result by also using 2-almost smooth divisors, and propose an attack on the DLP in the Jacobian of hyperelliptic curves of small genus which runs in time $O(q^{2-\frac{2}{g}+\varepsilon})$.

References

[1] N. Thériault, Index calculus attack for hyperelliptic curves of small genus, ASIACRYPT 2003, LNCS 2894, Springer-Verlag, 2003, pp. 75–92.
[2] A. Enge, P. Gaudry, A general framework for subexponential discrete logarithm algorithms, Acta Arith. 102 (2002), no. 1, pp. 83–103.
[3] P. Gaudry, An algorithm for solving the discrete log problem on hyperelliptic curves, EUROCRYPT 2000, LNCS 1807, Springer-Verlag, 2000, pp. 19–34.
[4] B. A. LaMacchia, A. M. Odlyzko, Solving large sparse linear systems over finite fields, CRYPTO '90, LNCS 537, Springer-Verlag, 1990, pp. 109–133.
[5] D. H. Wiedemann, Solving sparse linear equations over finite fields, IEEE Trans. Inform. Theory IT-32 (1986), no. 1, pp. 54–62.