Journal of Hunan University (Natural Sciences), Vol. 48, No. 6, June 2021
Article ID: 1674-2974(2021)06-0058-09    DOI: 10.16339/ki.hdxbzkb.2021.06.009

Deep Priority Local Aggregated Hashing

LONG Xianzhong 1,†, CHENG Cheng 1,2, LI Yun 1,2
(1. School of Computer Science & Technology, Nanjing University of Posts and Telecommunications, Nanjing 210023, China; 2. Key Laboratory of Jiangsu Big Data Security and Intelligent Processing, Nanjing 210023, China)

CLC number: TP391.4    Document code: A

Abstract: Existing deep supervised hashing methods cannot effectively utilize the extracted convolution features, and they also ignore the role that the distribution of similarity information between data pairs plays in the hash network, resulting in insufficient discrimination between the learned hash codes. To solve this problem, a novel deep supervised hashing method called Deep Priority Local Aggregated Hashing (DPLAH) is proposed in this paper. DPLAH embeds the vector of locally aggregated descriptors (VLAD) into the hash network to improve the network's ability to express data of the same class, and it reduces the impact of similarity-distribution skew on the hash network by imposing different weights on the data pairs. The DPLAH experiments are carried out with the Pytorch deep learning framework: the convolution features output by a Resnet18 network are aggregated with a NetVLAD layer, and hash codes are learned from the aggregated features.
The image retrieval experiments on the CIFAR-10 and NUS-WIDE datasets show that the mean average precision (MAP) of DPLAH is 11 percentage points higher than the best results of non-deep hash learning algorithms using hand-crafted features and convolutional neural network features, and 2 percentage points higher than that of the asymmetric deep supervised hashing method.

Key words: deep hash learning; convolutional neural network; image retrieval; vector of locally aggregated descriptors (VLAD)

Received: 2020-04-26. Foundation items: National Natural Science Foundation of China (61906098, 61772284); National Key Research and Development Program of China (2018YFB1003702). About the first author: LONG Xianzhong (1985-), male, from Xinyang, Henan; lecturer at Nanjing University of Posts and Telecommunications, Ph.D. in engineering, master's supervisor. † Corresponding author, E-mail: *************.cn

With the continuous development of information retrieval technology, people can now easily obtain the data they are interested in from the Internet; at the same time, however, the development of information technology has led to explosive growth in data scale. Faced with massive data and very large data sets, retrieval based on nearest neighbor search (NN) [1] can no longer provide satisfactory retrieval quality within an acceptable retrieval time. Approximate nearest neighbor search (ANN) [2] has therefore become increasingly popular in recent years: instead of always returning the most similar item, it searches for several items that are likely to be similar, improving retrieval efficiency at the cost of an acceptable loss of precision.

As a widely used ANN technique, hashing [3] converts data into compact binary codes (hash codes) while ensuring that similar data pairs are mapped to similar codes. Representing the original data by hash codes dramatically reduces storage and query costs, which makes retrieval over large-scale data feasible; hashing has therefore attracted more and more attention. Current hashing methods fall into two categories, data-independent and data-dependent, distinguished by whether the hash functions are defined with the help of training data. Locality Sensitive Hashing (LSH) [4], the representative data-independent method, uses random projections that are independent of the training data as hash functions. In contrast, the hash functions of data-dependent hashing must be learned from training data, so data-dependent hashing is also called hashing learning; it usually performs better. In recent years, research on hashing has focused mainly on hashing learning.

Depending on whether labels are used during learning, hashing-learning methods can be further divided into supervised and unsupervised. Typical unsupervised methods include Spectral Hashing (SH) [5], Iterative Quantization (ITQ) [6], Discrete Graph Hashing (DGH) [7], and Ordinal Embedding Hashing (OEH) [8]. Unsupervised methods learn hash functions from unlabeled data only, mapping the input data into hash codes. Supervised methods, in contrast, learn hash functions with the help of supervision; because they exploit labeled data, supervised methods are usually more accurate than unsupervised ones. This paper focuses on supervised hashing learning.

Traditional supervised hashing methods include Supervised Hashing with Kernels (KSH) [9], Latent Factor Hashing (LFH) [10], Fast Supervised Hashing (FastH) [11], and Supervised Discrete Hashing (SDH) [12]. With the development of deep learning [13], features extracted by neural networks have gradually replaced hand-crafted features, driving progress in deep supervised hashing. Representative deep supervised hashing methods include Convolutional Neural Networks Hashing (CNNH) [14], Deep Semantic Ranking Based Hashing (DSRH) [15], Deep Pairwise-Supervised Hashing (DPSH) [16], Deep Supervised Discrete Hashing (DSDH) [17], and Deep Priority Hashing (DPH) [18]. By integrating feature learning and hash-code learning (or hash-function learning) into a single end-to-end network, deep supervised hashing can significantly outperform non-deep supervised hashing.

So far, most existing deep hashing methods adopt a symmetric strategy to learn the hash codes of the query data and of the database, together with the deep hash functions. In contrast, Asymmetric Deep Supervised Hashing (ADSH) [19] treats the query data and the whole database asymmetrically, which removes the heavy training cost of the symmetric approach: the neural network that learns the hash functions is trained on the query data only, while the hash codes of the entire database are obtained directly by optimization. The model in this paper also adopts ADSH's asymmetric training strategy. However, existing asymmetric deep supervised hashing methods do not take into account the effect of the similarity distribution between data on the hash network. The likely consequence is that data pairs whose similarity is easy to preserve in Hamming space are trained better and better, whereas pairs whose similarity is hard to preserve in Hamming space improve little from training. In addition, most existing deep supervised hashing methods do not make full and effective use of the extracted convolution features in the hash network.

This paper proposes a new deep supervised hashing method called Deep Priority Local Aggregated Hashing (DPLAH). The contributions of DPLAH are threefold: 1) DPLAH handles the query data and the database data asymmetrically, and the network gives priority to learning the difficult data pairs between the query data and the database data, which mitigates the impact of the skewed similarity distribution on the hash network. 2) DPLAH designs a new deep hashing network; specifically, it integrates locally aggregated representations into the hash network, which improves the network's ability to express data of the same class, and it also takes into account the effectiveness of locally aggregated representations for classification tasks. 3) Experimental results on two large data sets show that DPLAH performs well in practice.

1 Related Work

This section introduces hashing learning [3], NetVLAD [20], and Focal Loss [21]. DPLAH uses NetVLAD and Focal Loss, respectively, to improve the hash network's ability to express data of the same class and to reduce the impact of the skewed similarity distribution between data on the hash network.
1.1 Hashing learning
The task of hashing learning [3] is to learn hash-code representations of the query data and the database data such that the neighborhood relations between the original data are consistent with the neighborhood relations between their hash codes. Concretely, a machine learning method maps every data point to a binary code of the form {0,1}^r (r is the code length): data points that are dissimilar in the original space are mapped to dissimilar codes (large Hamming distance), while similar points are mapped to similar codes (small Hamming distance). For ease of computation, most hashing methods learn codes of the form {-1,1}^r, because the inner product of two {-1,1}^r codes equals the code length minus twice their Hamming distance, and {-1,1}^r codes are easily converted to {0,1}^r codes. Figure 1 illustrates hashing learning: a high-dimensional feature vector extracted from each image represents the original image, and the hash function h maps each image to an 8-bit code so that an originally similar pair (tiger 1 and tiger 2 in the figure) has codes with a Hamming distance as small as possible, while an originally dissimilar pair (the elephant and tiger 1) has codes with a Hamming distance as large as possible, e.g. h(elephant) = 10001010, h(tiger 1) = 01100001, h(tiger 2) = 01100101. (Fig. 1 Hashing learning diagram)

1.2 NetVLAD
NetVLAD was proposed to solve end-to-end place recognition [20], where place recognition is treated as an instance retrieval task. It embeds the traditional vector of locally aggregated descriptors (VLAD) [22] structure into a CNN, yielding a new VLAD layer. NetVLAD can easily be used in any CNN architecture and optimized by back-propagation; it effectively improves the representation of images of the same class and improves classification performance. NetVLAD encodes in two steps: a convolutional neural network extracts the convolution features of an image, and the NetVLAD layer then aggregates those features. Figure 2 shows the NetVLAD layer. In the feature-extraction stage, NetVLAD crops the convolution features at the last convolution layer and regards them as a dense descriptor extractor: the output of the last convolution layer is an H x W x D map, which can be viewed as a set of D-dimensional features extracted at H x W spatial positions. This approach has shown good results in instance retrieval and texture recognition tasks [23]. (Fig. 2 NetVLAD layer diagram [20])

In the feature-aggregation stage, a new pooling layer, called the NetVLAD layer, aggregates the cropped CNN features. The aggregation is

    V(j, k) = \sum_{i=1}^{N} a_k(x_i) (x_i(j) - c_k(j))                                  (1)

where x_i(j) and c_k(j) are the j-th components of the i-th feature and of the k-th cluster center, respectively, and a_k(x_i) is the weight of feature x_i with respect to the k-th visual word. The inputs of the NetVLAD aggregation are the N D-dimensional convolution features obtained by cropping and the K cluster centers. VLAD uses hard assignment, i.e., each feature is associated only with its nearest cluster center; this causes a large quantization error and, when embedded in a convolutional neural network, cannot be updated by back-propagation. NetVLAD therefore uses soft assignment:

    a_k(x_i) = exp(-\alpha ||x_i - c_k||^2) / \sum_{k'} exp(-\alpha ||x_i - c_{k'}||^2)    (2)

If \alpha \to +\infty, then a_k(x_i) is 1 for the closest cluster center and 0 otherwise. a_k(x_i) can further be rewritten as

    a_k(x_i) = exp(w_k^T x_i + b_k) / \sum_{k'} exp(w_{k'}^T x_i + b_{k'})                 (3)

where w_k = 2\alpha c_k and b_k = -\alpha ||c_k||^2. The final NetVLAD aggregation can then be written as

    V(j, k) = \sum_{i=1}^{N} [exp(w_k^T x_i + b_k) / \sum_{k'} exp(w_{k'}^T x_i + b_{k'})] (x_i(j) - c_k(j))    (4)

1.3 Focal Loss
Object detection methods generally fall into two types, one-stage and two-stage detection, and two-stage detection usually performs better. Lin et al. [21] showed that the extreme imbalance between foreground and background is what keeps one-stage detection from being satisfactory: although each easily classified background example has a small loss, background occupies such a large fraction of the image that these examples still contribute a large share of the total loss, so training converges to a poor result. Lin et al. [21] proposed Focal Loss to address this problem; Figure 3 illustrates it. When cross entropy is used as the classification loss in object detection, each easy sample has a small loss, but because of the data imbalance the summed loss of the many easy samples overwhelms the loss of the hard samples, so the hard samples are never trained effectively. Focal Loss is essentially a weighting scheme: the weight is derived from the probability p of correct classification, and the strength of the weighting can be adjusted with \gamma. For asymmetric deep hashing, we want the data pairs whose similarity is hard to preserve in Hamming space to be trained with priority; concretely, weights are imposed on the overall DPLAH training loss so that the training loss of pairs whose similarity is hard to preserve in Hamming space is relatively increased. Deep hashing learning, however, is not a classification task, so the weight cannot be designed from the probability of correct classification as in Focal Loss. Since the goal of hashing learning is to learn similarity-preserving hash codes, this paper instead designs the weight from the similarity between the hash codes of a data pair; the concrete form of the weight is given in the model section. (Fig. 3 Focal Loss diagram [21]; horizontal axis: probability of correct classification)

2 Deep Priority Local Aggregated Hashing
2.1 Basic definitions
The DPLAH model adopts an asymmetric network design. Q = {q_i}_{i=1}^{n} denotes the n query images and X = {x_i}_{i=1}^{m} denotes the m images in the database; the labels of the query images and of the database images are Z = {z_i}_{i=1}^{n} and Y = {y_i}_{i=1}^{m}, respectively, with z_i = [z_{i1}, ..., z_{ic}]^T, i = 1, ..., n, where c is the number of classes; if query image q_i belongs to class j (j = 1, ..., c), then z_{ij} = 1, otherwise z_{ij} = 0. Using the label information, a similarity matrix S \in {-1, 1}^{n x m} over image pairs can be constructed: s_{ij} = 1 means query image q_i and database image x_j are semantically similar, and s_{ij} = -1 means they are semantically dissimilar. The goal of deep hashing is to learn the hash codes of the query images and of the database images: the query codes are denoted U \in {-1, 1}^{n x r} and the database codes B \in {-1, 1}^{m x r}, where r is the code length. DPLAH uses a pretrained Resnet18 network [25] for feature extraction. Figure 4 shows the structure of the DPLAH network: the NetVLAD layer aggregates the convolution features extracted by Resnet18, and the hash codes are obtained from the VLAD encoding. Because VLAD encoding is widely used in classification tasks, the output of the NetVLAD layer is also fed to a classification branch (a soft-max over the image labels), so that the label information supervises how the NetVLAD layer uses the convolution features. In fact, any CNN model can extract image features, so the choice of network for feature learning is not the focus of this paper. (Fig. 4 DPLAH structure: convolution layers, VLAD core, soft-max over image labels, and the hash codes of the database images)

2.2 Objective function of the DPLAH model
To learn hash codes that preserve the similarity between query images and database images, a common approach [9] is to exploit the relation among the similarity supervision S \in {-1, 1}^{n x m}, the code length r, the query-image hash codes u_i, and the database-image hash codes b_j, i.e., to minimize the L2 loss between the similarity supervision and the inner products of the code pairs. To account for the skew of the similarity distribution, this paper adjusts the loss between query images and database images by imposing weights:

    min_{U, B} J = \sum_{i=1}^{n} \sum_{j=1}^{m} (1 - w_{ij}) (u_i^T b_j - r s_{ij})^2
    s.t.  U \in \{-1, 1\}^{n \times r},  B \in \{-1, 1\}^{m \times r},  W \in \mathbb{R}^{n \times m}        (5)

Inspired by Focal Loss, we want the deep hashing network to give priority to training the image pairs whose similarity is not easy to preserve. Focal Loss, however, adjusts the loss using the classification result, so the weighting has to be redesigned here. Since the purpose of hashing learning is to preserve the similarity relations of the images in Hamming space, this paper designs the weight from the cosine similarity of the hash codes.
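To make the NetVLAD aggregation of Section 1.2 concrete, here is a minimal NumPy sketch of the soft-assignment aggregation in Eqs. (1)-(4) for a single image. The feature-map shape, number of clusters, and the final normalization step are illustrative assumptions rather than details taken from the paper, which uses a PyTorch NetVLAD layer on Resnet18 features.

```python
import numpy as np

def netvlad_aggregate(features, centers, alpha=10.0):
    """Soft-assignment VLAD aggregation in the spirit of Eqs. (2)-(4).

    features: (N, D) local convolution features of one image (N = H*W positions).
    centers:  (K, D) cluster centers (visual words).
    Returns a flattened K*D descriptor.
    """
    # Eq. (2): a_k(x_i) = softmax over k of -alpha * ||x_i - c_k||^2
    sq_dist = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)  # (N, K)
    logits = -alpha * sq_dist
    logits -= logits.max(axis=1, keepdims=True)              # numerical stability
    assign = np.exp(logits)
    assign /= assign.sum(axis=1, keepdims=True)               # (N, K) soft assignments

    # Eq. (4): V(j, k) = sum_i a_k(x_i) * (x_i(j) - c_k(j))
    residuals = features[:, None, :] - centers[None, :, :]    # (N, K, D)
    vlad = (assign[:, :, None] * residuals).sum(axis=0)       # (K, D)

    # Common NetVLAD post-processing (assumed): intra-normalize, then L2-normalize.
    vlad /= np.linalg.norm(vlad, axis=1, keepdims=True) + 1e-12
    vlad = vlad.flatten()
    return vlad / (np.linalg.norm(vlad) + 1e-12)

feats = np.random.default_rng(0).standard_normal((49, 512))   # e.g. a 7x7 feature map
cents = np.random.default_rng(1).standard_normal((8, 512))    # K = 8 visual words
print(netvlad_aggregate(feats, cents).shape)                   # (4096,) = K * D
```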
A Summary of Basic Locality Sensitive Hashing Algorithms

In computer vision, hash functions are a commonly used tool for compressing multimedia data such as images, video, and audio into short binary strings, so that they can be retrieved, matched, and classified quickly. Locality sensitive hashing is a family of hashing methods designed specifically for high-dimensional dense data; it can efficiently handle all kinds of image features, text word vectors, and similar data, greatly increasing computation speed while reducing storage space. This article briefly summarizes the basic locality sensitive hashing algorithms and their applications.
1. Locality Sensitive Hashing (LSH)

LSH is one of the classic algorithms in the field of locality sensitive hashing. It uses random projections and hashing tricks to map similar points together, so that the distances between them can be computed approximately in a low-dimensional space. Commonly used LSH variants include Random Projection LSH, MinHash LSH, and Entropy LSH. Random Projection LSH is the most widely used: it randomly projects data vectors into a low-dimensional space and groups them there with hash functions, so that the cosine distance between two data vectors can be computed quickly. MinHash LSH uses the Min-Hashing technique, taking random permutations of the data vector as hash functions, in order to find similar documents efficiently. Entropy LSH sets up its hash functions according to the information entropy of the data, so as to handle high-dimensional, sparse data better.
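To make the random-projection idea above concrete, here is a minimal, hypothetical Python sketch of signed random-projection LSH for cosine similarity; the class name, bit count, and bucket structure are illustrative choices, not taken from any particular library.

```python
import numpy as np
from collections import defaultdict

class RandomProjectionLSH:
    """Signed random-projection LSH: vectors with high cosine similarity tend to share bucket keys."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        # Each row is one random hyperplane; the sign pattern of the projections is the hash.
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = defaultdict(list)

    def _key(self, x):
        bits = (self.planes @ x) >= 0           # one bit per hyperplane
        return bits.tobytes()                   # hashable bucket key

    def index(self, idx, x):
        self.buckets[self._key(x)].append(idx)  # store the item id in its bucket

    def query(self, x):
        return self.buckets.get(self._key(x), [])  # candidates sharing all n_bits

# Toy usage: a near-duplicate of an indexed vector usually lands in the same bucket.
lsh = RandomProjectionLSH(dim=128, n_bits=16)
data = np.random.default_rng(1).standard_normal((1000, 128))
for i, v in enumerate(data):
    lsh.index(i, v)
print(lsh.query(data[42] + 0.01 * np.random.standard_normal(128)))
```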
2. Product Quantization (PQ)

PQ is a method that combines hashing and quantization: a high-dimensional vector is split into several sub-vectors, each sub-vector is quantized independently (for example with K-means clustering) to obtain a set of cluster centers, and the result is then encoded as a short binary code. The combinations of these codes make up the final hash table. Commonly used PQ algorithms include Product Quantization based Hashing, Fast Similarity Search in Large Databases using PQ-Hamming Distance, and Robust Product Quantization. PQ is widely applied in content retrieval for images, video, and audio, as well as in machine learning tasks.
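As an illustration of the product-quantization idea described above, the following hypothetical NumPy sketch splits each vector into sub-vectors, learns a small codebook per sub-space with a naive k-means, and encodes a vector as a tuple of codebook indices; a real system would use an optimized library, and the sub-space and codebook sizes here are arbitrary.

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Very small k-means, just enough to build a per-subspace codebook."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = x[assign == j].mean(axis=0)
    return centers

def pq_train(data, n_sub=4, k=16):
    """Split each vector into n_sub sub-vectors and learn one codebook per sub-space."""
    subs = np.split(data, n_sub, axis=1)
    return [kmeans(s, k) for s in subs]

def pq_encode(codebooks, x):
    """Encode one vector as n_sub small integers (codebook indices)."""
    subs = np.split(x, len(codebooks))
    return tuple(int(np.linalg.norm(cb - s, axis=1).argmin())
                 for cb, s in zip(codebooks, subs))

data = np.random.default_rng(1).standard_normal((500, 64)).astype(np.float32)
books = pq_train(data, n_sub=4, k=16)
print(pq_encode(books, data[0]))   # e.g. (3, 11, 7, 0): 4 indices instead of 64 floats
```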
3. Locality-Sensitive Binary Code (LSBC)

LSBC is a binary coding technique suited to high-dimensional dense data. It differs from LSH in that the data are quantized into binary codes rather than placed into a hash table.
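To contrast binary-code methods like the one just described with the bucket table in the earlier LSH sketch, here is a hypothetical sketch that turns vectors directly into binary codes by sign-thresholding random projections and then ranks database items by Hamming distance; the bit width and ranking scheme are illustrative assumptions.

```python
import numpy as np

def binary_codes(data, n_bits=32, seed=0):
    """Sign-threshold random projections: each vector becomes an n_bits binary code."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((data.shape[1], n_bits))
    return (data @ planes >= 0).astype(np.uint8)            # (n, n_bits) in {0, 1}

def hamming_rank(query_code, codes, top_k=5):
    """Rank database codes by Hamming distance to the query code."""
    dists = np.count_nonzero(codes != query_code, axis=1)
    return np.argsort(dists)[:top_k], np.sort(dists)[:top_k]

data = np.random.default_rng(1).standard_normal((1000, 128)).astype(np.float32)
codes = binary_codes(data)
idx, d = hamming_rank(binary_codes(data[:1])[0], codes)
print(idx, d)   # the query itself comes back first with distance 0
```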
Bitcoin Mnemonic Words

A Bitcoin mnemonic is an important tool for recovering a Bitcoin wallet. It takes the form of a string of words, each of which stands for a unique concept. Together these concepts cover all the information of the Bitcoin wallet, including public keys, private keys, and transaction records. The generation of a Bitcoin mnemonic is based on mathematical algorithms, but in this article we avoid mathematical or computational formulas when explaining how it works; instead, we describe the process in simple, easy-to-understand language.
A Bitcoin mnemonic is generated from a source of entropy. The entropy source can be a random number generator or a piece of random text. From this entropy source the system generates a series of random numbers and then converts them into a set of words. The set usually contains 12 or 24 words, and each word is drawn from a predefined word list. The word list exists to prevent the generated mnemonic from being ambiguous or misleading; it has therefore been carefully screened and designed so that every word is unique and easy to remember and to write. The generation process is fully deterministic: with the same entropy source and the same algorithm, the same mnemonic will always be produced. This is what makes wallet recovery convenient: as long as the correct mnemonic is entered, the system can restore all of the wallet's information.
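To illustrate the deterministic entropy-to-words mapping described above, here is a simplified, hypothetical Python sketch; the 16-word list is made up, and the real standard (BIP-39) uses a 2048-word list plus a checksum, so this code is for illustration only and must not be used for an actual wallet.

```python
import hashlib

# Toy word list (real mnemonics draw from a standardized 2048-word list).
WORDS = ["apple", "brave", "cloud", "dance", "eagle", "frost", "grape", "honey",
         "ivory", "jolly", "koala", "lemon", "mango", "noble", "ocean", "piano"]

def entropy_to_mnemonic(entropy: bytes, n_words: int = 12) -> list[str]:
    """Deterministically map entropy to words: same entropy in, same words out."""
    digest = hashlib.sha256(entropy).digest()        # whiten the raw entropy
    # Take 4 bits per word, enough to index the 16-word toy list.
    indices = []
    for byte in digest[: (n_words + 1) // 2]:
        indices.extend([byte >> 4, byte & 0x0F])
    return [WORDS[i] for i in indices[:n_words]]

seed = b"some random entropy from a RNG"
print(entropy_to_mnemonic(seed))                                # 12 words for this entropy
print(entropy_to_mnemonic(seed) == entropy_to_mnemonic(seed))   # True: deterministic
```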
The importance of the Bitcoin mnemonic is self-evident. It is the only way to recover a wallet and the key to keeping personal assets safe. We should therefore store our mnemonics carefully and avoid disclosing them to others or losing them. In short, the Bitcoin mnemonic is an important tool that helps us recover a Bitcoin wallet and protects the safety of personal assets. Understanding how mnemonics are generated and how to use them is essential for Bitcoin users. We hope this introduction helps readers understand and use Bitcoin mnemonics better.
To appear in ACM Transactions on Computer SystemsA General Framework for Prefetch Scheduling in Linked Data Structures and its Application to Multi-Chain PrefetchingSEUNGRYUL CHOIUniversity of Maryland,College ParkNICHOLAS KOHOUTEVI Technology LLC.SUMIT PAMNANIAdvanced Micro Devices,Inc.andDONGKEUN KIM and DONALD YEUNGUniversity of Maryland,College ParkThis research was supported in part by NSF Computer Systems Architecture grant CCR-0093110, and in part by NSF CAREER Award CCR-0000988.Author’s address:Seungryul Choi,University of Maryland,Department of Computer Science, College Park,MD20742.Permission to make digital/hard copy of all or part of this material without fee for personal or classroom use provided that the copies are not made or distributed for profit or commercial advantage,the ACM copyright/server notice,the title of the publication,and its date appear,and notice is given that copying is by permission of the ACM,Inc.To copy otherwise,to republish, to post on servers,or to redistribute to lists requires prior specific permission and/or a fee.c 2001ACM1529-3785/2001/0700-0001$5.00ACM Transactions on Computer Systems2·Seungryul Choi et al.Pointer-chasing applications tend to traverse composite data structures consisting of multiple independent pointer chains.While the traversal of any single pointer chain leads to the seri-alization of memory operations,the traversal of independent pointer chains provides a source of memory parallelism.This article investigates exploiting such inter-chain memory parallelism for the purpose of memory latency tolerance,using a technique called multi-chain prefetching. Previous works[Roth et al.1998;Roth and Sohi1999]have proposed prefetching simple pointer-based structures in a multi-chain fashion.However,our work enables multi-chain prefetching for arbitrary data structures composed of lists,trees,and arrays.This article makesfive contributions in the context of multi-chain prefetching.First,we intro-duce a framework for compactly describing LDS traversals,providing the data layout and traversal code work information necessary for prefetching.Second,we present an off-line scheduling algo-rithm for computing a prefetch schedule from the LDS descriptors that overlaps serialized cache misses across separate pointer-chain traversals.Our analysis focuses on static traversals.We also propose using speculation to identify independent pointer chains in dynamic traversals.Third,we propose a hardware prefetch engine that traverses pointer-based data structures and overlaps mul-tiple pointer chains according to the computed prefetch schedule.Fourth,we present a compiler that extracts LDS descriptors via static analysis of the application source code,thus automating multi-chain prefetching.Finally,we conduct an experimental evaluation of compiler-instrumented multi-chain prefetching and compare it against jump pointer prefetching[Luk and Mowry1996], prefetch arrays[Karlsson et al.2000],and predictor-directed stream buffers(PSB)[Sherwood et al. 
2000].Our results show compiler-instrumented multi-chain prefetching improves execution time by 40%across six pointer-chasing kernels from the Olden benchmark suite[Rogers et al.1995],and by3%across four pared to jump pointer prefetching and prefetch arrays,multi-chain prefetching achieves34%and11%higher performance for the selected Olden and SPECint2000benchmarks,pared to PSB,multi-chain prefetching achieves 27%higher performance for the selected Olden benchmarks,but PSB outperforms multi-chain prefetching by0.2%for the selected SPECint2000benchmarks.An ideal PSB with an infinite markov predictor achieves comparable performance to multi-chain prefetching,coming within6% across all benchmarks.Finally,speculation can enable multi-chain prefetching for some dynamic traversal codes,but our technique loses its effectiveness when the pointer-chain traversal order is highly dynamic.Categories and Subject Descriptors:B.8.2[Performance and Reliability]:Performance Anal-ysis and Design Aids;B.3.2[Memory Structures]:Design Styles—Cache Memories;C.0[Gen-eral]:Modeling of computer architecture;System Architectures; C.4[Performance of Sys-tems]:Design Studies;D.3.4[Programming Languages]:Processors—CompilersGeneral Terms:Design,Experimentation,PerformanceAdditional Key Words and Phrases:Data Prefetching,Memory parallelism,Pointer Chasing CodeA General Framework for Prefetch Scheduling·3performance platforms.The use of LDSs will likely have a negative impact on memory performance, making many non-numeric applications severely memory-bound on future systems. LDSs can be very large owing to their dynamic heap construction.Consequently, the working sets of codes that use LDSs can easily grow too large tofit in the processor’s cache.In addition,logically adjacent nodes in an LDS may not reside physically close in memory.As a result,traversal of an LDS may lack spatial locality,and thus may not benefit from large cache blocks.The sparse memory access nature of LDS traversal also reduces the effective size of the cache,further increasing cache misses.In the past,researchers have used prefetching to address the performance bot-tlenecks of memory-bound applications.Several techniques have been proposed, including software prefetching techniques[Callahan et al.1991;Klaiber and Levy 1991;Mowry1998;Mowry and Gupta1991],hardware prefetching techniques[Chen and Baer1995;Fu et al.1992;Jouppi1990;Palacharla and Kessler1994],or hybrid techniques[Chen1995;cker Chiueh1994;Temam1996].While such conventional prefetching techniques are highly effective for applications that employ regular data structures(e.g.arrays),these techniques are far less successful for non-numeric ap-plications that make heavy use of LDSs due to memory serialization effects known as the pointer chasing problem.The memory operations performed for array traver-sal can issue in parallel because individual array elements can be referenced inde-pendently.In contrast,the memory operations performed for LDS traversal must dereference a series of pointers,a purely sequential operation.The lack of memory parallelism during LDS traversal prevents conventional prefetching techniques from overlapping cache misses suffered along a pointer chain.Recently,researchers have begun investigating prefetching techniques designed for LDS traversals.These new LDS prefetching techniques address the pointer-chasing problem using several different approaches.Stateless techniques[Luk and Mowry1996;Mehrotra and Harrison1996;Roth et al.1998;Yang and Lebeck2000] prefetch pointer 
chains sequentially using only the natural pointers belonging to the LDS.Existing stateless techniques do not exploit any memory parallelism at all,or they exploit only limited amounts of memory parallelism.Consequently,they lose their effectiveness when the LDS traversal code contains insufficient work to hide the serialized memory latency[Luk and Mowry1996].A second approach[Karlsson et al.2000;Luk and Mowry1996;Roth and Sohi1999],which we call jump pointer techniques,inserts additional pointers into the LDS to connect non-consecutive link elements.These“jump pointers”allow prefetch instructions to name link elements further down the pointer chain without sequentially traversing the intermediate links,thus creating memory parallelism along a single chain of pointers.Because they create memory parallelism using jump pointers,jump pointer techniques tolerate pointer-chasing cache misses even when the traversal loops contain insufficient work to hide the serialized memory latency.However,jump pointer techniques cannot commence prefetching until the jump pointers have been installed.Furthermore,the jump pointer installation code increases execution time,and the jump pointers themselves contribute additional cache misses.ACM Transactions on Computer Systems4·Seungryul Choi et al.Finally,a third approach consists of prediction-based techniques[Joseph and Grunwald1997;Sherwood et al.2000;Stoutchinin et al.2001].These techniques perform prefetching by predicting the cache-miss address stream,for example us-ing hardware predictors[Joseph and Grunwald1997;Sherwood et al.2000].Early hardware predictors were capable of following striding streams only,but more re-cently,correlation[Charney and Reeves1995]and markov[Joseph and Grunwald 1997]predictors have been proposed that can follow arbitrary streams,thus en-abling prefetching for LDS traversals.Because predictors need not traverse program data structures to generate the prefetch addresses,they avoid the pointer-chasing problem altogether.In addition,for hardware prediction,the techniques are com-pletely transparent since they require no support from the programmer or compiler. 
However,prediction-based techniques lose their effectiveness when the cache-miss address stream is unpredictable.This article investigates exploiting the natural memory parallelism that exists between independent serialized pointer-chasing traversals,or inter-chain memory parallelism.Our approach,called multi-chain prefetching,issues prefetches along a single chain of pointers sequentially,but aggressively pursues multiple independent pointer chains simultaneously whenever possible.Due to its aggressive exploitation of inter-chain memory parallelism,multi-chain prefetching can tolerate serialized memory latency even when LDS traversal loops have very little work;hence,it can achieve higher performance than previous stateless techniques.Furthermore,multi-chain prefetching does not use jump pointers.As a result,it does not suffer the overheads associated with creating and managing jump pointer state.Andfinally, multi-chain prefetching is an execution-based technique,so it is effective even for programs that exhibit unpredictable cache-miss address streams.The idea of overlapping chained prefetches,which is fundamental to multi-chain prefetching,is not new:both Cooperative Chain Jumping[Roth and Sohi1999]and Dependence-Based Prefetching[Roth et al.1998]already demonstrate that simple “backbone and rib”structures can be prefetched in a multi-chain fashion.However, our work pushes this basic idea to its logical limit,enabling multi-chain prefetching for arbitrary data structures(our approach can exploit inter-chain memory paral-lelism for any data structure composed of lists,trees,and arrays).Furthermore, previous chained prefetching techniques issue prefetches in a greedy fashion.In con-trast,our work provides a formal and systematic method for scheduling prefetches that controls the timing of chained prefetches.By controlling prefetch arrival, multi-chain prefetching can reduce both early and late prefetches which degrade performance compared to previous chained prefetching techniques.In this article,we build upon our original work in multi-chain prefetching[Kohout et al.2001],and make the following contributions:(1)We present an LDS descriptor framework for specifying static LDS traversalsin a compact fashion.Our LDS descriptors contain data layout information and traversal code work information necessary for prefetching.(2)We develop an off-line algorithm for computing an exact prefetch schedulefrom the LDS descriptors that overlaps serialized cache misses across separate pointer-chain traversals.Our algorithm handles static LDS traversals involving either loops or recursion.Furthermore,our algorithm computes a schedule even ACM Transactions on Computer SystemsA General Framework for Prefetch Scheduling·5when the extent of dynamic data structures is unknown.To handle dynamic LDS traversals,we propose using speculation.However,our technique cannot handle codes in which the pointer-chain traversals are highly dynamic.(3)We present the design of a programmable prefetch engine that performs LDStraversal outside of the main CPU,and prefetches the LDS data using our LDS descriptors and the prefetch schedule computed by our scheduling algorithm.We also perform a detailed analysis of the hardware cost of our prefetch engine.(4)We introduce algorithms for extracting LDS descriptors from application sourcecode via static analysis,and implement them in a prototype compiler using the SUIF framework[Hall et al.1996].Our prototype compiler is capable of ex-tracting all the program-level information 
necessary for multi-chain prefetching fully automatically.(5)Finally,we conduct an experimental evaluation of multi-chain prefetching us-ing several pointer-intensive applications.Our evaluation compares compiler-instrumented multi-chain prefetching against jump pointer prefetching[Luk and Mowry1996;Roth and Sohi1999]and prefetch arrays[Karlsson et al.2000], two jump pointer techniques,as well as predictor-directed stream buffers[Sher-wood et al.2000],an all-hardware prediction-based technique.We also inves-tigate the impact of early prefetch arrival on prefetching performance,and we compare compiler-and manually-instrumented multi-chain prefetching to eval-uate the quality of the instrumentation generated by our compiler.In addition, we characterize the sensitivity of our technique to varying hardware stly,we undertake a preliminary evaluation of speculative multi-chain prefetching to demonstrate its potential in enabling multi-chain prefetching for dynamic LDS traversals.The rest of this article is organized as follows.Section2further explains the essence of multi-chain prefetching.Then,Section3introduces our LDS descriptor framework.Next,Section4describes our scheduling algorithm,Section5discusses our prefetch engine,and Section6presents our compiler for automating multi-chain prefetching.After presenting all our algorithms and techniques,Sections7and8 then report on our experimental methodology and evaluation,respectively.Finally, Section9discusses related work,and Section10concludes the article.2.MULTI-CHAIN PREFETCHINGThis section provides an overview of our multi-chain prefetching technique.Sec-tion2.1presents the idea of exploiting inter-chain memory parallelism.Then, Section2.2discusses the identification of independent pointer chain traversals. 
2.1Exploiting Inter-Chain Memory ParallelismThe multi-chain prefetching technique augments a commodity microprocessor with a programmable hardware prefetch engine.During an LDS computation,the prefetch engine performs its own traversal of the LDS in front of the processor,thus prefetching the LDS data.The prefetch engine,however,is capable of traversing multiple pointer chains simultaneously when permitted by the application.Conse-quently,the prefetch engine can tolerate serialized memory latency by overlapping cache misses across independent pointer-chain traversals.ACM Transactions on Computer Systems6·Seungryul Choi et al.<compute>ptr = A[i];ptr = ptr->next;while (ptr) {for (i=0; i < N; i++) {a)b)}<compute>ptr = ptr->next;while (ptr) {}}PD = 2INIT(ID ll);stall stall stallINIT(ID aol);stall stallFig.1.Traversing pointer chains using a prefetch engine.a).Traversal of a single linked list.b).Traversal of an array of lists data structure.To illustrate the idea of exploiting inter-chain memory parallelism,wefirst de-scribe how our prefetch engine traverses a single chain of pointers.Figure1a shows a loop that traverses a linked list of length three.Each loop iteration,denoted by a hashed box,contains w1cycles of work.Before entering the loop,the processor ex-ecutes a prefetch directive,INIT(ID ll),instructing the prefetch engine to initiate traversal of the linked list identified by the ID ll label.If all three link nodes suffer an l-cycle cache miss,the linked list traversal requires3l cycles since the link nodes must be fetched sequentially.Assuming l>w1,the loop alone contains insufficient work to hide the serialized memory latency.As a result,the processor stalls for 3l−2w1cycles.To hide these stalls,the prefetch engine would have to initiate its linked list traversal3l−2w1cycles before the processor traversal.For this reason, we call this delay the pre-traversal time(P T).While a single pointer chain traversal does not provide much opportunity for latency tolerance,pointer chasing computations typically traverse many pointer chains,each of which is often independent.To illustrate how our prefetch engine exploits such independent pointer-chasing traversals,Figure1b shows a doubly nested loop that traverses an array of lists data structure.The outer loop,denoted by a shaded box with w2cycles of work,traverses an array that extracts a head pointer for the inner loop.The inner loop is identical to the loop in Figure1a.In Figure1b,the processor again executes a prefetch directive,INIT(ID aol), causing the prefetch engine to initiate a traversal of the array of lists data structure identified by the ID aol label.As in Figure1a,thefirst linked list is traversed sequentially,and the processor stalls since there is insufficient work to hide the serialized cache misses.However,the prefetch engine then initiates the traversal of subsequent linked lists in a pipelined fashion.If the prefetch engine starts a new traversal every w2cycles,then each linked list traversal will initiate the required P T cycles in advance,thus hiding the excess serialized memory latency across multiple outer loop iterations.The number of outer loop iterations required to overlap each linked list traversal is called the prefetch distance(P D).Notice when P D>1, ACM Transactions on Computer SystemsA General Framework for Prefetch Scheduling·7 the traversals of separate chains overlap,exposing inter-chain memory parallelism despite the fact that each chain is fetched serially.2.2Finding Independent Pointer-Chain TraversalsIn 
order to exploit inter-chain memory parallelism,it is necessary to identify mul-tiple independent pointer chains so that our prefetch engine can traverse them in parallel and overlap their cache misses,as illustrated in Figure1.An important question is whether such independent pointer-chain traversals can be easily identi-fied.Many applications perform traversals of linked data structures in which the or-der of link node traversal does not depend on runtime data.We call these static traversals.The traversal order of link nodes in a static traversal can be determined a priori via analysis of the code,thus identifying the independent pointer-chain traversals at compile time.In this paper,we present an LDS descriptor frame-work that compactly expresses the LDS traversal order for static traversals.The descriptors in our framework also contain the data layout information used by our prefetch engine to generate the sequence of load and prefetch addresses necessary to perform the LDS traversal at runtime.While compile-time analysis of the code can identify independent pointer chains for static traversals,the same approach does not work for dynamic traversals.In dynamic traversals,the order of pointer-chain traversal is determined at runtime. Consequently,the simultaneous prefetching of independent pointer chains is limited since the chains to prefetch are not known until the traversal order is computed, which may be too late to enable inter-chain overlap.For dynamic traversals,it may be possible to speculate the order of pointer-chain traversal if the order is pre-dictable.In this paper,we focus on static LDS ter in Section8.7,we illustrate the potential for predicting pointer-chain traversal order in dynamic LDS traversals by extending our basic multi-chain prefetching technique with specula-tion.3.LDS DESCRIPTOR FRAMEWORKHaving provided an overview of multi-chain prefetching,we now explore the al-gorithms and hardware underlying its implementation.We begin by introducing a general framework for compactly representing static LDS traversals,which we call the LDS descriptor framework.This framework allows compilers(and pro-grammers)to compactly specify two types of information related to LDS traversal: data structure layout,and traversal code work.The former captures memory refer-ence dependences that occur in an LDS traversal,thus identifying pointer-chasing chains,while the latter quantifies the amount of computation performed as an LDS is traversed.After presenting the LDS descriptor framework,subsequent sections of this article will show how the information provided by the framework is used to perform multi-chain prefetching(Sections4and5),and how the LDS descriptors themselves can be extracted by a compiler(Section6).3.1Data Structure Layout InformationData structure layout is specified using two descriptors,one for arrays and one for linked lists.Figure2presents each descriptor along with a traversal code exampleACM Transactions on Computer Systems8·Seungryul Choi etal.a).b).Bfor (i = 0 ; i < N ; i++) {... 
= data[i].value;}for (ptr = root ; ptr != NULL; ) { ptr = ptr->next;}Fig.2.Two LDS descriptors used to specify data layout information.a).Array descriptor.b).Linked list descriptor.Each descriptor appears inside a box,and is accompanied by a traversal code example and an illustration of the data structure.and an illustration of the traversed data structure.The array descriptor,shown in Figure 2a,contains three parameters:base (B ),length (L ),and stride (S ).These parameters specify the base address of the array,the number of array elements traversed by the application code,and the stride between consecutive memory ref-erences,respectively.The array descriptor specifies the memory address stream emitted by the processor during a constant-stride array traversal.Figure 2b illus-trates the linked list descriptor which contains three parameters similar to the array descriptor.For the linked list descriptor,the B parameter specifies the root pointer of the list,the L parameter specifies the number of link elements traversed by the application code,and the ∗S parameter specifies the offset from each link element address where the “next”pointer is located.The linked list descriptor specifies the memory address stream emitted by the processor during a linked list traversal.To specify the layout of complex data structures,our framework permits descrip-tor composition.Descriptor composition is represented as a directed graph whose nodes are array or linked list descriptors,and whose edges denote address generation dependences.Two types of composition are allowed.The first type of composition is nested composition .In nested composition,each address generated by an outer descriptor forms the B parameter for multiple instantiations of a dependent inner descriptor.An offset parameter,O ,is specified in place of the inner descriptor’s B parameter to shift its base address by a constant offset.Such nested descriptors cap-ture the memory reference streams of nested loops that traverse multi-dimensional data structures.Figure 3presents several nested descriptors,showing a traversal code example and an illustration of the traversed multi-dimensional data structure along with each nested descriptor.Figure 3a shows the traversal of an array of structures,each structure itself containing an array.The code example’s outer loop traverses the array “node,”ac-cessing the field “value”from each traversed structure,and the inner loop traverses ACM Transactions on Computer SystemsA General Framework for Prefetch Scheduling·9a).b).c).for (i = 0 ; i < L 0 ; i++) {... = node[i].value;for (j = 0 ; j < L 1 ; j++) {... = node[i].data[j];}}for (i = 0 ; i < L 0 ; i++) {down = node[i].pointer;for (j = 0 ; j < L 1 ; j++) {... = down->data[j];}}node for (i = 0 ; i < L 0 ; i++) {for (j = 0 ; j < L 1 ; j++) {... = node[i].data[j];}down = node[i].pointer;for (j = 0 ; j < L 2 ; j++) {... 
= down->data[j];}}node Fig.3.Nested descriptor composition.a).Nesting without indirection.b).Nesting with indirection.c).Nesting multiple descriptors.Each descriptor composition appears inside a box,and is accompanied by a traversal code example and an illustration of the composite data structure.each embedded array “data.”The outer and inner array descriptors,(B,L 0,S 0)and (O 1,L 1,S 1),represent the address streams produced by the outer and inner loop traversals,respectively.(In the inner descriptor,“O 1”specifies the offset of each inner array from the top of each structure).Figure 3b illustrates another form of descriptor nesting in which indirection is used between nested descriptors.The data structure in Figure 3b is similar to the one in Figure 3a,except the in-ner arrays are allocated separately,and a field from each outer array structure,“node[i].pointer,”points to a structure containing the inner array.Hence,as shown in the code example from Figure 3b,traversal of the inner array requires indirect-ing through the outer array’s pointer to compute the inner array’s base address.In our framework,this indirection is denoted by placing a “*”in front of the inner descriptor.Figure 3c,our last nested descriptor example,illustrates the nestingACM Transactions on Computer Systems10·Seungryul Choi et al.main() { foo(root, depth_limit);}foo(node, depth) { depth = depth - 1; if (depth == 0 || node == NULL)return;foo(node->child[0], depth);foo(node->child[1], depth);foo(node->child[2], depth);}Fig.4.Recursive descriptor composition.The recursive descriptor appears inside a box,and is accompanied by a traversal code example and an illustration of the tree data structure.of multiple inner descriptors underneath a single outer descriptor to represent the address stream produced by nested distributed loops.The code example from Fig-ure 3c shows the two inner loops from Figures 3a-b nested in a distributed fashion inside a common outer loop.In our framework,each one of the multiple inner array descriptors represents the address stream for a single distributed loop,with the order of address generation proceeding from the leftmost to rightmost inner descriptor.It is important to note that while all the descriptors in Figure 3show two nesting levels only,our framework allows an arbitrary nesting depth.This permits describ-ing higher-dimensional LDS traversals,for example loop nests with >2nesting depth.Also,our framework can handle non-recurrent loads using “singleton”de-scriptors.For example,a pointer to a structure may be dereferenced multiple times to access different fields in the structure.Each dereference is a single non-recurrent load.We create a separate descriptor for each non-recurrent load,nest it under-neath its recurrent load’s descriptor,and assign an appropriate offset value,O ,and length value,L =1.In addition to nested composition,our framework also permits recursive compo-sition .Recursively composed descriptors describe depth-first tree traversals.They are similar to nested descriptors,except the dependence edge flows backwards.Since recursive composition introduces cycles into the descriptor graph,our frame-work requires each backwards dependence edge to be annotated with the depth of recursion,D ,to bound the size of the data structure.Figure 4shows a simple recursive descriptor in which the backwards dependence edge originates from and terminates to a single array descriptor.The “L”parameter in the descriptor spec-ifies the fanout of the tree.In our example,L =3,so the 
traversed data structure is a tertiary tree,as shown in Figure 4.Notice the array descriptor has both B and O parameters–B provides the base address for the first instance of the descriptor,while O provides the offset for all recursively nested instances.In Figures 2and 4,we assume the L parameter for linked lists and the D parame-ter for trees are known a priori,which is generally not ter in Section 4.3,we discuss how our framework handles these unknown descriptor parameters.In addi-ACM Transactions on Computer Systems。
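To illustrate the LDS descriptor framework described above, here is a hypothetical Python sketch that models array descriptors with nested (optionally indirect) composition, in the spirit of Figures 2 and 3, and walks the descriptor graph to emit the address stream a prefetch engine would issue. The class names, field names, and toy memory map are my own illustration, not the paper's encoding; linked-list and recursive descriptors would be handled analogously.

```python
from dataclasses import dataclass, field

@dataclass
class ArrayDescriptor:
    base: int                 # B: base address (acts as the offset O when nested)
    length: int               # L: number of elements traversed
    stride: int               # S: bytes between consecutive references
    indirect: bool = False    # '*' composition: load a pointer before descending
    children: list = field(default_factory=list)

def walk(desc, base, memory, out):
    """Emit the address stream described by a (possibly nested) array descriptor."""
    for i in range(desc.length):
        addr = base + desc.base + i * desc.stride
        out.append(addr)                              # address the engine would prefetch
        for child in desc.children:
            # With indirection, the child's base is the pointer stored at addr.
            child_base = memory.get(addr, 0) if child.indirect else addr
            walk(child, child_base, memory, out)

# Toy example (like Figure 3b): an outer array of 3 structures (stride 16), each holding
# a pointer at offset 0 to a separately allocated inner array of 4 ints (stride 4).
inner = ArrayDescriptor(base=0, length=4, stride=4, indirect=True)
outer = ArrayDescriptor(base=0x1000, length=3, stride=16, children=[inner])

memory = {0x1000: 0x8000, 0x1010: 0x9000, 0x1020: 0xA000}   # outer[i].pointer values
addresses = []
walk(outer, 0, memory, addresses)
print([hex(a) for a in addresses])
```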
Cultivating the Ability to Discern Effective Information in the Digital Age

In the era of information overload, the ability to discern effective information has become increasingly vital. With the advent of digital technology, we are constantly bombarded with a deluge of data, making it crucial to develop the skill of separating wheat from chaff. This essay explores the significance of cultivating such a skill and suggests practical strategies to enhance it.

Firstly, the importance of distinguishing effective information cannot be overstated. In the digital age, information is available at our fingertips, but not all of it is accurate or useful. Fake news, misleading advertisements, and unverified rumors proliferate online, making it difficult to separate truth from falsehood. Failing to distinguish effective information can lead to misinformed decisions, wasted time, and even potential harm. Therefore, it is imperative to cultivate the ability to identify reliable and valuable information.

To enhance this skill, several strategies can be employed. The first is developing a critical mindset. This involves questioning the source of information, assessing its credibility, and cross-checking facts. For instance, when encountering a news article, one should ask questions such as: "Is the source reliable?" "Are the claims backed by evidence?" "Are there conflicting reports?" By challenging the information presented, one can better assess its validity.

Secondly, leveraging digital tools can greatly assist in separating effective information. There are numerous fact-checking websites and browser extensions that can help verify the accuracy of claims. Social media platforms also provide features that allow users to fact-check shared content. By utilizing these tools, individuals can make informed decisions based on reliable data.

Moreover, staying updated with the latest trends and developments in the field of information technology is essential. Keeping abreast of new technologies, algorithms, and data analysis techniques can help individuals stay ahead of the curve and enhance their ability to discern effective information.

In conclusion, the ability to distinguish effective information is paramount in the digital age. By developing a critical mindset, leveraging digital tools, and staying updated with the latest trends, individuals can enhance their information literacy and make informed decisions. In a world where information is king, it is imperative to have the skills necessary to separate truth from falsehood and harness the power of knowledge.
Hash Search: Explanation of Terms

Hash search is a fast retrieval technique: by computing an item's hash value, it can quickly determine whether the item exists in a data table. The principle is that every element in the data set is first mapped to a number by a hash function; this number is used to locate an entry in the lookup table, and the information stored there is then used to retrieve the data being searched for. Hash search can be used to check whether a piece of data exists in a set, and also to look up various kinds of information related to a piece of data.
Hash function:
A hash function is a function that maps original data to a hash value; it is commonly used to implement hashing operations, that is, to find the value that a piece of original data maps to. Under a hash function, identical original data always map to the same hash value, which saves lookup time and improves lookup efficiency.

Bucket:
A bucket is a technique used in hash search: all elements that map to the same hash value are placed in the same bucket, which speeds up lookup. During a hash search, the element's hash value is first computed with the hash function, and the search then proceeds within the corresponding bucket until the element is found.
Hash table:
A hash table is a data structure for storing data. It consists of a fixed-size array whose entries store data as key-value pairs; the key is a number or a string, and the value can be data of any type. A hash table makes it easy to find the value for a given key quickly, so hash tables can be used to implement fast lookup.
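To tie the three terms above together, here is a minimal Python sketch of a bucketed (separate-chaining) hash table, assuming Python's built-in hash() as the hash function: the hash function picks the bucket, and a lookup only scans that one bucket.

```python
class BucketHashTable:
    """Toy separate-chaining hash table: hash function -> bucket -> key/value pairs."""

    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]   # same key, same bucket

    def put(self, key, value):
        bucket = self._bucket(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value            # overwrite an existing key
                return
        bucket.append([key, value])

    def get(self, key, default=None):
        for k, v in self._bucket(key):     # only this one bucket is scanned
            if k == key:
                return v
        return default

table = BucketHashTable()
table.put("tiger", 1)
table.put("elephant", 2)
print(table.get("tiger"))      # 1
print(table.get("lion"))       # None: not in the table
```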
1. Which of the following is NOT a type of cloud service model?
   A. Software as a Service (SaaS)
   B. Platform as a Service (PaaS)
   C. Infrastructure as a Service (IaaS)
   D. Data as a Service (DaaS)
   (Answer: D)
2. In computer networking, what does the acronym "FTP" stand for?
   A. File Transfer Protocol
   B. Fast Transfer Protocol
   C. File Tracking Protocol
   D. Full Transfer Power
   (Answer: A)
3. Which programming language is primarily used for web development and is known for its dynamic typing and use of JavaScript?
   A. Python
   B. Java
   C. JavaScript
   D. C#
   (Answer: C)
4. Which of the following is a popular open-source relational database management system?
   A. Oracle
   B. MySQL
   C. Microsoft SQL Server
   D. IBM Db2
   (Answer: B)
5. What is the primary function of a URL (Uniform Resource Locator)?
   A. To provide a unique identifier for web pages and other resources on the internet
   B. To encrypt data sent over the internet
   C. To control the appearance of web pages
   D. To store user preferences for websites
   (Answer: A)
6. Which of the following HTML tags is used to create a hyperlink to another webpage?
   A. <link>
   B. <a>
   C. <href>
   D. <nav>
   (Answer: B)
7. In the context of computer security, what does the term "phishing" typically refer to?
   A. A type of malware that replicates itself
   B. The act of attempting to acquire sensitive information through deceptive means, often via email
   C. An attack that exploits vulnerabilities in software to gain unauthorized access
   D. The process of encrypting data to protect it
   (Answer: B)
8. Which of the following is a web development framework primarily associated with the Ruby programming language?
   A. Django
   B. Rails
   C. Laravel
   D. Spring
   (Answer: B)
An Introduction to Commonly Used PubMed Properties and Filters

In medical research, PubMed is an important literature database. To find the literature we need more quickly, we can use the Property and Filter features that PubMed provides. This article introduces the commonly used PubMed Properties and Filters, with examples, to help you make better use of these tools.
1. Properties
In PubMed, a Property is an attribute or piece of information contained in a record. You can use different Properties to narrow the search scope and make the results more precise. Three commonly used Properties are:
- Title: adding the prefix "ti" before a term restricts the results to records whose titles contain the keyword. For example, searching "ti:cancer" returns only records with the keyword "cancer" in the title.
- Abstract: adding the prefix "ab" before a term restricts the results to records whose abstracts contain the keyword. For example, searching "ab:treatment method" returns only records with the keyword "treatment method" in the abstract.
- Author: adding the prefix "au" before a term restricts the results to literature by a specific author. For example, searching "au:Zhang San" returns only literature written by the author "Zhang San".
2. Filters
Filters help us screen out the literature that meets our needs more quickly, which makes searching more efficient. Two commonly used Filters are:
- Publication Date: selecting a specific time range restricts the results to literature published within that period. For example, choosing the range "2010 to present" returns literature published in 2010 or later.
- Article Types: selecting specific article types restricts the results to literature of a particular category.
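As a programmatic illustration of combining a field restriction with a date filter, the sketch below queries NCBI's public E-utilities esearch endpoint from Python. The endpoint, parameters, and PubMed field-tag syntax ([Title], [dp]) shown here are assumptions about the public API rather than anything stated above, so check them against the current NCBI documentation before relying on them.

```python
import requests

# NCBI E-utilities search endpoint (assumed public API; check current NCBI docs).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term, retmax=10):
    """Return a list of PubMed IDs matching the query term."""
    params = {
        "db": "pubmed",        # search the PubMed database
        "term": term,          # the query, including any field tags
        "retmode": "json",
        "retmax": retmax,
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Title keyword combined with a publication-date range filter.
query = "cancer[Title] AND 2010:3000[dp]"
print(search_pubmed(query))
```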
Relativistic correction to J=c production at hadron collidersYing Fan,*Yan-Qing Ma,†and Kuang-Ta Chao‡Department of Physics and State Key Laboratory of Nuclear Physics and Technology,Peking University,Beijing100871,China(Received20April2009;published10June2009)Relativistic corrections to the color-singlet J=c hadroproduction at the Tevatron and LHC arecalculated up to Oðv2Þin nonrelativistic QCD(NRQCD).The short-distance coefficients are obtainedby matching full QCD with NRQCD results for the subprocess gþg!J=cþg.The long-distancematrix elements are extracted from observed J=c hadronic and leptonic decay widths up to Oðv2Þ.Usingthe CTEQ6parton distribution functions,we calculate the leading-order production cross sections andrelativistic corrections for the process pþ"pðpÞ!J=cþX at the Tevatron and LHC.Wefind that theenhancement of Oðv2Þrelativistic corrections to the cross sections over a wide range of large transversemomentum p t is negligible,only at a level of about1%.This tiny effect is due to the smallness of thecorrection to short-distance coefficients and the suppression from long-distance matrix elements.Theseresults indicate that relativistic corrections cannot help to resolve the large discrepancy between leading-order prediction and experimental data for J=c production at the Tevatron.DOI:10.1103/PhysRevD.79.114009PACS numbers:12.38.Bx,12.39.St,13.85.Ni,14.40.GxI.INTRODUCTION Nonrelativistic QCD(NRQCD)[1]is an effectivefield theory to describe production and decay of heavy quark-onium.In this formalism,inclusive production cross sec-tions and decay widths of charmonium and bottomonium can be factored into short-distance coefficients,indicating the creation or annihilation of a heavy quark pair,and long-distance matrix elements,representing the evolvement of a free quark pair into a bound state.The short-distance part can be calculated perturbatively in powers of coupling constant s,while the nonperturbative matrix elements, which are scaled as v,the typical velocity of heavy quark or antiquark in the meson rest frame,can be estimated by nonperturbative methods or models,or extracted from experimental data.One important aspect of NRQCD is the introduction of the color-octet mechanism,which allows the intermediate heavy quark pair to exist in a color-octet state at short distances and evolve into the color-singlet bound state at long distances.This mechanism has been applied success-fully to absorb the infrared divergences in P-wave[1–3] and D-wave[4,5]decay widths of heavy quarkonia.In Ref.[6],the color-octet mechanism was introduced to account for the J=c production at the Tevatron,and the theoretical prediction of production ratefits well with experimental data.However,the color-octet gluon frag-mentation predicts that the J=c is transversely polarized at large transverse momentum p t,which is in contradiction with the experimental data[7].(For a review of these issues,one can refer to Refs.[8–10]).Moreover,in Refs.[11,12]it was pointed out that the color-octet long-distance matrix elements of J=c production may be much smaller than previously expected,and accordingly this mayreduce the color-octet contributions to J=c production at the Tevatron.In the past a couple of years,in order to resolve the largediscrepancy between the color-singlet leading-order(LO)predictions and experimental measurements[13–15]ofJ=c production at the Tevatron,the next-to-leading-order (NLO)QCD corrections to this process have been per-formed,and a large enhancement of an order of magnitudefor the 
cross section at large p t is found[16,17].But thisstill cannot make up the large discrepancy between thecolor-singlet contribution and data.Similarly,the observeddouble charmonium production cross sections in eþeÀannihilation at B factories[18,19]also significantly differfrom LO theoretical predictions[20].Much work has beendone and it seems that those discrepancies could be re-solved by including NLO QCD corrections[21–24]andrelativistic corrections[25,26].One may wonder if therelativistic correction could also play a role to some extentin resolving the long standing puzzle of J=c production at the Tevatron.In this paper we will estimate the effect of relativisticcorrections to the color-singlet J=c production based on NRQCD.The relativistic effects are characterized by therelative velocity v with which the heavy quark or antiquarkmoves in the quarkonium rest frame.According to thevelocity scaling rules of NRQCD[27],the matrix elementsof operators can be organized into a hierarchy in the smallparameter v.We calculate the short-distance part pertur-batively up to Oðv2Þ.In order to avoid model dependence in determining the long-distance matrix elements,we ex-tract the matrix elements of up to dimension-8four fermionoperators from observed decay rates of J=c[28].Wefind that the relativistic effect on the color-singlet J=c produc-tion at both the Tevatron and LHC is tiny and negligible,*ying.physics.fan@†@‡ktchao@PHYSICAL REVIEW D79,114009(2009)and relativistic corrections cannot offer much help to re-solve the puzzle associated with charmonium production at the Tevatron,and other mechanisms should be investigated to clarify the problem.The rest of the paper is organized as follows.In Sec.II, the NRQCD factorization formalism and matching condi-tion of full QCD and NRQCD effectivefield theory at long distances are described briefly,and then detailed calcula-tions are given,including the perturbative calculation of the short-distance coefficient,the long-distance matrix elements extracted from experimental data,and the parton-level differential cross section convolution with the parton distribution functions(PDF).In Sec.III,nu-merical results of differential cross sections over transverse momentum p t at the Tevatron and LHC are given and discussions are made for the enhancement effects of rela-tivistic corrections.Finally the summary of this work is presented.II.PRODUCTION CROSS SECTION IN NRQCDFACTORIZATIONAccording to NRQCD factorization[1],the inclusive cross section for the hadroproduction of J=c can be writ-ten asd dt ðgþg!J=cþgÞ¼XnF nm d nÀ4ch0j O J=c n j0i:(1)The short-distance coefficients F n describe the production of a heavy quark pair Q"Q from the gluons,which come from the initial state hadrons,and are usually expressed in kinematic invariants.m c is the mass of charm quark.Thelong-distance matrix elements h0j O J=cn j0i with mass di-mension d n describe the evolution of Q"Q into J=c.The subscript n represents the configuration in which the c"c pair can be for the J=c Fock state expansion,and it isusually denoted as n¼2Sþ1L½1;8J .Here,S,L,and J standfor spin,orbital,and total angular momentum of the heavy quarkonium,respectively.Superscript1or8means the color-singlet or color-octet state.For the color-singlet3S1c"c production,there are only two matrix elements contributing up to Oðv2Þ:the leading-order term h0j O J=cð3S½1 1Þj0i and the relativistic correction term h0j P J=cð3S½1 1Þj0i.Therefore the differential cross section takes the following form:d dt 
ðgþg!J=cþgÞ¼Fð3S½11Þm2ch0j O J=cð3S½1 1Þj0iþGð3S½11Þm4ch0j P J=cð3S½1 1Þj0iþOðv4Þ;(2)and the explicit expressions of the matrix elements are[1]h0j O J=cð3S½1 1Þj0i¼h0j y i cða y c a cÞc y i j0i;h0j P J=cð3S½1 1Þj0i¼12y i cða y c a cÞc y iÂÀi2D$2þH:c:;(3)where c annihilates a heavy quark, creates a heavy antiquark,a y c and a c are operators creating and annihilat-ing J=c in thefinal state,and D$¼~DÀD.In order to determine the short-distance coefficients Fð3S½1 1Þand Gð3S½1 1Þ,the matching condition of full QCD and NRQCD is needed:ddtðgþg!J=cþgÞj pert QCD¼Fð3S½11Þm2ch0j O J=cð3S½1 1Þj0iþGð3S½11Þm4ch0j P J=cð3S½1 1Þj0ij pert NRQCD:(4)The differential cross section for the production of char-monium J=c on the left-hand side of Eq.(4)can be calculated in perturbative QCD.On the right-hand side the long-distance matrix elements are extracted from ex-perimental data.Quantities on both sides of the equation are expanded at leading order of s and next-to-leading order of v2.Then the short-distance coefficients Fð3S½1 1Þand Gð3S½1 1Þcan be obtained by comparing the terms with powers of v2on both sides.A.Perturbative short-distance coefficientsWe now present the calculation of relativistic correction to the process gþg!J=cþg.In order to determine the Oðv2Þcontribution in Eq.(2),the differential cross section on the left-hand side of Eq.(4)or equivalently the QCD amplitude should be expanded up to Oðv2Þ.We use FeynArts[29]to generate Feynman diagrams and am-plitudes,FeynCalc[30]to handle amplitudes,and FORTRAN to evaluate the phase space integrations.A typi-cal Feynman diagram for the process is shown in Fig.1.FIG.1.Typical Feynman diagram for3S½1 1c"c hadroproduction at LO.YING FAN,YAN-QING MA,AND KUANG-TA CHAO PHYSICAL REVIEW D79,114009(2009)The momenta of quark and antiquark in the lab frame are [26,31,32]:12Pþq ¼L ð12P rþq r Þ;12PÀq ¼L ð12P rÀq r Þ;(5)where L is the Lorenz boost matrix from the rest frame ofthe J=c to the frame in which it moves with four momen-tum P .P r ¼ð2E q ;0Þ,E q ¼ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffim 2c þj ~qj 2p ,and 2q r ¼2ð0;~qÞis the relative momentum between heavy quark and antiquark in the J=c rest frame.The differential cross section on the left-hand side of Eq.(4)isddtðg þg !J=c þg Þj pert QCD ¼12"X j M ðg þg !J=c þg Þj 2h 0j O J=c ð3S ½1 1Þj 0i ;(6)where h 0j O J=c ð3S ½11Þj 0i is the matrix element evaluated at tree level,and the summation/average of the color and polarization degrees of freedom for the final/initial statehas been implied by the symbol "P.The amplitude for the color-singlet process g ðp 1Þþg ðp 2Þ!J=c ðp 3¼P Þþg ðp 4ÞisM ðg þg !J=c þg Þ¼ffiffiffiffiffiffi1E q s Tr ½C ½1 Åð1ÞM am ;(7)where M am denotes the parton-level amplitude amputatedof the heavy quark spinors.The factor ffiffiffiffi1E q q comes from the normalization of the composite state j 3S ½11i[5].Here the covariant projection operator method [33,34]is adopted.For a color-singlet state,the color projector C ½1 ¼ ijffiffiffiffifficp .The covariant spin-triplet projector Åð1Þin (7)is defined byÅð1Þ¼X s "s v ðs Þ"u ð"s Þ 12;s ;12;"s j 1;S z ;(8)with its explicit formÅð1Þ¼1ffiffiffi2p ðE q þm c Þ P 2Àq Àm cÂP À2E q 4E qP 2þq þm c;(9)where the superscript (1)denotes the spin-triplet state andis the polarization vector of the spin 1meson.The Lorentz-invariant Mandelstam variables are defined bys ¼ðp 1þp 2Þ2¼ðp 3þp 4Þ2;t ¼ðp 1Àp 3Þ2¼ðp 2Àp 4Þ2;u ¼ðp 1Àp 4Þ2¼ðp 2Àp 3Þ2;(10)and they satisfys þt þu ¼P 2¼4E 2q ¼4ðm 2c þj ~qj 2Þ:(11)Furthermore,the covariant spinors are 
normalized relativ-istically as "uu¼À"vv ¼2m c .Let M be short for the amplitude M ðg þg !J=c þg Þin Eq.(7),and it can be expanded in powers of v orequivalently j ~qj .That is M ¼ M¼ M ð0Þþ1q q @2M @q @q q ¼0þO ðq 4Þ;(12)where high order terms in four momentum q have been omitted.Terms of odd powers in q vanish because the heavy quark pair is in an S-wave configuration.Note that the polarization vector also depends on q ,but it only has even powers of four momentum q ,and their expressions may be found e.g.in the appendix of Ref.[35].Therefore expansion on q 2of can be carried out after amplitude squaring.The following substitute is adopted:q q¼13j ~q j 2 Àg þP P P 213j ~q j 2Á :(13)This substitute should be understood to hold in the inte-gration over relative momentum ~qand in the S-wave case.Here,j ~qj 2can be identified as [33,36]j ~qj 2¼jh 0j y ðÀi2D $Þ2c j c ð3S ½1 1Þijjh 0j y c j c ð3S 1Þij ¼h 0j P J=c ð3S ½1 1Þj 0i h 0j O J=c ð3S ½1 1Þj 0i½1þO ðv 4Þ :(14)Then the amplitude squared defined in Eq.(6)up to O ðv 2Þis Xj M j 2¼Mð0ÞM Ãð0ÞXÃþ16j ~q j 2 Á @2M @q @q q ¼0M Ãð0Þþ Á @2M Ã@q @q q ¼0M ð0Þ X à þO ðv 4Þ:(15)The heavy quark and antiquark are taken to be on shell,which means that P Áq ¼0,and then gauge invariance is maintained.The polarization sum in Eq.(15)isXà ¼Àg þP PP 2:(16)RELATIVISTIC CORRECTION TO J=c PRODUCTION ...PHYSICAL REVIEW D 79,114009(2009)It is clearly seen that the polarization sum above only contains even order powers of four momentum q,therefore it will make a contribution to the relativistic correction at Oðv2Þin thefirst term on the right-hand side of Eq.(15)when the contraction over indices and is carried out. However,since the second term on the right-hand side of Eq.(15)already has a term proportional to q2,i.e.j~q j2,the four momentum q can be set to zero throughout the index contraction.Then we haveXj M j2¼AþB j~q j2þOðv4Þ;(17)where A and B are independent of j~q j.By comparing Eqs.(4)and(6),we obtain the short-distance coefficients shown explicitly below.The leading-order one isFð3S½1 1Þm2c ¼12111c1A¼12111c1ð4 sÞ35120m c½16ðs2þtsþt2Þm4cÀ4ð2s3þ3ts2þ3t2sþ2t3Þm2cþðs2þtsþt2Þ2 =½9ðsÀ4m2cÞ2ðtÀ4m2cÞ2ðsþtÞ2 ;(18)and the relativistic correction term isGð3S½1 1Þm4c ¼116 s21641412N c13B¼116 s21641412N c13ð4 sÞ3ðÀ2560Þ½2048ð3s2þ2tsþ3t2Þm10cÀ256ð5s3À2ts2À2t2sþ5t3Þm8cÀ320ð3s4þ10ts3þ10t2s2þ10t3sþ3t4Þm6cþ16ð21s5þ63ts4þ88t2s3þ88t3s2þ63t4sþ21t5Þm4cÀ4ð7s6þ18ts5þ23t2s4þ28t3s3þ23t4s2þ18t5sþ7t6Þm2cÀstðsþtÞðs2þtsþt2Þ2 =½27m cð4m2cÀsÞ3ð4m2cÀtÞ3ðsþtÞ3 :(19)Each of the factors has its own origin:1=16 s2isproportional to the inverse square of the Møller’s invariantflux factor,1=64and1=4are the color average andspin average of initial two gluons,respectively,1=2N ccomes from the color-singlet long-distance matrix elementdefinition in Eq.(3)with N c¼3,1=3is the spin average for total spin J¼1states,andð4 sÞ3quantifies the coupling in the QCD interaction vertices.Further-more the variable u has been expressed in terms of sand t through Eq.(11).To verify our results,wefindthat those in Ref.[31]discussed for J=c photoproduction are consistent with ours under replacementð4 Þe2c!ð4 sÞ,and the result in Ref.[37]agrees with ours at leading order after performing the polarization summation.B.Nonperturbative long-distance matrix elements The long-distance matrix elements may be determined by potential model[25,36]or lattice calculations[38],and by phenomenological extraction from experimental data. 
B. Nonperturbative long-distance matrix elements

The long-distance matrix elements may be determined by potential models [25,36] or lattice calculations [38], and by phenomenological extraction from experimental data. Here we first extract the decay matrix elements from experimental data. Up to NLO QCD and v² relativistic corrections, the decay widths of the color-singlet J/ψ to light hadrons (LH) and to e⁺e⁻ can be expressed analytically as follows [33]:

Γ[J/ψ → LH] = [F_LH(³S₁^[1])/m_c²] ⟨H|O^{J/ψ}(³S₁^[1])|H⟩ + [G_LH(³S₁^[1])/m_c⁴] ⟨H|P^{J/ψ}(³S₁^[1])|H⟩,
Γ[J/ψ → e⁺e⁻] = [F_{e⁺e⁻}(³S₁^[1])/m_c²] ⟨H|O^{J/ψ}(³S₁^[1])|H⟩ + [G_{e⁺e⁻}(³S₁^[1])/m_c⁴] ⟨H|P^{J/ψ}(³S₁^[1])|H⟩,   (20)

where the short-distance coefficients are [33]

F_LH(³S₁^[1]) = [(N_c²−1)(N_c²−4)(π²−9)/(18 N_c³)] α_s³(2m_c) [1 + (−9.46 C_F + 4.13 C_A − 1.161 N_f) α_s/π] + 2π e_Q² α² (Σ_{i=1}^{N_f} Q_i²) [1 − 13 C_F α_s/(4π)],
G_LH(³S₁^[1]) = −[5(19π²−132)/729] α_s³(2m_c),
F_{e⁺e⁻}(³S₁^[1]) = (2π e_Q² α²/3) [1 − 4 C_F α_s(2m_c)/π],
G_{e⁺e⁻}(³S₁^[1]) = −8π e_Q² α²/9.   (21)

Then the production matrix elements can be related to the decay matrix elements through the vacuum saturation approximation,

⟨0|O^{J/ψ}(³S₁^[1])|0⟩ = (2J+1) ⟨H|O^{J/ψ}(³S₁^[1])|H⟩ [1 + O(v⁴)] = 3 ⟨H|O^{J/ψ}(³S₁^[1])|H⟩ [1 + O(v⁴)].   (22)

Combining the above equations with the experimental data in [28], i.e., Γ[J/ψ → LH] = 81.7 keV and Γ[J/ψ → e⁺e⁻] = 5.55 keV, and excluding the NLO QCD radiative corrections in (21), we get the solutions accurate at leading order in α_s,

⟨0|O^{J/ψ}(³S₁^[1])|0⟩ = 0.868 GeV³,   ⟨0|P^{J/ψ}(³S₁^[1])|0⟩ = 0.190 GeV⁵,   (23)

and the enhanced matrix elements accurate up to NLO in α_s can be obtained by including the NLO QCD radiative corrections in (21),

⟨0|O^{J/ψ}(³S₁^[1])|0⟩ = 1.64 GeV³,   ⟨0|P^{J/ψ}(³S₁^[1])|0⟩ = 0.320 GeV⁵.   (24)

The strong coupling constant evaluated at the charm-quark mass scale is α_s(2m_c) = 0.250 for m_c = 1.5 GeV. The other input parameters are chosen as follows: the QCD scale parameter Λ_QCD = 392 MeV; the number of quarks with mass less than the energy scale m_c is N_f = 3; the color factors are C_F = 4/3 and C_A = 3; the electric charge of the charm quark is e_Q = 2/3; Q_i are the electric charges of the light quarks; and the fine-structure constant is α = 1/137. Our numerical values for the production matrix elements ⟨0|O^{J/ψ}(³S₁^[1])|0⟩ and ⟨0|P^{J/ψ}(³S₁^[1])|0⟩ are accurate up to NLO in v², with uncertainties due to experimental errors and higher-order corrections.
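Equations (20)-(23) amount to a 2×2 linear system: given the two measured widths and the leading-order short-distance coefficients, one solves for the two decay matrix elements and multiplies by (2J+1) = 3, as in Eq. (22), to obtain the production matrix elements. The sketch below illustrates that step. The explicit leading-order coefficients are written out following Eq. (21) with the NLO terms dropped and should be treated as an assumption of this sketch; the exact output depends on the input values used and can be compared with Eq. (23).

```python
import numpy as np

# Inputs as quoted in the text (LO extraction, no NLO alpha_s factors)
m_c, alpha_s, alpha = 1.5, 0.250, 1.0 / 137.0
N_c = 3.0
e_Q, sumQ2 = 2.0 / 3.0, (2/3)**2 + (1/3)**2 + (1/3)**2      # charm charge; sum over u, d, s

# Leading-order short-distance coefficients, cf. Eq. (21) (treated here as an assumption)
F_LH = (N_c**2 - 1)*(N_c**2 - 4)*(np.pi**2 - 9)/(18*N_c**3) * alpha_s**3 \
       + 2*np.pi*e_Q**2*alpha**2*sumQ2
G_LH = -5*(19*np.pi**2 - 132)/729 * alpha_s**3
F_ee = 2*np.pi*e_Q**2*alpha**2/3
G_ee = -8*np.pi*e_Q**2*alpha**2/9

# Measured widths, converted from keV to GeV
Gamma_LH, Gamma_ee = 81.7e-6, 5.55e-6

# Eq. (20): Gamma = F/m_c^2 <H|O|H> + G/m_c^4 <H|P|H>  ->  solve for the decay matrix elements
A = np.array([[F_LH/m_c**2, G_LH/m_c**4],
              [F_ee/m_c**2, G_ee/m_c**4]])
b = np.array([Gamma_LH, Gamma_ee])
O_H, P_H = np.linalg.solve(A, b)

# Eq. (22): production matrix elements are (2J+1) = 3 times the decay ones
print("<0|O|0> =", 3*O_H, "GeV^3")   # compare with Eq. (23)
print("<0|P|0> =", 3*P_H, "GeV^5")
```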
ffiffiffiSp 0 p t12ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiðS;m 2C ;m 2D ÞSs ;j y C j Arcosh S þm 2C Àm 2D2ffiffiffiS p m C t;Àln ffiffiffiS p Àm C t exp ðÀy CÞm D t y Dln ffiffiffiS p Àm C t exp ðy C Þm D t ;(28)where ðx;y;z Þ¼x 2þy 2þz 2À2ðxy þyz þzx Þ.Thedistribution over p t of the differential cross section can be obtained after phase space integration.III.NUMERICAL RESULTS AND ANALYSIS The CTEQ6PDFs [40]are used in our numerical cal-culation.We present the distribution of J=c production differential cross section d =dp t over p t at the Tevatron with ffiffiffiS p ¼1:96TeV and at the LHC with ffiffiffiS p ¼14TeV in Figs.2–5.The solid line represents the distribution at leading order in O ðv 2Þ,and the dotted line describes the relativistic correction at next-to-leading order in O ðv 2Þ(excluding the leading-order result).The long-distance matrix elements are accurate up to leading order in s from Eq.(23)or next-to-leading order in s from Eq.(24).The variable p t is set to be from 5GeV to 30GeV (50GeV)for the Tevatron (LHC),and the distri-butions are depicted in logarithm unit along the vertical axis.All curves decrease rather rapidly as the transverse momentum p t increases,and the leading-order d =dp t behavior is not changed by the relativistic corrections.It can be seen that the ratio of relativistic correction to leading-order term is 1%or so,and less than 2%,which is insignificant and negligible.RELATIVISTIC CORRECTION TO J=c PRODUCTION ...PHYSICAL REVIEW D 79,114009(2009)The tiny effect of relativistic corrections is partly due to the smallness of the short-distance coefficient correction.In fact,the ratio of the NLO short-distance coefficient to the LO one from Eqs.(18)and (19)can be expanded as a series of the small quantity m c ,as compared with ffiffiffis p ,and this series reduces to a fixed small number 16if only the leading-order term is kept,i.e.,G ð3S ½11ÞF ð3S 1Þ!16;as 2mc ffiffiffis p !0;2m cffiffitp !0:(29)Together with the suppression from long-distance matrix elements,the tiny effect of relativistic corrections can be accounted for.Our results for relativistic corrections in theprocess p þ"pðp Þ!J=c þX are similar to that in the J=c photoproduction process discussed in Ref.[31].These results may indicate that the nonrelativistic approxi-mation in NRQCD is good for charmonium production at high energy collisions,and relativistic corrections are not important.This is in contrast to the case of double char-monium production in e þe Àannihilation at B factories,where relativistic corrections may be significant.IV .SUMMARYIn this paper,relativistic corrections to the color-singlet J=c hadroproduction at the Tevatron and LHC are calcu-lated up to O ðv 2Þin the framework of the NRQCD facto-rization approach.The perturbative short-distance coefficients are obtained by matching the full QCDdiffer-FIG.3.The p t distribution of d ðp þ"p !J=c þX Þ=dp t(with enhanced matrix elements)at the Tevatron with ffiffiffiS p ¼1:96TeV .The O ðv 0Þand O ðv 2Þresults are represented by the solid and dotted lines,respectively.FIG.4.The p t distribution of d ðp þp !J=c þX Þ=dp t at the LHC with ffiffiffiS p¼14TeV .The O ðv 0Þand O ðv 2Þresults are represented by the solid and dotted lines,respectively.FIG.5.The p t distribution of d ðp þp !J=c þX Þ=dp t (with enhanced matrix elements)at the LHC with ffiffiffiS p ¼14TeV .The O ðv 0Þand O ðv 2Þresults are represented by the solid and dotted lines,respectively.FIG.2.The p t distribution of d ðp þ"p !J=c 
III. NUMERICAL RESULTS AND ANALYSIS

The CTEQ6 PDFs [40] are used in our numerical calculation. We present the distribution of the J/ψ production differential cross section dσ/dp_t over p_t at the Tevatron with √S = 1.96 TeV and at the LHC with √S = 14 TeV in Figs. 2-5. The solid line represents the distribution at leading order in O(v²), and the dotted line describes the relativistic correction at next-to-leading order in O(v²) (excluding the leading-order result). The long-distance matrix elements are accurate up to leading order in α_s from Eq. (23) or next-to-leading order in α_s from Eq. (24). The variable p_t is set to range from 5 GeV to 30 GeV (50 GeV) for the Tevatron (LHC), and the distributions are depicted on a logarithmic scale along the vertical axis. All curves decrease rather rapidly as the transverse momentum p_t increases, and the leading-order dσ/dp_t behavior is not changed by the relativistic corrections. It can be seen that the ratio of the relativistic correction to the leading-order term is 1% or so, and less than 2%, which is insignificant and negligible.

The tiny effect of the relativistic corrections is partly due to the smallness of the short-distance coefficient correction. In fact, the ratio of the NLO short-distance coefficient to the LO one from Eqs. (18) and (19) can be expanded as a series in the small quantity m_c, as compared with √s, and this series reduces to a fixed small number 1/6 if only the leading-order term is kept, i.e.,

G(³S₁^[1])/F(³S₁^[1]) → 1/6,   as 2m_c/√s → 0, 2m_c/√(−t) → 0.   (29)

Together with the suppression from the long-distance matrix elements, the tiny effect of the relativistic corrections can be accounted for. Our results for relativistic corrections in the process p + p̄(p) → J/ψ + X are similar to those in the J/ψ photoproduction process discussed in Ref. [31]. These results may indicate that the nonrelativistic approximation in NRQCD is good for charmonium production in high-energy collisions, and relativistic corrections are not important. This is in contrast to the case of double charmonium production in e⁺e⁻ annihilation at B factories, where relativistic corrections may be significant.

FIG. 2. The p_t distribution of dσ(p + p̄ → J/ψ + X)/dp_t at the Tevatron with √S = 1.96 TeV. The O(v⁰) and O(v²) results are represented by the solid and dotted lines, respectively.
FIG. 3. The p_t distribution of dσ(p + p̄ → J/ψ + X)/dp_t (with enhanced matrix elements) at the Tevatron with √S = 1.96 TeV. The O(v⁰) and O(v²) results are represented by the solid and dotted lines, respectively.
FIG. 4. The p_t distribution of dσ(p + p → J/ψ + X)/dp_t at the LHC with √S = 14 TeV. The O(v⁰) and O(v²) results are represented by the solid and dotted lines, respectively.
FIG. 5. The p_t distribution of dσ(p + p → J/ψ + X)/dp_t (with enhanced matrix elements) at the LHC with √S = 14 TeV. The O(v⁰) and O(v²) results are represented by the solid and dotted lines, respectively.

IV. SUMMARY

In this paper, relativistic corrections to color-singlet J/ψ hadroproduction at the Tevatron and LHC are calculated up to O(v²) in the framework of the NRQCD factorization approach. The perturbative short-distance coefficients are obtained by matching the full QCD differential cross section with the NRQCD effective field theory calculation for the subprocess g+g → J/ψ+g. The nonperturbative long-distance matrix elements are extracted from experimental data for the J/ψ hadronic and leptonic decay widths up to O(v²), with an approximate relation between the production matrix elements and decay matrix elements. Using the CTEQ6 parton distribution functions, we then calculate the LO production cross sections and relativistic corrections for the process p + p̄(p) → J/ψ + X at the Tevatron and LHC. We find that the O(v²) relativistic corrections to the differential cross sections over a wide range of large transverse momentum p_t are tiny and negligible, only at a level of about 1%. The tiny effect of the relativistic corrections is due to the smallness of the short-distance coefficient correction and the suppression from the long-distance matrix elements. These results may indicate that the nonrelativistic approximation in NRQCD is good for charmonium production at high-energy hadron-hadron collisions, and that relativistic corrections cannot offer much help in resolving the large discrepancy between the leading-order prediction and the experimental data for J/ψ production at the Tevatron. Other mechanisms, such as those suggested in [41-43], may need to be considered, aside from higher-order QCD contributions.

ACKNOWLEDGMENTS

We would like to thank Dr. Ce Meng for reading the manuscript and for helpful discussions. This work was supported by the National Natural Science Foundation of China (No. 10675003, No. 10721063) and the Ministry of Science and Technology of China (2009CB825200).

[1] G. T. Bodwin, E. Braaten, and G. P. Lepage, Phys. Rev. D 51, 1125 (1995); 55, 5853(E) (1997).
[2] Han-Wen Huang and Kuang-Ta Chao, Phys. Rev. D 54, 3065 (1996); 56, 7472(E) (1997); 55, 244 (1997); 54, 6850 (1996); 56, 1821(E) (1997).
[3] A. Petrelli, M. Cacciari, M. Greco, F. Maltoni, and M. L. Mangano, Nucl. Phys. B514, 245 (1998).
[4] Zhi-Guo He, Ying Fan, and Kuang-Ta Chao, Phys. Rev. Lett. 101, 112001 (2008).
[5] Ying Fan, Zhi-Guo He, Yan-Qing Ma, and Kuang-Ta Chao, arXiv:0903.4572.
[6] E. Braaten and S. Fleming, Phys. Rev. Lett. 74, 3327 (1995).
[7] CDF Collaboration, Phys. Rev. Lett. 99, 132001 (2007).
[8] Michael Krämer, Prog. Part. Nucl. Phys. 47, 141 (2001).
[9] N. Brambilla et al., arXiv:hep-ph/0412158.
[10] J. P. Lansberg, Int. J. Mod. Phys. A 21, 3857 (2006).
[11] Yan-Qing Ma, Yu-Jie Zhang, and Kuang-Ta Chao, Phys. Rev. Lett. 102, 162002 (2009).
[12] Bin Gong and Jian-Xiong Wang, Phys. Rev. Lett. 102, 162003 (2009).
[13] F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 69, 3704 (1992).
[14] F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 79, 572 (1997).
[15] F. Abe et al. (CDF Collaboration), Phys. Rev. Lett. 79, 578 (1997).
[16] J. Campbell, F. Maltoni, and F. Tramontano, Phys. Rev. Lett. 98, 252002 (2007).
[17] B. Gong and J. X. Wang, Phys. Rev. Lett. 100, 232001 (2008); Phys. Rev. D 78, 074011 (2008).
[18] K. Abe et al. (BELLE Collaboration), Phys. Rev. Lett. 89, 142001 (2002).
[19] B. Aubert et al. (BABAR Collaboration), Phys. Rev. D 72, 031101 (2005).
[20] E. Braaten and J. Lee, Phys. Rev. D 67, 054007 (2003); 72, 099901(E) (2005); K. Y. Liu, Z. G. He, and K. T. Chao, Phys. Lett. B 557, 45 (2003); Phys. Rev. D 77, 014002 (2008); K. Hagiwara, E. Kou, and C. F. Qiao, Phys. Lett. B 570, 39 (2003).
[21] Y. J. Zhang, Y. J. Gao, and K. T. Chao, Phys. Rev. Lett. 96, 092001 (2006).
[22] Y. J. Zhang and K. T. Chao, Phys. Rev. Lett. 98, 092003 (2007); B. Gong and J. X. Wang, arXiv:0904.1103.
[23] B. Gong and J. X. Wang, Phys. Rev. D 77, 054028 (2008); Phys. Rev. Lett. 100, 181803 (2008); arXiv:0904.1103.
[24] Y. J. Zhang, Y. Q. Ma, and K. T. Chao, Phys. Rev. D 78, 054006 (2008).
[25] G. T. Bodwin, D. Kang, and J. Lee, Phys. Rev. D 74, 014014 (2006); 74, 114028 (2006).
[26] Z. G. He, Y. Fan, and K. T. Chao, Phys. Rev. D 75, 074011 (2007).
[27] G. P. Lepage, L. Magnea, C. Nakhleh, U. Magnea, and K. Hornbostel, Phys. Rev. D 46, 4052 (1992).
[28] C. Amsler et al. (Particle Data Group), Phys. Lett. B 667, 1 (2008).
[29] J. Küblbeck, M. Böhm, and A. Denner, Comput. Phys. Commun. 60, 165 (1990); T. Hahn, Comput. Phys. Commun. 140, 418 (2001).
[30] R. Mertig, M. Böhm, and A. Denner, Comput. Phys. Commun. 64, 345 (1991).
[31] C. B. Paranavitane, B. H. J. McKellar, and J. P. Ma, Phys. Rev. D 61, 114502 (2000).
[32] Eric Braaten and Yu-Qi Chen, Phys. Rev. D 54, 3216 (1996).
[33] G. T. Bodwin and A. Petrelli, Phys. Rev. D 66, 094011 (2002).
[34] W.-Y. Keung and I. J. Muzinich, Phys. Rev. D 27, 1518 (1983).
[35] C.-H. Chang, J.-X. Wang, and X.-G. Wu, Phys. Rev. D 70, 114019 (2004).
[36] G. T. Bodwin, H. S. Chung, D. Kang, J. Lee, and C. Yu, Phys. Rev. D 77, 094017 (2008).
[37] A. K. Leibovich, Phys. Rev. D 56, 4412 (1997).
arXiv:hep-ex/0512028v1 13 Dec 2005
BABAR-PUB-05/047
SLAC-PUB-11520

Search for rare quark-annihilation decays, B− → D_s^(∗)−φ

B. Aubert et al. (The BABAR Collaboration)

We report on searches for B− → D_s^−φ and B− → D_s^{∗−}φ. In the context of the Standard Model, these decays are expected to be highly suppressed since they proceed through annihilation of the b and ū quarks in the B− meson. Our results are based on 234 million Υ(4S) → BB̄ decays.

FIG. 1: Feynman diagram for B− → D_s^(∗)−φ.

In the SM, B annihilation amplitudes are highly suppressed. Calculations of the B− → D_s^−φ branching fraction give predictions of 3×10^−7 using a perturbative QCD approach [2], 1.9×10^−6 using factorization [3], and 7×10^−7 using QCD-improved factorization [3]. Since the current experimental limits are about three orders of magnitude higher than the SM expectations, searches for
B− → D_s^(∗)−φ could be sensitive to new physics contributions. Reference [3] argues that the branching fraction for B− → D_s^−φ could be as high as 8×10^−6 in a two-Higgs-doublet model and 3×10^−4 in the minimal supersymmetric model with R-parity violation, depending on the details of the new physics parameters.

Our results are based on 234×10^6 Υ(4S) → BB̄ decays, in which B− candidates are reconstructed under the B− → D_s^(∗)−φ hypothesis. Backgrounds, mostly from continuum events, are suppressed using a likelihood constructed from a number of kinematical and event-shape variables. For each candidate satisfying all selection criteria, we calculate the energy-substituted mass, m_ES, defined later in this article. The m_ES distribution of these events is then fit to a signal-plus-background hypothesis to extract the final signal yield.

All kaon candidate tracks in the reconstructed decay chains must satisfy a set of loose kaon identification criteria based on the response of the internally reflecting ring-imaging Cherenkov radiation detector and the ionization measurements in the drift chamber and the silicon vertex tracker. The kaon selection efficiency is a function of momentum and polar angle, and is typically 95%. These requirements provide a rejection factor of order 10 against pion backgrounds. No particle identification requirements are imposed on pion candidate tracks.

We select φ, K^0_S, and K^{∗0} candidates from pairs of oppositely charged tracks with invariant masses consistent with the parent particle decay hypothesis and consistent with originating from a common vertex. The invariant mass requirements are ±10 MeV (∼2.4Γ) for the φ, ±9 MeV (∼3σ) for the K^0_S, and ±75 MeV (∼1.5Γ) for the K^{∗0}, where σ and Γ are the experimental and natural widths, respectively, of these particles. (Here, and throughout the paper, we use natural units where c = 1.)

We then form D_s^− candidates in the three modes listed above by combining φ, K^0_S, or K^{∗0} candidates with an additional track. The invariant mass of the D_s^− candidate must be within 15 MeV (∼3σ) of the known D_s^− mass. In the D_s^− → φπ^− and D_s^− → K^{∗0}K^− modes, all three charged tracks are required to originate from a common vertex. In the D_s^− → K^0_S K^− mode, the K^0_S and D_s^− vertices are required to be separated by at least 3 mm. This last requirement is very effective in rejecting combinatorial background and is 94% efficient for signal. We select D_s^{∗−} candidates from D_s^− and photon candidates. The photon candidates are constructed from calorimeter clusters with lateral profiles consistent with photon showers and with energy above 60 MeV in the laboratory frame. We require that the mass difference ∆M ≡ M(D_s^{∗−}) − M(D_s^−) be between 130 and 156 MeV. The ∆M resolution is about 5 MeV.

At each stage in the reconstruction, the measurement of the momentum vector of an intermediate particle is improved by refitting the momenta of the decay products with kinematical constraints. These constraints are based on the known mass [6] of the intermediate particle and on the fact that the decay products must originate from a common point in space.

Finally, we select B− candidates by combining D_s^(∗)− and bachelor φ candidates. A B− candidate is characterized kinematically by the energy-substituted mass m_ES ≡ √[ (s/2 + p⃗_0·p⃗_B)²/E_0² − p⃗_B² ] and the energy difference ∆E ≡ E*_B − (1/2)√s, where E and p are energy and momentum, the asterisk denotes the CM frame, the subscripts 0 and B refer to the initial Υ(4S) and the B candidate, respectively, and s is the square of the CM energy. In the CM frame, m_ES reduces to m_ES = √(s/4 − p⃗*_B²).
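The two kinematic variables can be written directly in terms of laboratory-frame four-momenta. The helper below is a minimal sketch of those definitions; the beam and candidate four-vectors in the example call are invented numbers for illustration, not BABAR data.

```python
import numpy as np

def mes_and_deltaE(p0, pB):
    """Energy-substituted mass and energy difference from lab-frame four-vectors.
    p0: four-momentum (E, px, py, pz) of the initial e+e- (Upsilon(4S)) system.
    pB: four-momentum of the B candidate."""
    s = p0[0]**2 - np.dot(p0[1:], p0[1:])                     # square of the CM energy
    m_es = np.sqrt((0.5*s + np.dot(p0[1:], pB[1:]))**2 / p0[0]**2
                   - np.dot(pB[1:], pB[1:]))
    # Delta E uses the B energy in the CM frame: E*_B = gamma * (E_B - beta . p_B)
    beta = p0[1:] / p0[0]
    gamma = p0[0] / np.sqrt(s)
    e_star_b = gamma * (pB[0] - np.dot(beta, pB[1:]))
    delta_e = e_star_b - 0.5*np.sqrt(s)
    return m_es, delta_e

# illustrative, made-up four-vectors (GeV)
p0 = np.array([12.1, 0.0, 0.0, 5.9])
pB = np.array([6.2, 0.4, -0.3, 3.0])
print(mes_and_deltaE(p0, pB))
```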
For signal events we expect m_ES ∼ M_B, the known B− mass, and ∆E ∼ 0. The resolutions of m_ES and ∆E are approximately 2.6 MeV and 10 MeV, respectively. For events with more than one B− candidate, we retain the candidate with the lowest χ² computed from the measured values, known values, and resolutions for the D_s^− mass, the bachelor φ mass, and, where applicable, ∆M.

This analysis was performed blind: GEANT4-simulated data [7] or data samples outside the fit region were used for background studies and selection-criteria optimization. Most of the backgrounds to the B− → D_s^(∗)−φ signal were determined to be from continuum events. To reduce these backgrounds we make two additional requirements. First, we require |cosθ_T| < 0.9, where θ_T is the angle between the thrust axis of the B− candidate and the rest of the tracks and neutral clusters in the event, calculated in the CM frame. The distribution of |cosθ_T| is essentially uniform for signal events and strongly peaked near unity for continuum events. Second, for each event we define a relative likelihood for signal and background based on a number of kinematical quantities. The relative likelihood is defined as the ratio of the likelihoods for signal and background. The signal (background) likelihood is defined as the product of the probability density functions (PDFs) for the various kinematical quantities in signal (background) events. The kinematical quantities used in the likelihood are reconstructed masses, helicity angles, and a Fisher discriminant designed to distinguish between continuum and BB̄ events.

TABLE I: Efficiencies (ε), branching fractions (B), and products of efficiency and branching fractions for the modes used in the B− → D_s^(∗)−φ search. The uncertainties on ε and B are discussed in the text. Here B is the product of branching fractions for the secondary and tertiary decays in the specified decay mode.
  B Mode              D_s^− Mode            ε       B (10^−3)   ε×B (10^−3)
  B− → D_s^{∗−}φ      D_s^− → φπ^−          0.109   10.9        1.19
                      D_s^− → K^0_S K^−     0.100    7.7        0.77
                      D_s^− → K^{∗0} K^−    0.083   13.6        1.14

The efficiency of the likelihood requirement varies between 71% and 83%, while providing a rejection factor of between 4 and 7 against continuum backgrounds.

After applying the requirements on the relative likelihood and |cosθ_T|, we also demand that ∆E fall inside the signal region: within 30 MeV (∼3σ) of its expected mean value for signal events. This mean value is determined from simulation and varies between −3 and 0 MeV, depending on the mode.

The efficiencies of our selection requirements, shown in Table I, are determined from simulations. For the B− → D_s^{∗−}φ mode, we take the average of the efficiencies calculated assuming fully longitudinal or transverse polarization for the two-vector-meson final state. These efficiencies are found to be the same to within 1%. The quantities B in Table I are the products of the known branching fractions for the secondary and tertiary decay modes. These are taken from the compilation of the Particle Data Group [6], with the exception of the branching fraction for D_s^− → φπ^−, for which we use the latest, most precise measurement B(D_s^− → φπ^−) = (4.8±0.6)% [9]. Since the branching fractions for the other two D_s^− modes are measured with respect to the D_s^− → φπ^− mode, we have rescaled their tabulated values from the Particle Data Group accordingly.

The systematic uncertainties on the products of efficiency and branching ratio for the secondary decays in the decay chain of interest are summarized in Table II.
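The event-level discriminant described above is simply a ratio of two products of one-dimensional PDFs. A minimal sketch of that construction is shown below; the choice of variables, and the Gaussian and flat shapes with their invented parameters, are placeholders only, since the real analysis uses PDFs derived from simulation and control samples.

```python
import numpy as np
from scipy.stats import norm, uniform

# Hypothetical one-dimensional PDFs for two discriminating quantities:
# a reconstructed D_s mass (GeV) and a Fisher-discriminant output (arbitrary units).
signal_pdfs = [norm(loc=1.968, scale=0.005), norm(loc=-0.5, scale=0.5)]
background_pdfs = [uniform(loc=1.953, scale=0.030), norm(loc=0.5, scale=0.6)]

def relative_likelihood(values):
    """Ratio of the signal likelihood to the background likelihood,
    each defined as the product of the per-variable PDFs."""
    l_sig = np.prod([p.pdf(v) for p, v in zip(signal_pdfs, values)])
    l_bkg = np.prod([p.pdf(v) for p, v in zip(background_pdfs, values)])
    return l_sig / l_bkg

print(relative_likelihood([1.969, -0.4]))   # signal-like: large ratio
print(relative_likelihood([1.960,  0.8]))   # background-like: small ratio
```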
TABLE II: Systematic uncertainties on Σ_i ε_i·B_i, where the index i runs over the three D_s^− modes used in this analysis, ε_i are the experimental efficiencies, and B_i are the branching fractions for the i-th mode.
  Source    B− → D_s^−φ    B− → D_s^{∗−}φ
  Total     20%            21%

The largest systematic uncertainty is associated with the uncertainty on the D_s^− → φπ^− branching ratio, which is only known to 13% [9], and which is used to normalize all other D_s^− branching ratios.

The experimental systematic uncertainties all relate to the determination of the efficiency, ε. The dominant source of error is the uncertainty in the efficiency of the kaon identification requirements. The efficiency of these requirements is calibrated using a sample of kinematically identified D^{∗+} → D^0π^+, D^0 → K^−π^+ decays, where track-quality selection differences between this sample and our analysis sample have been taken into account. The efficiency of the kaon identification requirements is assigned a 3.6% systematic error. This results in a systematic uncertainty of 14% for the efficiency of the modes with four charged kaons (D_s^− → K^{∗0}K^−, D_s^− → φπ^−), and 11% for the mode with three charged kaons (D_s^− → K^0_S K^−). A second class of uncertainties is associated with the detection efficiency for tracks and clusters. From studies of a variety of control samples, the tracking efficiency is understood at the level of 1.4% (0.6%) for transverse momenta below (above) 200 MeV. There is also a 1.9% uncertainty associated with the reconstruction of K^0_S → π^+π^−, which can occur a few centimeters away from the interaction point. Given the multiplicity and momentum spectrum of tracks in the decay modes of interest, the uncertainty on the efficiency of reconstructing tracks in the B-decay chain is estimated to be 3.7%. In the B− → D_s^{∗−}φ search, there is an additional uncertainty of 1.8% due to the uncertainty on the efficiency to reconstruct the photon in D_s^{∗−} → D_s^−γ, and a 1% uncertainty from the unknown polarization in the final state. Finally, to ascertain the systematic uncertainty due to the efficiency of the other event selection requirements, we examine the variation of the efficiency under differing conditions: shifting ∆E by 3 MeV (0.3%); shifting the means of the D_s^− and φ masses and ∆M by 1 MeV (0.2%, 0.1%, 0.2%, respectively); increasing the widths of the D_s^− and φ masses and ∆M by 1 MeV (1.5%, 0.4%, 1.5%, respectively); and using a Fisher distribution obtained from the data sample of a similar analysis, B → Dπ with D → Kπ (3%). Thus we assign a 5% systematic uncertainty on the combined efficiency of these selection criteria.

We determine the yield of signal events from an unbinned extended maximum-likelihood fit to the m_ES distribution of B− candidates satisfying all of the requirements listed above. We fit simultaneously in two
φ.Thefitted event yields are N =−1.6+0.7−0.0and N =3.4+2.8−2.1forthe B −→D −s φand B −→D ∗−s φmodes,respectively,where the quoted uncertainties correspond to changes of 0.5in the log-likelihood for the fit.The likelihood curves are shown in Figure 3.The requirement that the sum of the Gaussian and the threshold function be always positive results in an effective constraint N >−1.6in theB −→D −s φmode.This is the source of the sharp edge at N =−1.6in the likelihood distribution of Figure 3(a).We use a Bayesian approach with a flat prior to set 90%confidence level upper limits on the branching fractionsfor the B −→D −s φand B −→D ∗−s φmodes.In a given mode,the upper limit on the number of observed eventsL (N )is the likelihood,determined from the m ES fit de-tailed above,as a function of the number of signal events,N .The upper limit B on the branching fraction isB <N UL B iǫi ×B i .(2)N B B events,the index i runs over the three D −s decay modes,ǫi is the efficiency in the i th mode,and B i is the product of all secondary and tertiary branching fractions (see Table I).We account for systematic uncertainties by numeri-cally convolving L (N )with a Gaussian distribution with its width determined by the total systematic uncertain-ties (Table II)in the two modes,including the 1.1%un-certainty in N B8 Using the calculation of Reference[3],we can use ourresults to set bounds on new physics contributions.Thesebounds are obtained in the framework of factorization,neglecting systematic uncertainties associated with thecalculation of hadronic effects.In the type II Two-Higgs-Doublet model we extract a tree-level90%C.L.limit tanβ/M H<0.37/GeV,where tanβis the ratio ofthe vacuum expectation values for the two Higgs dou-blets and M H is the mass of the charged Higgs boson.This limit is not quite as stringent as the limit thatcan be obtained from the B−→τ−ντdecay mode,tanβ/M H<0.29/GeV[12].In the context of supersymmetric models with R-parityviolation(RPV)[3,13],the new physics contribution to,B→D sφdepends on the quantityλ2M2iwhereλ′jkl is the coupling between the j th generationdoublet lepton superfield,the k th generation doubletquark superfield,and the l th generation singlet down-type quark superfield;M i is the mass of the i th generationcharged super-lepton.Conservatively assuming maximaldestructive interference between the SM and RPV am-plitudes,wefind|λ21−x2exp[−ζ(1−x2)],√where x=2m ES/。