A Simple Demodulation Method for FBG
Abstract: With its unique advantages, optical fiber sensing has become one of the most widely studied technologies for smart structural health monitoring. The main damage types in large structures, such as internal cracks in composite materials and corrosion of metal structures, are highly concealed, involve complex failure mechanisms, and are difficult to assess in severity; detecting them therefore requires sensing with ultra-high spatial resolution, large multiplexing capacity, and high accuracy. This work builds a sensing network from fiber Bragg grating Fabry-Perot (FBG-FP) arrays constructed from ultra-short, weakly reflective fiber Bragg gratings (FBGs) with extremely small spacing, and constructs the sensing optical path on optical frequency-domain reflectometry. Through studies of the demodulation principle, demodulation algorithms, and experimental verification, it realizes a new, fully distributed optical fiber sensing method with ultra-high spatial resolution, ultra-large capacity, and high accuracy. The main research contents are as follows:
(1) Sensing mechanism and multiplexing capacity of FBG-FP arrays. Starting from the coupled-mode equations of the FBG, the spectral expression of the FBG-FP is derived, and its temperature and strain sensing mechanisms are analyzed. Numerical simulation shows how the multiple-reflection and spectral-shadowing effects constrain the multiplexing limit of FBG-FP sensing arrays, and proves that lowering the reflectivity suppresses both effects. It is further proposed that a sensing array whose grating spacing is no smaller than the grating length suppresses multipath reflection, and that randomly distributed center wavelengths suppress spectral shadowing, with the random wavelength distribution having no adverse effect on sensing.
(2) Distributed demodulation system for FBG-FP arrays. A distributed demodulation system for FBG-FP arrays based on optical frequency-domain reflectometry (OFDR) is proposed. On the one hand, high-spatial-resolution localization of the sensing units is studied: by compensating the nonlinear tuning of the tunable laser source, an ultra-high spatial resolution within 82 μm is achieved over a 50 m sensing distance; by computing the equivalent optical-frequency tuning rate and the time-wavelength conversion axis of the tunable source, the localization stability and wavelength demodulation accuracy of the system are improved. On the other hand, wavelength demodulation of the sensing units is studied: the mathematical expression for FBG-FP spectral reconstruction is derived, and a distributed demodulation algorithm for FBG-FP arrays is proposed.
(3) Crack-tip detection. Temperature experiments test the demodulation performance of the system: an ultra-short, weak-reflection FBG-FP array of 8557 FBGs, each 400 μm long with 440 μm spacing and about -42 dB reflectivity, is interrogated with a spatial resolution of 840 μm and a temperature demodulation accuracy better than 0.65 °C.
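In OFDR, the two-sided spatial resolution is set by the optical tuning range of the laser, Δz = λ²/(2·n_g·Δλ). A quick sanity check (the group index 1.468 and the 10 nm tuning range are assumed illustration values, not parameters taken from the thesis) reproduces an ~82 μm figure of the same order as the one quoted above:

```python
# OFDR spatial resolution: dz = lambda^2 / (2 * n_g * d_lambda)
C = 299_792_458.0          # speed of light, m/s (for the frequency-domain form)

lam = 1550e-9              # centre wavelength, m
d_lam = 10e-9              # assumed laser tuning range, m
n_g = 1.468                # assumed fibre group index

d_f = C * d_lam / lam**2                  # tuning range in optical frequency, Hz
dz = C / (2 * n_g * d_f)                  # resolution, frequency-domain form
dz_direct = lam**2 / (2 * n_g * d_lam)    # same quantity, wavelength form

print(f"tuning range = {d_f / 1e9:.0f} GHz, resolution = {dz * 1e6:.1f} um")
```

The two forms are algebraically identical; widening the tuning range is the only way to sharpen the resolution for a given fibre.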
Three steps to work through a Sampling Method question.
Step 1: Read the question stem carefully and mark the key words.
Step 2: The question asks you to pick out the false statement(s), so examine each option:
(i) A simple random sample is a sample selected in such a way that every item in the population has an equal chance of being included. If a sample is selected using random sampling, it will be free from bias (since every item has an equal chance of being selected), and once the sample has been selected, valid inferences can be made about the population being sampled. This matches the definition of random sampling, so (i) is correct.
(ii) A sampling frame is a numbered list of all items in the population, not in a sample. The statement in the question defines the frame over the sample; since a sample is something drawn from the population, this confuses the two concepts.
So (ii) is false.
(iii) Cluster sampling is a non-random sampling method that involves selecting one definable subsection of the population as the sample, that subsection being taken to be representative of the population in question.
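The contrast between (i) and (iii) can be made concrete with a small sketch (the population and cluster scheme are invented for the example): simple random sampling gives every item an equal chance, while cluster sampling takes one whole definable subsection.

```python
import random

population = [f"invoice-{i:03d}" for i in range(100)]

# Simple random sampling: every item has an equal chance of inclusion.
random.seed(1)
srs = random.sample(population, 10)

# Cluster sampling (non-random): pick one definable subsection, e.g. one
# branch office's invoices, and treat it as representative of the whole.
clusters = {b: [p for i, p in enumerate(population) if i % 4 == b] for b in range(4)}
cluster_sample = clusters[2]   # the chosen subsection IS the sample
```

Note that the cluster sample is only unbiased to the extent the chosen subsection really is representative, which is exactly why it counts as non-random.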
Example-Based Metonymy Recognition for Proper Nouns
Yves Peirsman
Quantitative Lexicology and Variational Linguistics
University of Leuven, Belgium
yves.peirsman@arts.kuleuven.be

Abstract. Metonymy recognition is generally approached with complex algorithms that rely heavily on the manual annotation of training and test data. This paper will relieve this complexity in two ways. First, it will show that the results of the current learning algorithms can be replicated by the 'lazy' algorithm of Memory-Based Learning. This approach simply stores all training instances to its memory and classifies a test instance by comparing it to all training examples. Second, this paper will argue that the number of labelled training examples that is currently used in the literature can be reduced drastically. This finding can help relieve the knowledge acquisition bottleneck in metonymy recognition, and allow the algorithms to be applied on a wider scale.

1 Introduction
Metonymy is a figure of speech that uses "one entity to refer to another that is related to it" (Lakoff and Johnson, 1980, p.35). In example (1), for instance, China and Taiwan stand for the governments of the respective countries:

(1) China has always threatened to use force if Taiwan declared independence. (BNC)

Metonymy resolution is the task of automatically recognizing these words and determining their referent. It is therefore generally split up into two phases: metonymy recognition and metonymy interpretation (Fass, 1997). The earliest approaches to metonymy recognition identify a word as metonymical when it violates selectional restrictions (Pustejovsky, 1995). Indeed, in example (1), China and Taiwan both violate the restriction that threaten and declare require an animate subject, and thus have to be interpreted metonymically. However, it is clear that many metonymies escape this characterization. Nixon in example (2) does not violate the selectional restrictions of the verb to bomb, and yet, it metonymically refers to the army under Nixon's
command.(2)Nixon bombed Hanoi.This example shows that metonymy recognition should not be based on rigid rules,but rather on statistical information about the semantic and grammatical context in which the target word oc-curs.This statistical dependency between the read-ing of a word and its grammatical and seman-tic context was investigated by Markert and Nis-sim(2002a)and Nissim and Markert(2003; 2005).The key to their approach was the in-sight that metonymy recognition is basically a sub-problem of Word Sense Disambiguation(WSD). Possibly metonymical words are polysemous,and they generally belong to one of a number of pre-defined metonymical categories.Hence,like WSD, metonymy recognition boils down to the auto-matic assignment of a sense label to a polysemous word.This insight thus implied that all machine learning approaches to WSD can also be applied to metonymy recognition.There are,however,two differences between metonymy recognition and WSD.First,theo-retically speaking,the set of possible readings of a metonymical word is open-ended(Nunberg, 1978).In practice,however,metonymies tend to stick to a small number of patterns,and their la-bels can thus be defined a priori.Second,classic 71WSD algorithms take training instances of one par-ticular word as their input and then disambiguate test instances of the same word.By contrast,since all words of the same semantic class may undergo the same metonymical shifts,metonymy recogni-tion systems can be built for an entire semantic class instead of one particular word(Markert and Nissim,2002a).To this goal,Markert and Nissim extracted from the BNC a corpus of possibly metonymical words from two categories:country names (Markert and Nissim,2002b)and organization names(Nissim and Markert,2005).All these words were annotated with a semantic label —either literal or the metonymical cate-gory they belonged to.For the country names, Markert and Nissim distinguished between place-for-people,place-for-event and 
place-for-product.For the organi-zation names,the most frequent metonymies are organization-for-members and organization-for-product.In addition, Markert and Nissim used a label mixed for examples that had two readings,and othermet for examples that did not belong to any of the pre-defined metonymical patterns.For both categories,the results were promis-ing.The best algorithms returned an accuracy of 87%for the countries and of76%for the orga-nizations.Grammatical features,which gave the function of a possibly metonymical word and its head,proved indispensable for the accurate recog-nition of metonymies,but led to extremely low recall values,due to data sparseness.Therefore Nissim and Markert(2003)developed an algo-rithm that also relied on semantic information,and tested it on the mixed country data.This algo-rithm used Dekang Lin’s(1998)thesaurus of se-mantically similar words in order to search the training data for instances whose head was sim-ilar,and not just identical,to the test instances. Nissim and Markert(2003)showed that a combi-nation of semantic and grammatical information gave the most promising results(87%). However,Nissim and Markert’s(2003)ap-proach has two major disadvantages.Thefirst of these is its complexity:the best-performing al-gorithm requires smoothing,backing-off to gram-matical roles,iterative searches through clusters of semantically similar words,etc.In section2,I will therefore investigate if a metonymy recognition al-gorithm needs to be that computationally demand-ing.In particular,I will try and replicate Nissim and Markert’s results with the‘lazy’algorithm of Memory-Based Learning.The second disadvantage of Nissim and Mark-ert’s(2003)algorithms is their supervised nature. 
Because they rely so heavily on the manual an-notation of training and test data,an extension of the classifiers to more metonymical patterns is ex-tremely problematic.Yet,such an extension is es-sential for many tasks throughout thefield of Nat-ural Language Processing,particularly Machine Translation.This knowledge acquisition bottle-neck is a well-known problem in NLP,and many approaches have been developed to address it.One of these is active learning,or sample selection,a strategy that makes it possible to selectively an-notate those examples that are most helpful to the classifier.It has previously been applied to NLP tasks such as parsing(Hwa,2002;Osborne and Baldridge,2004)and Word Sense Disambiguation (Fujii et al.,1998).In section3,I will introduce active learning into thefield of metonymy recog-nition.2Example-based metonymy recognition As I have argued,Nissim and Markert’s(2003) approach to metonymy recognition is quite com-plex.I therefore wanted to see if this complexity can be dispensed with,and if it can be replaced with the much more simple algorithm of Memory-Based Learning.The advantages of Memory-Based Learning(MBL),which is implemented in the T i MBL classifier(Daelemans et al.,2004)1,are twofold.First,it is based on a plausible psycho-logical hypothesis of human learning.It holds that people interpret new examples of a phenom-enon by comparing them to“stored representa-tions of earlier experiences”(Daelemans et al., 2004,p.19).This contrasts to many other classi-fication algorithms,such as Naive Bayes,whose psychological validity is an object of heavy de-bate.Second,as a result of this learning hypothe-sis,an MBL classifier such as T i MBL eschews the formulation of complex rules or the computation of probabilities during its training phase.Instead it stores all training vectors to its memory,together with their labels.In the test phase,it computes the distance between the test vector and all these train-ing vectors,and simply returns the most 
frequent label of the most similar training examples.
One of the most important challenges in Memory-Based Learning is adapting the algorithm to one's data. This includes finding a representative seed set as well as determining the right distance measures. For my purposes, however, TiMBL's default settings proved more than satisfactory. TiMBL implements the IB1 and IB2 algorithms that were presented in Aha et al. (1991), but adds a broad choice of distance measures. Its default implementation of the IB1 algorithm, which is called IB1-IG in full (Daelemans and Van den Bosch, 1992), proved most successful in my experiments. It computes the distance between two vectors X and Y by adding up the weighted distances δ between their corresponding feature values x_i and y_i:

Δ(X, Y) = Σ_{i=1..n} w_i δ(x_i, y_i)  (3)

The most important element in this equation is the weight that is given to each feature. In IB1-IG, features are weighted by their Gain Ratio (equation 4), the division of the feature's Information Gain by its split info. Information Gain, the numerator in equation (4), "measures how much information it [feature i] contributes to our knowledge of the correct class label [...] by computing the difference in uncertainty (i.e. entropy) between the situations without and with knowledge of the value of that feature" (Daelemans et al., 2004, p.20). In order not "to overestimate the relevance of features with large numbers of values" (Daelemans et al., 2004, p.21), this Information Gain is then divided by the split info, the entropy of the feature values (equation 5). In the following equations, C is the set of class labels, H(C) is the entropy of that set, and V_i is the set of values for feature i.

w_i = (H(C) − Σ_{v∈V_i} P(v) × H(C|v)) / si(i)  (4)

si(i) = −Σ_{v∈V_i} P(v) log2 P(v)  (5)

[Footnote 2: This data is publicly available and can be downloaded from /mnissim/mascara.]

Table 1: Results for the mixed country data. TiMBL: my TiMBL results; N&M: Nissim and Markert's (2003) results.

         P       F
TiMBL    86.6%   49.5%
N&M      81.4%   62.7%

Despite this simple learning phase, TiMBL is able to replicate the results from Nissim and Markert (2003; 2005). As table 1 shows, accuracy for the mixed country data is almost identical to Nissim and Markert's figure, and precision, recall and F-score for the metonymical class lie only slightly lower. TiMBL's results for the Hungary data were similar, and equally comparable to Markert and Nissim's (Katja Markert, personal communication). Note, moreover, that these results were reached with grammatical information only, whereas Nissim and Markert's (2003) algorithm relied on semantics as well.
Next, table 2 indicates that TiMBL's accuracy for the mixed organization data lies about 1.5% below Nissim and Markert's (2005) figure. This result should be treated with caution, however. First, Nissim and Markert's available organization data had not yet been annotated for grammatical features, and my annotation may slightly differ from theirs. Second, Nissim and Markert used several feature vectors for instances with more than one grammatical role and filtered all mixed instances from the training set. A test instance was treated as mixed only when its several feature vectors were classified differently. My experiments, in contrast, were similar to those for the location data, in that each instance corresponded to one vector. Hence, the slightly lower performance of TiMBL is probably due to differences between the two experiments.
These first experiments thus demonstrate that Memory-Based Learning can give state-of-the-art performance in metonymy recognition. In this respect, it is important to stress that the results for the country data were reached without any semantic information, whereas Nissim and Markert's (2003) algorithm used Dekang Lin's (1998) clusters of semantically similar words in order to deal with data sparseness. This fact, together

Table 2 (partial): Acc — TiMBL 78.65%; R — 65.10%; 76.0%.

Figure 1: Accuracy learning curves for the mixed country data with and without semantic information.

in more detail. As figure 1 indicates, with respect to overall accuracy, semantic features have a negative influence: the learning curve
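The IB1-IG scheme described above — gain-ratio feature weights inside a weighted overlap distance, with classification by the nearest stored instance — can be sketched as follows. The toy (grammatical role, head) instances are invented for illustration; TiMBL itself is not used here.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, i):
    """Information Gain of feature i divided by its split info (equations 4-5)."""
    ig = entropy(labels)
    split_info = 0.0
    for v in set(r[i] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[i] == v]
        p = len(sub) / len(rows)
        ig -= p * entropy(sub)
        split_info -= p * math.log2(p)
    return ig / split_info if split_info else 0.0

def distance(x, y, w):
    # weighted overlap metric: delta(x_i, y_i) = 0 if equal, 1 otherwise
    return sum(wi * (xi != yi) for wi, xi, yi in zip(w, x, y))

def classify(rows, labels, x, w):
    # 'lazy' learning: return the label of the nearest stored instance
    return min(zip(rows, labels), key=lambda t: distance(x, t[0], w))[1]

# toy (grammatical role, head) training instances
rows = [("subj", "threaten"), ("subj", "declare"), ("pp", "in"), ("pp", "of")]
labels = ["metonymic", "metonymic", "literal", "literal"]
w = [gain_ratio(rows, labels, i) for i in range(2)]
```

On this toy set the role feature gets full weight (it perfectly splits the classes with few values) while the head feature is down-weighted by its large split info, which is exactly the over-fragmentation penalty gain ratio is meant to apply.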
with both features climbs much more slowly than that with only grammatical features.Hence,contrary to my expectations,grammatical features seem to allow a better generalization from a limited number of training instances.With respect to the F-score on the metonymical category infigure2,the differ-ences are much less outspoken.Both features give similar learning curves,but semantic features lead to a higherfinal F-score.In particular,the use of semantic features results in a lower precisionfig-ure,but a higher recall score.Semantic features thus cause the classifier to slightly overgeneralize from the metonymic training examples.There are two possible reasons for this inabil-ity of semantic information to improve the clas-sifier’s performance.First,WordNet’s synsets do not always map well to one of our semantic la-bels:many are rather broad and allow for several readings of the target word,while others are too specific to make generalization possible.Second, there is the predominance of prepositional phrases in our data.With their closed set of heads,the number of examples that benefits from semantic information about its head is actually rather small. 
Nevertheless,myfirst round of experiments has indicated that Memory-Based Learning is a sim-ple but robust approach to metonymy recogni-tion.It is able to replace current approaches that need smoothing or iterative searches through a the-saurus,with a simple,distance-based algorithm.Figure3:Accuracy learning curves for the coun-try data with random and maximum-distance se-lection of training examples.over all possible labels.The algorithm then picks those instances with the lowest confidence,since these will contain valuable information about the training set(and hopefully also the test set)that is still unknown to the system.One problem with Memory-Based Learning al-gorithms is that they do not directly output prob-abilities.Since they are example-based,they can only give the distances between the unlabelled in-stance and all labelled training instances.Never-theless,these distances can be used as a measure of certainty,too:we can assume that the system is most certain about the classification of test in-stances that lie very close to one or more of its training instances,and less certain about those that are further away.Therefore the selection function that minimizes the probability of the most likely label can intuitively be replaced by one that max-imizes the distance from the labelled training in-stances.However,figure3shows that for the mixed country instances,this function is not an option. 
Both learning curves give the results of an algo-rithm that starts withfifty random instances,and then iteratively adds ten new training instances to this initial seed set.The algorithm behind the solid curve chooses these instances randomly,whereas the one behind the dotted line selects those that are most distant from the labelled training exam-ples.In thefirst half of the learning process,both functions are equally successful;in the second the distance-based function performs better,but only slightly so.There are two reasons for this bad initial per-formance of the active learning function.First,it is not able to distinguish between informativeandFigure4:Accuracy learning curves for the coun-try data with random and maximum/minimum-distance selection of training examples. unusual training instances.This is because a large distance from the seed set simply means that the particular instance’s feature values are relatively unknown.This does not necessarily imply that the instance is informative to the classifier,how-ever.After all,it may be so unusual and so badly representative of the training(and test)set that the algorithm had better exclude it—something that is impossible on the basis of distances only.This bias towards outliers is a well-known disadvantage of many simple active learning algorithms.A sec-ond type of bias is due to the fact that the data has been annotated with a few features only.More par-ticularly,the present algorithm will keep adding instances whose head is not yet represented in the training set.This entails that it will put off adding instances whose function is pp,simply because other functions(subj,gen,...)have a wider variety in heads.Again,the result is a labelled set that is not very representative of the entire training set.There are,however,a few easy ways to increase the number of prototypical examples in the train-ing set.In a second run of experiments,I used an active learning function that added not only those instances that 
were most distant from the labelled training set,but also those that were closest to it. After a few test runs,I decided to add six distant and four close instances on each iteration.Figure4 shows that such a function is indeed fairly success-ful.Because it builds a labelled training set that is more representative of the test set,this algorithm clearly reduces the number of annotated instances that is needed to reach a given performance.Despite its success,this function is obviously not yet a sophisticated way of selecting good train-76Figure5:Accuracy learning curves for the organi-zation data with random and distance-based(AL) selection of training examples with a random seed set.ing examples.The selection of the initial seed set in particular can be improved upon:ideally,this seed set should take into account the overall dis-tribution of the training examples.Currently,the seeds are chosen randomly.Thisflaw in the al-gorithm becomes clear if it is applied to another data set:figure5shows that it does not outper-form random selection on the organization data, for instance.As I suggested,the selection of prototypical or representative instances as seeds can be used to make the present algorithm more robust.Again,it is possible to use distance measures to do this:be-fore the selection of seed instances,the algorithm can calculate for each unlabelled instance its dis-tance from each of the other unlabelled instances. 
In this way,it can build a prototypical seed set by selecting those instances with the smallest dis-tance on average.Figure6indicates that such an algorithm indeed outperforms random sample se-lection on the mixed organization data.For the calculation of the initial distances,each feature re-ceived the same weight.The algorithm then se-lected50random samples from the‘most proto-typical’half of the training set.5The other settings were the same as above.With the present small number of features,how-ever,such a prototypical seed set is not yet always as advantageous as it could be.A few experiments indicated that it did not lead to better performance on the mixed country data,for instance.However, as soon as a wider variety of features is taken into account(as with the organization data),the advan-pling can help choose those instances that are most helpful to the classifier.A few distance-based al-gorithms were able to drastically reduce the num-ber of training instances that is needed for a given accuracy,both for the country and the organization names.If current metonymy recognition algorithms are to be used in a system that can recognize all pos-sible metonymical patterns across a broad variety of semantic classes,it is crucial that the required number of labelled training examples be reduced. 
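The two selection strategies discussed here — a mixed batch of distant plus close unlabelled instances, and a prototypical seed set chosen by minimum average distance — can be sketched as follows (the six-distant/four-close split follows the text; the 1-D data points and the distance function are invented for the example):

```python
def prototypical_seeds(pool, dist, k):
    """Seed instances with the smallest average distance to the rest of the pool."""
    def avg(i):
        return sum(dist(pool[i], p) for j, p in enumerate(pool) if j != i) / (len(pool) - 1)
    return sorted(range(len(pool)), key=avg)[:k]

def select_batch(unlabelled, labelled, dist, n_far=6, n_near=4):
    """Active-learning batch: n_near closest plus n_far most distant instances."""
    def d_to_set(x):
        return min(dist(x, l) for l in labelled)
    order = sorted(range(len(unlabelled)), key=lambda i: d_to_set(unlabelled[i]))
    return order[:n_near] + order[-n_far:]

# 1-D toy data with absolute difference as the distance measure
dist = lambda a, b: abs(a - b)
seeds = prototypical_seeds([0, 4, 5, 6, 20], dist, k=1)
batch = select_batch([1, 2, 8, 9, 15], [0, 10], dist, n_far=2, n_near=2)
```

The close picks keep the labelled set representative while the distant picks chase unexplored regions, which is the balance the experiments above found necessary.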
This paper has taken thefirst steps along this path and has set out some interesting questions for fu-ture research.This research should include the investigation of new features that can make clas-sifiers more robust and allow us to measure their confidence more reliably.This confidence mea-surement can then also be used in semi-supervised learning algorithms,for instance,where the clas-sifier itself labels the majority of training exam-ples.Only with techniques such as selective sam-pling and semi-supervised learning can the knowl-edge acquisition bottleneck in metonymy recogni-tion be addressed.AcknowledgementsI would like to thank Mirella Lapata,Dirk Geer-aerts and Dirk Speelman for their feedback on this project.I am also very grateful to Katja Markert and Malvina Nissim for their helpful information about their research.ReferencesD.W.Aha, D.Kibler,and M.K.Albert.1991.Instance-based learning algorithms.Machine Learning,6:37–66.W.Daelemans and A.Van den Bosch.1992.Generali-sation performance of backpropagation learning on a syllabification task.In M.F.J.Drossaers and A.Ni-jholt,editors,Proceedings of TWLT3:Connection-ism and Natural Language Processing,pages27–37, Enschede,The Netherlands.W.Daelemans,J.Zavrel,K.Van der Sloot,andA.Van den Bosch.2004.TiMBL:Tilburg Memory-Based Learner.Technical report,Induction of Linguistic Knowledge,Computational Linguistics, Tilburg University.D.Fass.1997.Processing Metaphor and Metonymy.Stanford,CA:Ablex.A.Fujii,K.Inui,T.Tokunaga,and H.Tanaka.1998.Selective sampling for example-based wordsense putational Linguistics, 24(4):573–597.R.Hwa.2002.Sample selection for statistical parsing.Computational Linguistics,30(3):253–276.koff and M.Johnson.1980.Metaphors We LiveBy.London:The University of Chicago Press.D.Lin.1998.An information-theoretic definition ofsimilarity.In Proceedings of the International Con-ference on Machine Learning,Madison,USA.K.Markert and M.Nissim.2002a.Metonymy res-olution as a classification task.In 
Proceedings of the Conference on Empirical Methods in Natural Language Processing(EMNLP2002),Philadelphia, USA.K.Markert and M.Nissim.2002b.Towards a cor-pus annotated for metonymies:the case of location names.In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC2002),Las Palmas,Spain.M.Nissim and K.Markert.2003.Syntactic features and word similarity for supervised metonymy res-olution.In Proceedings of the41st Annual Meet-ing of the Association for Computational Linguistics (ACL-03),Sapporo,Japan.M.Nissim and K.Markert.2005.Learning to buy a Renault and talk to BMW:A supervised approach to conventional metonymy.In H.Bunt,editor,Pro-ceedings of the6th International Workshop on Com-putational Semantics,Tilburg,The Netherlands. G.Nunberg.1978.The Pragmatics of Reference.Ph.D.thesis,City University of New York.M.Osborne and J.Baldridge.2004.Ensemble-based active learning for parse selection.In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics(HLT-NAACL).Boston, USA.J.Pustejovsky.1995.The Generative Lexicon.Cam-bridge,MA:MIT Press.78。
Demodulation Method for Wavelength Using a Dynamic F-P Cavity
FAN Liujing, HAN Daofu, MA Li, QI Xiaoping

Abstract: A demodulation method of wavelength with a dynamic extrinsic Fabry-Perot (F-P) cavity is proposed. The reflected light of an FBG is modulated by a dynamic extrinsic F-P cavity constructed with a PZT. Theoretical analysis shows that the modulated intensity varies with the length of the F-P cavity in a cosine-like way. Numerical simulation shows that when the PZT is driven by a sinusoidal voltage, the cosine-like signal output by the F-P cavity shifts as the FBG wavelength drifts, producing a peak migration that is linear in the FBG wavelength change; this relationship can be used to demodulate the FBG wavelength. The results further show that this demodulation method simplifies the complex signal-processing algorithms otherwise needed in FBG sensing measurements.

Journal: Journal of Nanchang University (Natural Science), 2012, 36(5): 478-481.
Keywords: dynamic extrinsic F-P cavity; cosine-like signal; peak shift; FBG wavelength demodulation
Authors: FAN Liujing (Department of Physics, Nanchang University, Nanchang 330031, China); HAN Daofu, MA Li, QI Xiaoping (Physics Experiment Center, Nanchang University, Nanchang 330031, China)
CLC: O436

Modern measurement technology increasingly moves toward high precision and high resolution. The fiber Bragg grating (FBG), as a representative passive fiber-optic device, plays a pivotal role in sensing measurement.
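A minimal numerical check of the idea, in a two-beam (low-finesse) approximation of the cavity, where the fringe order m = 10 and the scan grid are invented for the example: intensity maxima sit at cavity lengths L = mλ/2, so a wavelength drift Δλ moves the m-th peak by mΔλ/2, linear in Δλ as the abstract states.

```python
import numpy as np

def peak_position(lam_um, L_um):
    """Cavity-length position of the intensity maximum (two-beam approximation)."""
    intensity = 1.0 + np.cos(4.0 * np.pi * L_um / lam_um)   # cosine-like signal
    return L_um[np.argmax(intensity)]

L = np.linspace(7.70, 7.80, 200_001)   # scan around the m = 10 fringe, in um
shift = peak_position(1.551, L) - peak_position(1.550, L)
# a 1 pm-per-nm-scale drift: d_lambda = 0.001 um should shift the peak by m*d_lambda/2
```

The recovered shift of mΔλ/2 = 5Δλ is what makes the peak migration a direct, linear readout of the FBG wavelength.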
The SIMPLE method

SIMPLE stands for Semi-Implicit Method for Pressure-Linked Equations. It is a numerical method commonly used in computational fluid dynamics for solving the incompressible Navier-Stokes equations.

SIMPLE discretizes the momentum and continuity equations, solves the momentum equations with a guessed pressure field to obtain a provisional velocity field, and then derives a Poisson-type pressure-correction equation together with velocity-correction equations. Iteratively solving these equations yields the pressure field and the velocity field.

Since its introduction in 1972, the SIMPLE algorithm has been widely applied in computational fluid dynamics and computational heat transfer worldwide. Shortly after it was proposed, it became the main method for computing incompressible flow fields; the algorithm and its later variants were subsequently extended to compressible flows with success, making it a numerical method capable of computing flow at any speed.
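One SIMPLE pressure-correction step can be sketched on a 1-D constant-area channel (unit momentum coefficients and the provisional velocities are invented for illustration): the provisional velocities u* violate continuity, a tridiagonal pressure-correction solve restores it, and the velocities are then corrected.

```python
import numpy as np

# 1-D constant-area channel: faces 0..3 carry velocities, nodes 0..4 carry pressure.
# Face 0 is the inlet with fixed u = 1.0 (hence d[0] = 0: no correction there).
N = 4
u_star = np.array([1.0, 0.7, 1.3, 0.9])  # provisional velocities from the momentum eq.
d = np.array([0.0, 1.0, 1.0, 1.0])       # d_i = A / a_P (unit coefficients assumed)

# Pressure-correction equation at each interior node j (continuity of u):
#   d[j-1]*p'[j-1] - (d[j-1]+d[j])*p'[j] + d[j]*p'[j+1] = u*[j] - u*[j-1]
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for j in range(1, N):
    r = j - 1
    A[r, r] = -(d[j - 1] + d[j])
    if r > 0:
        A[r, r - 1] = d[j - 1]
    if r < N - 2:
        A[r, r + 1] = d[j]
    b[r] = u_star[j] - u_star[j - 1]

p_corr = np.concatenate(([0.0], np.linalg.solve(A, b), [0.0]))  # p' = 0 at boundaries
u = u_star + d * (p_corr[:-1] - p_corr[1:])   # velocity correction
p = 0.8 * p_corr                              # under-relaxed pressure update
div = np.diff(u)                              # continuity residual (should vanish)
```

In 1-D with constant area the corrected field is exactly divergence-free after one solve; in 2-D/3-D the same step is repeated together with fresh momentum solves until both residuals converge, which is the SIMPLE iteration proper.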
Journal of Harbin University of Science and Technology, Vol. 24 No. 2, April 2019

Wavelength Demodulating Algorithm of FBG Dynamically Tuned by a DFB Laser
ZHANG Jia-nan, XIONG Yan-ling, WU Ming-ze, LI Wei-bo
(School of Applied Science, Harbin University of Science and Technology, Harbin 150080, China)

Abstract: The central wavelength of the reflection spectrum of a fiber Bragg grating (FBG) varies with the measured parameters of the external environment, so accurately locating the peak of the FBG reflection spectrum is the key research problem. Based on the relationship between the dynamically scanned output wavelength of a distributed feedback (DFB) laser and the scanning time, the central wavelength of the FBG reflection spectrum is obtained directly by using a standard Fabry-Perot transmission spectrum as the reference spectrum. Gaussian and Lorentzian fitting algorithms are applied to both the Fabry-Perot transmission spectrum and the FBG reflection spectrum. Using the goodness of fit as the criterion, a peak-search algorithm is written and optimized in a programming language. Experimental results show that, for the dynamically tuned DFB-laser FBG wavelength demodulation system, the Gaussian fitting algorithm outperforms the Lorentzian fitting algorithm, with a fitting degree above 97%; the measurement resolution of the system reaches 1 pm over a measurement range of 1547-1552 nm.

Keywords: wavelength demodulation; Gaussian fitting; Lorentzian fitting; algorithm
DOI: 10.15938/j.jhust.2019.02.021; CLC: TN253, TP31; Document code: A; Article ID: 1007-2683(2019)02-0139-05
Received: 2017-04-07. Supported by the Natural Science Foundation of Heilongjiang Province (F2017012).
Authors: ZHANG Jia-nan (b. 1995), M.S. candidate; WU Ming-ze (b. 1992), M.S. candidate. Corresponding author: XIONG Yan-ling (b. 1964), professor, E-mail: xyling1964@163.com.

0 Introduction
Over the past two decades, sensing with fiber Bragg gratings (FBG) has become a highly reliable and practical fiber-optic sensing technology [1-2]. The central wavelength of the FBG reflection spectrum is sensitive only to the temperature and strain in the grating region, and is unaffected by power fluctuations and losses of the active and passive devices in the FBG sensing system. Because the FBG also offers the unique advantage of wavelength encoding and is quasi-distributed, it is widely used in safety monitoring of large structures [3-5]; accurately obtaining the peak wavelength of the FBG reflection spectrum has therefore become a key research topic. At present, one way to obtain the FBG wavelength directly is an optical spectrum analyzer; although it can acquire and analyze the spectra of active or passive devices, its wavelength resolution is only about 10 pm, which cannot satisfy the high-resolution detection applications that fiber grating sensors address [6]. Another way is to read the central wavelength with an FBG wavelength interrogator [7]; these reach pm-level resolution, but most are designed around a broadband amplified-spontaneous-emission source and a tunable wavelength filter [8], the narrowband power of multichannel synchronous scanning is low (microwatt level), and such instruments are expensive, bulky, and hard to build upon. In contrast, distributed feedback (DFB) lasers offer high output power, narrow linewidth, tunable output wavelength, and moderate price [9-10]. This paper therefore designs a sensing and demodulation system that scans the FBG reflection wavelength with a tunable DFB laser to identify the central wavelength of the FBG reflection spectrum, performing curve fitting on the Fabry-Perot transmission spectrum and the Gaussian-shaped FBG reflection spectrum. For the reflected power spectral density of an FBG [11], the maximum in theory lies at the central wavelength and the curve is symmetric about it; since the shapes of the Gaussian and Lorentzian functions resemble this curve, either can be used as an approximation in demodulation, and fitting the FBG reflection spectrum with a Gaussian or Lorentzian yields its central wavelength [12]. Most researchers apply Gaussian fitting directly, or Gaussian combined with polynomial fitting [13-16]; Lorentzian fitting of FBG reflection spectra has rarely been reported. This paper therefore studies both Gaussian and Lorentzian fitting functions for identifying the peak of the FBG reflection spectrum.

1 Principle of DFB-laser scanning of the fiber Bragg grating
Figure 1 shows the block diagram of the modulation system, which consists of a wavelength-tunable DFB laser, a Fabry-Perot (F-P) etalon, FBG sensors, and a data-processing part. Within one scanning period, the DFB laser emits narrowband light at different wavelengths, and the scanning time is linear in wavelength. The light is split into two paths: one enters the F-P etalon, the other enters the FBG sensor; the FBG reflection spectrum and the F-P transmission spectrum then enter the photoelectric conversion system together [17], as shown in Figure 2. Taking the F-P transmission spectrum, whose transmission peak wavelengths are known, as the reference, a peak-search algorithm produces the array of transmission peak wavelengths and time points of the F-P time-domain signal; a binomial fit of this array then determines the wavelength-time curve of the scanning laser. Since the F-P transmission spectrum and the FBG reflection spectrum share the same wavelength-time curve, the wavelength λ corresponding to any time point t can be computed from that curve, completing the peak identification of the FBG reflection peak wavelength [18-20].
Figure 1: Block diagram of the DFB laser scanning the fiber Bragg grating.
Figure 2: Time-domain signals of the F-P etalon and the FBG.

2 Wavelength peak-search algorithm
Figure 3 shows the flow of the peak-search algorithm for the central wavelength of the FBG reflection peak. First, the valid peaks of the F-P etalon transmission spectrum are Gaussian-fitted, establishing the functional expression relating scanning wavelength to scanning time; then the peak of the FBG reflection spectrum is Gaussian- (or Lorentzian-) fitted to obtain the time coordinate of the peak, which is substituted into that expression to compute the FBG reflection peak wavelength. To obtain the time coordinates of each F-P and FBG peak more precisely, direct peak search is combined with several kinds of curve-fitting peak search. Noise in the spectrum produces side lobes beside the main peak; when a side lobe exceeds the set threshold line it would be identified as a peak. To reject these interfering peaks, a preliminary peak search (Figure 4) first locates peaks coarsely, checks by threshold whether each identified peak is valid, and then fits the valid peak data with Gaussian and Lorentzian functions based on least squares for precise peak location.
Figure 3: Flow of the demodulation algorithm.
Figure 4: Preliminary peak-search method.

3 Experimental analysis
3.1 Peak search on the F-P transmission spectrum
The tuned-laser FBG scanning modulation system of Figure 1 was built; the measured F-P transmission spectrum is shown in Figure 5, with relative sampling time on the horizontal axis and relative intensity on the vertical axis. The laser scans from 1547 nm to 1552 nm and the F-P etalon channel spacing is 100 GHz, so six F-P transmission peaks appear within the scanning range. The fitting degree of each of the six peaks was computed with both the Gaussian and the Lorentzian fitting algorithm; the results are shown in Figure 6. With Gaussian fitting, the average fitting degree over repeated measurements lies between 0.985 and 0.998; with Lorentzian fitting it lies between 0.915 and 0.97. The Gaussian fitting algorithm outperforms the Lorentzian one, with a fitting degree above 98%.
Figure 5: Transmission spectrum of the Fabry-Perot etalon.
Figure 6: Fitting of the F-P transmission peaks.

3.2 Peak search on the FBG reflection spectrum
Reflection spectra of five FBG sensors with different peak reflectivities were processed; the measured spectra are shown in Figure 7, with relative sampling time on the horizontal axis and relative intensity on the vertical axis. Each of the five spectra was peak-searched with Gaussian and with Lorentzian fitting; the results are compared in Figure 8, where the numbers 1-5 on the horizontal axis denote the five FBG sensors. Gaussian fitting reaches a fitting degree above 95%, with the value for FBG 1 approaching 1, whereas Lorentzian fitting lies between 78% and 94%; the Gaussian peak-search algorithm is superior. The wavelengths of these five FBG sensors (of different bandwidths, central wavelengths, and reflectivities) computed with both algorithms were then compared with the reference values acquired by an optical spectrum analyzer, as listed in Table 1.
Figure 7: Reflection spectra of five FBG sensors with different reflectivities.
Figure 8: Comparison of Gaussian and Lorentzian fitting degrees.

Table 1: Wavelength comparison under the two algorithms (nm)
Group | Spectrum analyzer value | Gaussian fit value | Gaussian difference | Lorentzian fit value | Lorentzian difference
1 | 1548.3000 | 1548.3011 | 0.0011 | 1548.3499 | 0.0499
2 | 1548.3330 | 1548.3358 | 0.0028 | 1548.3376 | 0.0046
3 | 1547.0340 | 1530.6973 [sic] | 0.0023 | 1547.0430 | 0.0090
4 | 1547.9900 | 1547.9938 | 0.0038 | 1547.9895 | 0.0195
5 | 1548.0510 | 1548.0532 | 0.0022 | 1548.0608 | 0.0098

As Table 1 shows, the Gaussian-fit values differ from the measured values by less than 5 pm (maximum 3.8 pm, minimum 1.1 pm), while the Lorentzian-fit values differ by up to 50 pm (maximum 49.9 pm, minimum 4.6 pm), roughly ten times the Gaussian error. Gaussian fitting is clearly superior, so this system adopts Gaussian fitting for FBG demodulation.

3.3 Resolution measurement of the system
Following Figure 1, the tuned-laser dynamic FBG scanning system was calibrated in temperature: the temperature sensor was placed in a temperature-controlled chamber with 0.1 °C resolution, and the FBG central wavelength versus temperature data collected by the system were Gaussian-fitted, with the results shown in Figure 9. The central wavelength of the FBG sensor varies linearly with the external temperature, and the resolution of the system is about 1.0 pm.
Figure 9: Resolution test curve of the system.

4 Conclusion
In the FBG wavelength demodulation system scanned by a distributed feedback laser, Gaussian and Lorentzian function fitting were both programmed for peak search on the F-P transmission spectrum and the FBG reflection spectrum. Gaussian fitting was found to outperform Lorentzian fitting, and the results were compared against reference FBG reflection wavelengths. Using Gaussian peak fitting, calibration of an FBG temperature-sensing experiment gave a demodulation-system resolution of 1.0 pm over the test range 1547-1552 nm; the system is comparable to commercial FBG wavelength interrogators.

References:
[1] He Huiling, Zhao Chunmei, Chen Dan, et al. The current state of fiber-optic sensors [J]. Laser & Optoelectronics Progress, 2004, 41(3): 39.
[2] Wang Peng, Zhao Hong, Liu Jiechen, et al. Dynamic real-time calibration method for an FBG wavelength demodulation system based on a tunable F-P filter [J]. Acta Optica Sinica, 2015, 35(8): 85.
[3] Liu Deming, Sun Qizhen. Distributed optical fiber sensing technology and its applications [J]. Laser & Optoelectronics Progress, 2009, 46(11): 29.
[4] Du Zhiquan, Ni Feng, Xiao Faxin. Development and application of optical fiber sensing technology [J]. Electro-Optic Technology Application, 2014, 29(6): 1.
[5] Chen Hong, Liu Shanliang. Research and design of a delay-assisted positioning FBG sensing system based on FPGA [J]. Journal of Optoelectronics·Laser, 2015, 26(9): 1658.
[6] Zhang Shufang. Research on demodulation techniques for fiber grating sensors [D]. Zhengzhou: Henan University, 2013.
[7] Zhang Xun. Research on an FBG wavelength demodulation system based on a high-birefringence fiber Sagnac interferometer [D]. Beijing: Beijing University of Technology, 2016.
[8] Chen Lei. Research on tunable microwave photonic filters based on multi-wavelength sources [D]. Tianjin: Tianjin University of Technology, 2016.
[9] Zhao Qiang, Wang Yongjie, Xu Tuanwei, et al. Photothermal tuning method for distributed feedback fiber lasers [J]. High Power Laser and Particle Beams, 2013, 25(2): 355.
[10] Yan Lianshan, Yi Anlin, Pan Wei, et al. A simple demodulation method for FBG temperature sensors using a narrowband wavelength tunable DFB laser [J]. IEEE Photonics Technology Letters, 2010, 22(18): 1391.
[11] Hu Zhengwen, Pang Chengxin, Cheng Fengyu. Application of the LM algorithm to peak search in FBG reflection spectra [J/OL]. Laser & Optoelectronics Progress, 2017(1): 1.
[12] Wu Fugang, Zhang Qingshan, Jiang Desheng, et al. Gaussian curve-fitting method for the Bragg wavelength of fiber gratings [J]. Journal of Wuhan University of Technology, 2007(12): 116.
[13] Yin Chengqun, Wang Zishuo, He Yujun, et al. Simulation and experimental analysis of central-wavelength detection algorithms for FBG reflection spectra [J]. Infrared and Laser Engineering, 2011(2): 322.
[14] Chen Yong, Yang Xue, Liu Huanlin, Yang Kai, Zhang Yulan. Exponentially modified Gaussian fitting peak-search algorithm for FBG sensing signals [J]. Spectroscopy and Spectral Analysis, 2016, 36(5): 1526.
[15] Chen Haipeng, Shen Xuanjing, Long Jianwu. Threshold optimization framework for global thresholding algorithms using Gaussian fitting [J]. Journal of Computer Research and Development, 2016, 53(4): 892.
[16] Chen Binbin, Zhang Qiang, Lu Yaodong, Song Jinpeng. Fast attenuation algorithm for laser intensity using 3-D Gaussian fitting [J]. High Power Laser and Particle Beams, 2015, 27(4): 10.
[17] Yeh C H, Chow C W, Wu Y F, et al. Multiwavelength erbium-doped fiber ring laser employing Fabry-Perot etalon cavity operating in room temperature [J]. Optical Fiber Technology, 2009, 15(4): 344.
[18] Xiong Yanling, Ren Naikui, Liang Huan, et al. FBG wavelength demodulation system dynamically scanned by a distributed feedback laser [J]. High Power Laser and Particle Beams, 2015, 27(1): 37.
[19] Li Qiaoyi. FBG demodulation system dynamically scanned by a tunable DFB laser [D]. Harbin: Harbin University of Science and Technology, 2012.
[20] Xiong Yanling, Li Qiaoyi, Yang Wenlong, et al. Study on FBG wavelength demodulation system with the continuous dynamic scanning of tunable DFB laser [J]. International Journal of Signal Processing, Image Processing and Pattern Recognition, 2014, 7(3): 339.
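The Gaussian peak fit at the heart of this approach can be approximated without a full nonlinear solver: the logarithm of a Gaussian is a parabola, so a least-squares parabola through log(y) near the maximum gives the sub-sample peak wavelength. The scan range below follows the paper; the simulated grating centre and bandwidth are invented for the example.

```python
import numpy as np

def gaussian_peak(x, y, half=5):
    """Sub-sample peak location via a parabolic fit to log(y) (exact for a Gaussian)."""
    i = int(np.argmax(y))
    s = slice(max(i - half, 0), i + half + 1)
    xs = x[s] - x[i]                       # centre the abscissa for conditioning
    c = np.polyfit(xs, np.log(y[s]), 2)    # log of a Gaussian is a parabola
    return x[i] - c[1] / (2 * c[0])        # vertex of the parabola

wl = np.linspace(1547.0, 1552.0, 2000)     # nm, the DFB scan range from the paper
center, fwhm = 1548.30, 0.2                # assumed FBG parameters
refl = np.exp(-4 * np.log(2) * (wl - center) ** 2 / fwhm ** 2)
est = gaussian_peak(wl, refl)
```

On noiseless data the recovered centre is exact to float precision; with noise, widening the fit window trades bias against variance, which is where the threshold-gated preliminary peak search above earns its keep.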
A Boundary-Fragment-Model for Object Detection

Andreas Opelt1, Axel Pinz1, and Andrew Zisserman2
1 Vision-based Measurement Group, Inst. of El. Measurement and Meas. Sign. Proc., Graz University of Technology, Austria
{opelt,axel.pinz}@tugraz.at
2 Visual Geometry Group, Department of Engineering Science, University of Oxford
az@

Abstract. The objective of this work is the detection of object classes, such as airplanes or horses. Instead of using a model based on salient image fragments, we show that object class detection is also possible using only the object's boundary. To this end, we develop a novel learning technique to extract class-discriminative boundary fragments. In addition to their shape, these "codebook" entries also determine the object's centroid (in the manner of Leibe et al. [19]). Boosting is used to select discriminative combinations of boundary fragments (weak detectors) to form a strong "Boundary-Fragment-Model" (BFM) detector. The generative aspect of the model is used to determine an approximate segmentation. We demonstrate the following results: (i) the BFM detector is able to represent and detect object classes principally defined by their shape, rather than their appearance; and (ii) in comparison with other published results on several object classes (airplanes, cars-rear, cows) the BFM detector is able to exceed previous performances, and to achieve this with less supervision (such as the number of training images).

1 Introduction and Objective

Several recent papers on object categorization and detection have explored the idea of learning a codebook of appearance parts or fragments from a corpus of images. A particular instantiation of an object class in an image is then composed from codebook entries, possibly arising from different source images. Examples include Agarwal & Roth [1], Vidal-Naquet & Ullman [27], Leibe et al. [19], Fergus et al. [12,14], Crandall et al. [9], Bar-Hillel et al. [3]. The methods differ on the details of the codebook, but more fundamentally they differ in how strictly the geometry of the configuration of parts constituting an object class is constrained. For example, Csurka et al. [10], Bar-Hillel et al. [3] and Opelt et al. [22] simply use a "bag of visual words" model (with no geometrical relations between the parts at all), Agarwal & Roth [1], Amores et al. [2], and Vidal-Naquet and Ullman [27] use quite loose pairwise relations, whilst Fergus et al. [12] have a strongly parametrized geometric model consisting of a joint Gaussian over the centroid position of all the parts. The approaches using no geometric relations are able to categorize images (as containing the object class), but generally do not provide location information (no detection), whereas the methods with even loose geometry are able to detect the object's location.

The method of Leibe et al. ([19],[20]) has achieved the best detection performance to date on various object classes (e.g. cows, cars-rear (Caltech)). Their representation of the geometry is algorithmic: all parts vote on the object centroid as in a Generalized Hough transform. In this paper we explore a similar geometric representation to that of Leibe et al. [19] but use only the boundaries of the object, both internal and external (silhouette). In our case the codebook consists of boundary-fragments, with an additional entry recording the location of the object's centroid. Figure 1 overviews the idea. The boundary represents the shape of the object class quite naturally without requiring the appearance (e.g. texture) to be learnt. For certain categories (bottles, cups) where the surface markings are very variable, approaches relying on consistency of these appearances may fail or need considerable training data to succeed. Our method, with its stress on boundary representation, is highly suitable for such objects. The intention is not to replace appearance fragments but to develop complementary features. As will be seen, in many cases the boundary alone performs as well as or better than the appearance and segmentation masks (mattes) used by other
authors (e.g. [19,27]); the boundary is responsible for much of the success.

Fig. 1. An overview of applying the BF model detector (panels: original image; all matched boundary fragments; centroid voting on a subset of the matched fragments; segmentation/detection; backprojected maximum).

The areas of novelty in the paper include: (i) the manner in which the boundary-fragment codebook is learnt: fragments (from the boundaries of the training objects) are selected to be highly class-distinctive, and are stable in their prediction of the object centroid; and (ii) the construction of a strong detector (rather than a classifier) by Boosting [15] over a set of weak detectors built on boundary fragments. This detector means that it is not necessary to scan the image with a sliding window in order to localize the object.

Boundaries have been used in object recognition to a certain extent: Kumar et al. [17] used part outlines in their application of pictorial structures [11]; Fergus et al. [13] used boundary curves between bitangent points in their extension of the constellation model; and Jurie and Schmid [16] detected circular arc features from boundary curves. However, in all these cases the boundary features are segmented independently in individual images. They are not flexibly selected to be discriminative over a training set, as they are here. Bernstein and Amit [4] do use discriminative edge maps. However, theirs is only a very local representation of the boundary; in contrast we capture the global geometry of the object category. Recently, and independently, Shotton et al. [24] presented a method quite related to the Boundary-Fragment-Model presented here. The principal differences are: the level of segmentation required in training ([24] requires more); the number of boundary fragments employed in each weak detector (a single fragment in [24], and a variable number here); and the method of localizing the detected centroid (grid in [24], mean shift here).

We will illustrate BFM classification and detection for a running example, namely the object class cows. For this we selected cow images as in [7,19], which originate from the videos of Magee and Boyle [21]. The cows appear at various positions in the image with just moderate scale changes. Figure 2 shows some example images. Figure 3 shows detections using the BFM detector on additional, more complex, cow images obtained from Google image search.

Fig. 2. Example training images for the cows category.
Fig. 3. Examples of detecting multiple objects in one test image.

2 Learning boundary fragments

In a similar manner to [19], we require the following data to train the model:

- A training image set with the object delineated by a bounding box.
- A validation image set labelled with whether the object is absent or present, and the object's centroid (but the bounding box is not necessary).

The training images provide the candidate boundary fragments, and these candidates are optimized over the validation set as described below. For the results of this section the training set contains 20 images of cows, and the validation set contains 25 cow images (the positive set) and 25 images of other objects (motorbikes and cars, the negative set).

Given the outlines of the training images we want to identify boundary fragments that: (i) discriminate objects of the target category from other objects, and (ii) give a precise estimate of the object centroid. A candidate boundary fragment is required to (i) match edge chains often in the positive images but not in the negative, and (ii) have a good localization of the centroid in the positive images. These requirements are illustrated in figure 4.
The idea of using validation images for discriminative learning is motivated by Sali and Ullman [23]. However, in their work they only consider requirement (i), the learning of class-discriminative parts, but not the second requirement, which is a geometric relation. In the following we first explain how to score a boundary fragment according to how well it satisfies these two requirements, and then how this score is used to select candidate fragments from the training images.

Fig. 4. Scoring boundary fragments. The first row shows an example of a boundary fragment that matches often on the positive images of the validation set, and less often on the negative images. Additionally it gives a good estimate of the centroid position on the positive images. In contrast, the second row shows an example of an unsuitable boundary fragment. The cross denotes the estimate of the centroid and the asterisk the correct object centroid.

2.1 Scoring a boundary fragment

Linked edges are obtained in the training and validation set using a Canny edge detector with hysteresis. We do not obtain perfect segmentations; there may be gaps and false edges. A linked edge in the training image is then considered as a candidate boundary fragment $\gamma_i$, and its scoring cost $C(\gamma_i)$ is a product of two factors:

1. $c_{match}(\gamma_i)$: the matching cost of the fragment to the edge chains in the validation images using a Chamfer distance [5,6], see (1). This is described in more detail below.
2. $c_{loc}(\gamma_i)$: the distance (in pixels) between the true object centroid and the centroid predicted by the boundary fragment $\gamma_i$, averaged over all the positive validation images.

with $C(\gamma_i) = c_{match}(\gamma_i) \cdot c_{loc}(\gamma_i)$. The matching cost is computed as

$$c_{match}(\gamma_i) = \frac{\sum_{i=1}^{L^+} distance(\gamma_i, P_{v_i}) / L^+}{\sum_{i=1}^{L^-} distance(\gamma_i, N_{v_i}) / L^-} \quad (1)$$

where $L^-$ denotes the number of negative validation images $N_{v_i}$ and $L^+$ the number of positive validation images $P_{v_i}$, and $distance(\gamma_i, I_{v_i})$ is the distance to the best matching edge chain in image $I_{v_i}$:

$$distance(\gamma_i, I_{v_i}) = \frac{1}{|\gamma_i|} \min_{\gamma_i \subset I_{v_i}} \sum_{t \in \gamma_i} DT_{I_{v_i}}(t) \quad (2)$$

where $DT_{I_{v_i}}$ is the distance transform. The Chamfer distance [5] is implemented using 8 orientation planes with an overlap of 5 degrees. The orientation of the edges is averaged over a length of 7 pixels by orthogonal regression. Because of background clutter the best match is often located on highly textured background clutter, i.e. it is not correct. To solve this problem we use the N = 10 best matches (with respect to (2)), and from these we take the one with the best centroid prediction. Note, images are scale normalized for training.

2.2 Selecting boundary fragments

Having defined the cost, we now turn to selecting candidate fragments. This is accomplished by optimization. For this purpose seeds are randomly distributed on the boundary of each training image. Then at each seed we extract boundary fragments. We let the size of each fragment grow, and at every step we calculate the cost $C(\gamma_i)$ on the validation set. Figure 5(a) shows three examples of this growing of boundary fragments (the length varies from 20 pixels in steps of 30 pixels in both directions up to a length of 520 pixels). The cost is minimized over the varying length of the boundary fragment to choose the best fragment.
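The matching cost in (2) rests on a distance transform of the image's edge map. A minimal single-plane sketch follows; the paper uses 8 orientation planes and keeps the N = 10 best matches, and the edge map and fragment coordinates here are synthetic, so this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(fragment_pts, edge_map):
    """Mean distance-transform value over the fragment's points: a
    single-orientation-plane version of distance() in Eq. (2)."""
    # DT(p) = distance from pixel p to the nearest edge pixel
    dt = distance_transform_edt(~edge_map)
    ys, xs = fragment_pts[:, 0], fragment_pts[:, 1]
    return dt[ys, xs].mean()

# Synthetic edge image: a diagonal edge chain
edge_map = np.zeros((50, 50), dtype=bool)
rr = np.arange(10, 40)
edge_map[rr, rr] = True

# A fragment lying exactly on the chain costs 0; a shifted copy costs more
on_chain = np.stack([rr, rr], axis=1)
shifted = np.stack([rr, rr + 3], axis=1)
print(chamfer_distance(on_chain, edge_map),
      chamfer_distance(shifted, edge_map))
```

In the full method this cost would be evaluated at candidate placements of the fragment over the image and minimized, as the min in (2) indicates.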
If no length variation meets some threshold of the cost we reject this fragment and proceed with the next seed. Using this procedure we obtain a codebook of boundary fragments, each having the geometric information to vote for an object centroid. To reduce redundancy in the codebook the resulting boundary fragment set is merged using agglomerative clustering on medoids. The distance function is $distance(\gamma_i, \gamma_j)$ (where $I_{v_i}$ in (2) is replaced by the binary image of fragment $\gamma_j$) and we cluster with a threshold of $th_{cl} = 0.2$. Figure 5(b) shows some examples of resulting clusters. This optimized codebook forms the basis for the next stage in learning the BFM.

Fig. 5. Learning boundary fragments. (a) Each row shows the growing of a different random seed on a training image. (b) Clusters from the optimized boundary fragments. The first column shows the chosen codebook entries. The remaining columns show the boundary fragments that also lie in that cluster.

3 Training an object detector using Boosting

At this stage we have a codebook of optimized boundary fragments, each carrying additional geometric information on the object centroid. We now want to combine these fragments so that their aggregated estimates determine the centroid and increase the matching precision. In the case of image fragments, a single region can be used to determine a unique correspondence (e.g. see [19]). In contrast, boundary fragments are not so discriminating, but a combination of several such fragments (for example distributed around the actual object boundary) is characteristic for an object class.

We combine boundary fragments to form a weak detector by learning combinations which fit well on all the positive validation images. We then learn a strong detector from these weak detectors using a standard Boosting framework which is adapted to learn detection rather than classification. This learning of a strong detector chooses boundary fragments which model the whole distribution of the training data (whereas the method of the previous section can score fragments highly if they have low costs on only a subset of the validation images).

3.1 Weak detectors

A weak detector is composed of k (typically 2 or 3) boundary fragments. We want a detector to fire ($h_i(I) = 1$) if (i) the k boundary fragments match image edge chains, (ii) the centroid estimates concur, and, in the case of positive images, (iii) the centroid estimate agrees with the true object centroid. Figure 6(a) illustrates a positive detection of an image (with k = 2 and the boundary fragments named $\gamma_a$ and $\gamma_b$). The classification output $h_i(I)$ of detector $h_i$ on an image I is defined as:

$$h_i(I) = \begin{cases} 1 & \text{if } D(h_i, I) < th_{h_i} \\ 0 & \text{otherwise} \end{cases}$$

with $th_{h_i}$ the learnt threshold of each detector (see section 3.2), and where the distance $D(h_i, I)$ of $h_i$ (consisting of k boundary fragments $\gamma_j$) to an image I is defined as:

$$D(h_i, I) = \frac{1}{m_s^2} \cdot \sum_{j=1}^{k} distance(\gamma_j, I) \quad (3)$$

Fig. 6. Learning a weak detector. (a) The combination of boundary fragments to form a weak detector. Details in the text. (b) Examples of matching the weak detector to the validation set. Top: a weak detector with k = 2 that fires on a positive validation image because of highly compact centre votes close enough to the true object centre (black circle). Middle: a negative validation image where the same weak detector does not fire (votings do not concur). Bottom: the same as the top with k = 3. In the implementation r = 10 and d_c = 15.

The $distance(\gamma_j, I)$ is defined in (2) and $m_s$ is explained below. Any weak detector whose centroid estimate misses the true object centroid by more than $d_c$ (in our case 15 pixels) is rejected. Figure 6(b) shows examples of matches of weak detectors on positive and negative validation images. At these positions, as shown in column 2 of figure 6(a), each fragment also estimates a centroid by a circular uncertainty window. Here the radius of the window is r = 10. The compactness of the centroid estimate is measured by $m_s$ (shown in the third column of figure 6(a)). $m_s = k$ if the circular uncertainty regions
overlap, and otherwise a penalty of $m_s = 0.5$ is allocated. Note, to keep the search for weak detectors tractable, the number of used codebook entries (before clustering, to reduce the effort already in the clustering procedure) is restricted to the top 500 for k = 2 and 200 for k = 3 (determined by the ranked costs $C(\gamma_i)$). Also, each boundary fragment is matched separately and only those for which $distance(\gamma_j, I) < 0.2$ are used.

3.2 Strong detector

Having defined a weak detector consisting of k boundary fragments and a threshold $th_{h_i}$, we now explain how we learn this threshold and form a strong detector H out of T weak detectors $h_i$ using AdaBoost. First we calculate the distances $D(h_i, I_j)$ of all combinations of our boundary fragments (using k elements for one combination) on all (positive and negative) images of our validation set $I_1 \ldots I_v$. Then in each iteration $1 \ldots T$ we search for the weak detector that obtains the best detection result on the current image weighting (for details see AdaBoost [15]). This selects weak detectors which generally (depending on the weighting) "fire" often on positive validation images (classify them as correct and estimate a centroid closer than $d_c$ to the true object centroid) and not on the negative ones. Figure 7 shows examples of learnt weak detectors that contribute to the strong detector. Each of these weak detectors also has a weight $w_{h_i}$. The output of a strong detector on a whole test image is then:

$$H(I) = \text{sign}\left( \sum_{i=1}^{T} h_i(I) \cdot w_{h_i} \right) \quad (4)$$

The sign function is replaced in the detection procedure by a threshold $t_{det}$, where an object is detected in the image I if $H(I) > t_{det}$ and there is no evidence for the occurrence of an object if $H(I) \le t_{det}$ (the standard formulation uses $t_{det} = 0$).

Fig. 7. Examples of weak detectors, left for k = 2 and right for k = 3.

4 Object Detection

Detection algorithm and segmentation: The steps of the detection algorithm are now described and qualitatively illustrated in figure 8. First the edges are detected (step 1), then the boundary fragments of the weak detectors that form the strong detector are matched to this edge image (step 2). In order to detect (one or more) instances of the object (instead of classifying the whole image), each weak detector $h_i$ votes with a weight $w_{h_i}$ in a Hough voting space (step 3). Votes are then accumulated in a circular search window ($W(x_n)$) with radius $d_c$ around candidate points $x_n$ (represented by a Mean-Shift-Mode estimation [8]). The Mean-Shift modes that are above a threshold $t_{det}$ are taken as detections of object instances (candidate points). The confidence in detections at these candidate points $x_n$ is calculated using probabilistic scoring (see below). The segmentation is obtained by backprojection of the boundary fragments (step 3) of weak detectors which contributed to that centre to a binary pixel map. Typically, the contour of the object is over-represented by these fragments. We obtain a closed contour of the object, and additional, spurious contours (seen in figure 8, step 3). Short segments (< 30 pixels) are deleted, the contour is filled (using Matlab's 'filled area' in regionprops), and the final segmentation matte is obtained by a morphological opening, which removes thin structures (votes from outliers that are connected to the object). Finally, each of the objects obtained by this procedure is represented by its bounding box.

Fig. 8. Examples of processing test images with the BFM detector.

Probabilistic scoring: At candidate points $x_n$ for instances of an object category c, found by the strong detector in the test image $I_T$, we sum up the (probabilistic) votings of the weak detectors $h_i$ in a 2D Hough voting space, which gives us the probabilistic confidence:

$$conf(x_n) = \sum_i^T p(c, h_i) = \sum_i^T p(h_i) \, p(c \mid h_i) \quad (5)$$

where $p(h_i) = \frac{1}{\sum_{q=1}^{M} score(h_q, I_T)} \cdot score(h_i, I_T)$ describes the pdf of the effective matching of the weak detector, with $score(h_i, I_T) = 1 / D(h_i, I_T)$ (see (3)). The second term of this vote is the confidence we have in each specific weak detector and is computed
as:

$$p(c \mid h_i) = \frac{\#fires_{correct}}{\#fires_{total}} \quad (6)$$

where $\#fires_{correct}$ is the number of positive, and $\#fires_{total}$ the number of positive and negative, validation images the weak detector fires on. Finally our confidence of an object appearing at position $x_n$ is computed by using a Mean-Shift algorithm [8] (circular window $W(x_n)$) in the Hough voting space, defined as: $conf(x_n \mid W(x_n)) = \sum_{X_j \in W(x_n)} conf(X_j)$.

5 Detection Results

In this section we compare the performance of the BFM detector to published state-of-the-art results, and also give results on new data sets. Throughout we use fixed parameters (T = 200, k = 2, $t_{det}$ = 8) for our training and testing procedure unless stated otherwise. An object is deemed correctly detected if the overlap of the bounding boxes (detection vs ground truth) is greater than 50%.

Cows: First we give quantitative results on the cow dataset. We used 20 training images (validation set 25 positive / 25 negative) and tested on 80 unseen images, half belonging to the category cows and half to counter examples (cars and motorbikes). In table 2 we compare our results to those reported by Leibe et al. [19] and Caputo et al. [7] (images are from the same test set, though the authors do not specify which ones they used). We perform as well as the result in [19], clearly demonstrating that in some cases the contour alone is sufficient for excellent detection performance. Kumar et al. [17] also give an RPC curve for cow detection with an ROC-equal-error rate of 10% (though they use different test images). Note that the detector can identify multiple instances in an image, as shown in figure 3.

Variation in performance with number of training images: The results on the cow dataset reported above have been achieved using 20 training images. Figure 9 shows how the number of training images influences the performance of the BFM detector. Even with five images our model achieves detection results of better than 10% RPC-equal-error rate. The performance saturates at
twenty in this case, but this number is dependent on the degree of within-class variation (e.g. see fig. 10).

Fig. 9. Error depending on the number of training images for the cow dataset.
Fig. 10. Error depending on the number of training images for Cars-Rear.
Fig. 11. Example of BFM detections for horses showing computed bounding boxes and segmentations.

Caltech datasets: From the widely used Caltech datasets we performed experiments on the categories Cars-Rear and Airplanes. Table 1 shows our results compared with other state-of-the-art approaches on the same test images as reported in [12]. First we give the detection results (BFM-D) and compare them to the best (as far as we know) results on detection by Leibe et al. [18-20] (scale changes are handled as described in section 6). We achieve superior results, even though we only require the bounding boxes in the training images (and not foreground segmentation as in [24], for example). For the classification results an image is classified, in the manner of [12], if it contains the object, but localization by a bounding box is not required. Compared to recently published results on this data we again achieve the best results. Note that the amount of supervision varies over the methods, where e.g. [26] use labels and bounding boxes (as we do); [2,3,12,22] use just the object labels; and Sivic et al. [25] use no supervision. It should be pointed out that we use just 50 training images and 50 validation images for each category, which is less than the other approaches use. Figure 10 shows the error rate depending on the number of training images (again, the same number of positive and negative validation images are used). However, it is known that the Caltech images are now not sufficiently demanding, so we next consider harder situations.

Cat.      | BFM-D | [18] | BFM-C | [12] | [22] | [25] | [2] | [3]  | [14] | [26] | [28]
Cars-Rear | 2.25  | 6.1  | 0.05  | 8.8  | 8.9  | 21.4 | 3.1 | 2.3  | 1.8  | 9.8  | -
Airplanes | 7.4   | -    | 2.6   | 6.3  | 11.1 | 3.4  | 4.5 | 10.3 | -    | 17.1 | 5.6

Table 1. Comparison of the BFM detector to other published results on the Caltech dataset (Cars-Rear and Airplanes). The first two columns give the actual object detection error (BFM-D) and the remaining columns the categorization of the images (BFM-C), given by the ROC-equal error rates.

Horses and cow/horse discrimination: To address the topic of how well our method performs on categories that consist of objects that have a similar boundary shape, we attempt to detect and discriminate horses and cows. We use the horse data from [16] (no quantitative comparison, as the authors could not report their exact test set because of lost data). In the following we compare three models. In each case they are learnt on 20 training images of the category and a validation set of 25 positive and 25 negative images that is different for each model. The first model for cows (cow-BFM) is learnt using no horses in the negative validation set (13 cars, 12 motorbikes). The second model for horses (horse1-BFM) is learnt using also cows in the negative validation set (8 cars, 10 cows, 7 motorbikes). Finally we train a model (horse2-BFM) which uses just cow images as negative validation images (25 cows). We now apply all three models on the same test set, containing 40 images of cows and 40 images of horses (figure 11 shows example detection results). Table 3 shows the failures and the RPC-equal error rate of each of these three models on this test set. The cow model is very strong (no failures) because it needs no knowledge of another object class even if its boundary shape is similar. Horse1-BFM is a weaker model (this is a consequence of greater variations of the horses in the training and test images). The model horse2-BFM obviously gains from the cows in the negative validation images, as it does not have any false positive detections. Overall this means our models are good at discriminating classes of similar boundary shapes.
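The centroid voting and mode estimation that produce these detections (section 4) can be sketched as follows. This is an illustrative reconstruction with synthetic votes and a flat-kernel circular window standing in for the Mean-Shift estimator of [8]; coordinates, radii and weights are made-up values, not the learnt ones.

```python
import numpy as np

def mean_shift_mode(votes, weights, radius=15.0, iters=30):
    """Find the strongest mode of weighted 2-D centroid votes by
    iterating a circular-window (flat-kernel) mean shift."""
    x = votes[np.argmax(weights)].astype(float)  # start at the heaviest vote
    for _ in range(iters):
        inside = np.linalg.norm(votes - x, axis=1) < radius
        x_new = np.average(votes[inside], axis=0, weights=weights[inside])
        if np.linalg.norm(x_new - x) < 1e-3:
            break
        x = x_new
    # Mode score: total weight of the votes inside the final window
    score = weights[np.linalg.norm(votes - x, axis=1) < radius].sum()
    return x, score

rng = np.random.default_rng(1)
# A cluster of concurring centroid votes around (100, 80), plus
# scattered background votes from mismatched fragments
good = rng.normal([100, 80], 3.0, size=(40, 2))
noise = rng.uniform(0, 200, size=(30, 2))
votes = np.vstack([good, noise])
weights = np.ones(len(votes))

mode, score = mean_shift_mode(votes, weights)
print(mode, score)
```

In the detector, modes whose score exceeds t_det become object instances, and the fragments that voted for a mode are backprojected to obtain the segmentation.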
Still, categories with higher intra-class variability (like horses compared to cows) are harder to learn and might need more training data to generalize over the whole distribution.

Bottles: To show the advantage of an approach relying on the shape of an object category we set up a new dataset of bottle images. This consists of 118 images collected using Google Image Search. Negative images are provided by the Caltech background image set. We separated the images into test/training/validation sets (64/24/30) and added the same amount of negative images in each case. We achieve an RPC-equal error rate of 9%. Figure 12 shows some detection examples.

Method            | RPC-err.
Caputo et al. [7] | 2.9%
Leibe et al. [19] | 0.0%
Our approach      | 0.0%

Table 2. Comparison of the BFM detector to other published results on the cows.

        | cow | horse1 | horse2
FP      | 0   | 3      | 0
FN      | 0   | 13     | 12
M       | 0   | 1      | 2
RPC-err | 0%  | 23%    | 19%

Table 3. The first 3 rows show the failures made by the three different models (FP = false positive, FN = false negative, M = multiple detection). The last row shows the RPC-equal-error rate for each model.

Fig. 12. Example of BFM detections for bottles. The first row shows the bounding box of the detection and the second row shows the backprojected boundary fragments for these detections.

6 Invariance to Scale, Rotation and Viewpoint

This section briefly discusses the topic of invariance of the BFM with respect to scale, rotation and changes in viewpoint.

Search over scale: A scaled codebook representation is used. Additionally we normalize the parameters in the detection procedure with respect to scale, for example the radius for centroid estimation, in the obvious way. The Mean-Shift modes are then aggregated over the set of scales, and the maxima explored as in the single-scale case. Results on Cars-Rear, airplanes and bottles of section 5 were obtained by this method.

Rotation: To achieve in-plane rotation invariance we use rotated versions of the codebook (see figure 12, second column, for an example). The BFM is invariant to small rotations in plane
due to the orientation planes used in the Chamfer-matching. This is a consequence of the nature of our matching procedure. For many categories the rotation invariance up to this degree may be sufficient (e.g. cars, cows) because they have a favoured orientation where other occurrences are quite unnatural.

Changes in viewpoint: For natural objects (e.g. cows) the perceived boundary is the visual rim. The position of the visual rim on the object will vary with pose, but the shape of the associated boundary fragment will be valid over a range of poses. We performed experiments under controlled conditions on the ETH-80 database. With a BFM learnt for a certain aspect we could still detect a prominent mode in the Hough voting space up to 45 degrees rotation in both directions (horizontal and vertical). Thus, to extend the BFM to various aspects, this invariance to small viewpoint changes reduces the number of necessary positions on the view-sphere to a handful of aspects that have to be trained separately. Our probabilistic formulation can be straightforwardly extended to multiple aspects.

7 Discussion and Conclusions

We have described a Boundary Fragment Model for detecting instances of object categories. The method is able to deal with the partial boundaries that typically are recovered by an edge detector. Its performance is similar to or outperforms state-of-the-art methods that include image appearance region fragments. For classes where the texture is very variable (e.g. bottles, mugs) a BFM may be preferable. In other cases a combination of appearance and boundary will have superior performance. It is worth noting that the BFM, once learnt, can be implemented very efficiently using the low computational complexity method of Felzenszwalb & Huttenlocher [11]. Currently our research is focusing on extending the BFM to multi-class and multiple aspects of one class.

Acknowledgements

This work was supported by the Austrian Science Foundation FWF, project
S9103-N04, ECVision and the Pascal Network of Excellence.

References

1. S. Agarwal, A. Awan, and D. Roth. Learning to detect objects in images via a sparse, part-based representation. IEEE PAMI, 26(11):1475-1490, Nov. 2004.
2. J. Amores, N. Sebe, and P. Radeva. Fast spatial pattern discovery integrating boosting with constellations of contextual descriptors. In Proc. CVPR, volume 2, pages 769-774, CA, USA, June 2005.
3. A. Bar-Hillel, T. Hertz, and D. Weinshall. Object class recognition by boosting a part-based model. In Proc. CVPR, volume 2, pages 702-709, June 2005.
4. E. J. Bernstein and Y. Amit. Part-based statistical models for object classification and detection. In Proc. CVPR, volume 2, pages 734-740, 2005.
5. G. Borgefors. Hierarchical chamfer matching: A parametric edge matching algorithm. IEEE PAMI, 10(6):849-865, 1988.
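The boosted strong detector of section 3.2, Eq. (4), reduces to a thresholded weighted vote over the weak detectors. A toy sketch with made-up distances, thresholds and weights (not the learnt values, and a scalar distance standing in for D(h_i, I) of Eq. (3)):

```python
# A weak detector fires (returns 1) when its distance to the image's
# edge chains is below its learnt threshold, as in Eq. (3).
def weak_output(distance, threshold):
    return 1 if distance < threshold else 0

def strong_detector(distances, thresholds, weights, t_det=8.0):
    """Weighted vote of T weak detectors (Eq. 4), thresholded by t_det
    as in the detection procedure instead of taking the sign."""
    votes = sum(w * weak_output(d, th)
                for d, th, w in zip(distances, thresholds, weights))
    return votes, votes > t_det

# Toy numbers: 5 weak detectors, of which three fire
d = [0.05, 0.30, 0.10, 0.02, 0.50]
th = [0.20, 0.20, 0.20, 0.20, 0.20]
w = [4.0, 2.0, 3.0, 5.0, 1.0]
score, detected = strong_detector(d, th, w)
print(score, detected)  # fired: detectors 0, 2, 3 -> 4 + 3 + 5 = 12
```

With t_det = 8 (the value used throughout the paper's experiments), this toy image would be reported as containing an object instance.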
A Simple Demodulation Method for FBG Temperature Sensors Using a Narrow Band Wavelength Tunable DFB Laser

Lian Shan Yan, Senior Member, IEEE, Anlin Yi, Student Member, IEEE, Wei Pan, Member, IEEE, and Bin Luo, Member, IEEE

Abstract—A simple demodulation method for a fiber Bragg grating (FBG) temperature sensor is proposed and demonstrated, which uses a low-cost commercial temperature-controlled wavelength-scanning distributed-feedback (DFB) laser, a sensing FBG and a low-speed photodetector (PD). Relatively high resolution is achieved by utilizing an FBG with an enhanced temperature-sensitivity design. Such a method may be suitable for applications requiring moderate resolution and low-cost implementations.

Index Terms—Distributed-feedback (DFB) laser, fiber Bragg grating (FBG), temperature sensor.

I. INTRODUCTION

Fiber Bragg grating (FBG) sensors have attracted considerable attention for years, mainly due to their compact packages, high sensitivity, and immunity to electromagnetic interference, as well as their compatibility with existing fiber optical communication systems and feasibility of distributed measurements [1]-[3]. Most FBG sensors are based on the principle that the center wavelength of the FBG shifts as some perturbations are applied, such as temperature, strain, etc. [4].

In order to utilize FBGs as the sensing elements, it is necessary to develop some cost-effective wavelength demodulation schemes to measure the strain, temperature, or some other perturbation-induced Bragg wavelength shifts of sensors. So far, many demodulation schemes for wavelength-shift-based FBG sensors have been developed, including optical filtering [5]-[7], interferometry [4],[8],[9], and signal processing in the electronic domain, such as Fourier analysis [10],[11] or dispersion detection [12]. Some of these techniques have obtained very high wavelength resolutions, while some may suffer from relatively complicated and expensive optical or electrical devices and complex demodulation processes. In addition, precisely controlled tunable laser sources are also deployed for wavelength-division-multiplexing (WDM) or time-division-multiplexing (TDM) FBG sensor systems [13],[14].

Manuscript received May 04, 2010; revised July 08, 2010; accepted July 16, 2010. Date of publication July 23, 2010; date of current version September 01, 2010. The work was supported by the National Natural Science Foundation of China under Grant 60972003 and by the Program for New Century Excellent Talents in University (NCET-08-0821), Ministry of Education, China. The authors are with the Center for Information Photonics and Communications, School of Information Science and Technology, Southwest Jiaotong University, Chengdu 610031, Sichuan, China (e-mail: lsyan@; anlinyi@; wpan@; bluo@). Digital Object Identifier 10.1109/LPT.2010.2060478

However, for some applications where only moderate sensitivity and measurement speed are required, cost-effective approaches are highly desired, such as electrical power stations, permafrost areas, etc. In this letter, we propose and demonstrate a simple demodulation method for FBG sensors, which only requires a low-cost commercially available distributed-feedback (DFB) laser source, a low-speed photodetector (PD), and other standard components. The DFB works as both the light source and the key demodulation component for the whole sensor setup: as we control the temperature of the DFB laser, the output wavelength of the DFB may match the reflected FBG center wavelength; therefore, through simple optical power detection, we can derive the temperature applied on the FBG sensor. We verify the effectiveness of our method within the 0 °C to 80 °C temperature range with two different FBG sensors for comparison in terms of the measurement resolution.

II. CONFIGURATION AND PRINCIPLE

The schematic diagram of the proposed demodulation method for the FBG temperature sensor is shown in Fig. 1(a). The sensor system consists of a temperature-controlled wavelength-scanning DFB laser, a
circulator, a low-speed PD, a temperature controller (TC), and a sensing FBG. The straightforward but effective concept of DFB-based wavelength-shift demodulation of the FBG temperature sensor can be briefly explained as follows. As shown in Fig. 1(b), the temperature dependence of the FBG center wavelength and of the DFB output wavelength versus control temperature can be expressed as

    λ_FBG = λ_FBG,0 + k_FBG · T        (1)
    λ_DFB = λ_DFB,0 + k_DFB · T_DFB    (2)

where k_FBG and k_DFB are the wavelength-shift parameters of the sensing FBG and the DFB laser caused by temperature, respectively. At one temperature, the DFB laser emits light at λ_DFB (with a much narrower bandwidth than that of the FBG) into the sensing FBG through the circulator. If λ_DFB matches the center wavelength of the sensing FBG [Fig. 1(a)], the majority of the light is reflected by the sensing FBG and a power peak is observed at the PD. As the temperature of the DFB laser can be readily obtained from the TC, the temperature of the sensing FBG can easily be calculated from (1) and (2) [Fig. 1(b)]. If λ_DFB mismatches the FBG center wavelength [Fig. 1(a)], no or very weak power is received at the detector; we then change the output wavelength by adjusting the temperature of the DFB laser until the two wavelengths match with each other again [Fig. 1(a)], i.e., through automatically scanning the temperature of the DFB laser.

Fig. 1. Schematic diagram and operation principle of the proposed method: (a) FBG temperature sensor using a temperature-controlled wavelength scanning DFB laser; (b) acquiring the corresponding temperature from the relationship among the wavelength of the DFB, the control temperature of the DFB, and the center wavelength of the FBG. PD: photodetector; TC: temperature control block; OSA: optical spectrum analyzer.

III. EXPERIMENT AND RESULTS

Our experimental setup is similar to Fig. 1(a), with an MS9710C optical spectrum analyzer (OSA) inserted to monitor the optical spectrum reflected by the sensing FBG as a reference. A JDSU CQF series DFB laser and a low-speed (200-MHz bandwidth) photodetector (PD) are used. The precise TC unit and its interface are provided by our vendor, while the control function (manual adjustment or automatic scanning) is achieved through a computer data-acquisition (DAQ) card. Therefore, the output wavelength of the DFB laser can be tuned or scanned by adjusting its operation temperature through the control circuits. Note that the whole configuration can be further packaged into a single module (without the OSA) for practical applications, with all parameters optimized.

In our experiment, we tune the wavelength of the DFB laser by controlling the laser operation temperature (either manually or automatically through the control circuit). The wavelength tuning range is from 1554 to 1558 nm with a sensitivity of about 0.071 nm/°C, and the temperature control accuracy in our setup is claimed by our vendor to be better than 0.05 °C (not limited by the DAQ card, though). For comparison, we select two FBGs and put one into a conventional thermally optimized package [Fig. 2(a), called FBG1]. For the other FBG, we use a special procedure to enhance its temperature sensitivity, with a corresponding package [Fig. 2(b), called FBG2]. Both FBGs are
used to demonstrate the efficiency of our scheme. The 3-dB bandwidth and temperature sensitivity are 0.1 nm and 0.013 nm/°C for FBG1, and 0.25 nm and 0.052 nm/°C for FBG2. Fig. 3 shows the reflection spectra of both FBGs, with a slight difference in terms of center wavelength and 3-dB bandwidth. Fig. 4 shows the measured relationship between the output wavelength and the control temperature of our DFB laser after calibration (i.e., wavelength versus temperature), as well as the relationship between the center wavelength and the temperature for the FBG sensors. The FBG sensors are exposed to different liquid conditions to cover a wide temperature range (i.e., from cool ice to hot water).

Fig. 2. Pictures of the sensing FBGs: (a) FBG with conventional package; (b) FBG with enhanced sensitivity.

Fig. 3. Reflection spectra of the sensing FBGs (FBG1: conventional FBG; FBG2: FBG with enhanced sensitivity).

Fig. 4. Measured characteristics in terms of wavelength versus temperature of the DFB laser and the sensing FBGs (FBG1 and FBG2).

According to Fig. 4, the center wavelengths of the DFB laser and the sensing FBGs can be expressed as

    λ_DFB = λ_DFB,0 + 0.071 · T_DFB     (3)
    λ_FBG1 = λ_FBG1,0 + 0.013 · T       (4)
    λ_FBG2 = λ_FBG2,0 + 0.052 · T       (5)

where wavelengths are in nm and temperatures in °C. It is clearly indicated that the temperature resolution can be improved by the enhanced-sensitivity FBG design; for example, the gradient of FBG2 (0.052 nm/°C) is steeper than that of FBG1 (0.013 nm/°C).

Fig. 5. Measured power spectra reflected by sensing FBG2: (a) wavelength matched between the DFB and FBG2; (b) wavelength mismatched.

Fig. 5(a) and (b) show the power spectra reflected by the sensing FBG (FBG2) when the DFB laser is operated at 29.5 °C and 23.0 °C, respectively. In the case of Fig. 5(a), the output wavelength of the DFB laser is well matched with the center wavelength of the sensing FBG (and we get an output peak at the PD as well). According to Fig. 4 and the relationship expressed by (3) and (5), we can calculate that the temperature of the sensing point is 20.0 °C, which is very close to the directly measured temperature of 20.4 °C using a
commercial thermometer. (In fact, the difference between the FBG-measured and thermometer-measured temperatures is within 1 °C for all points during our experiments.)

IV. CONCLUSION

The proposed sensor scheme is relatively simple and low cost, while two issues need to be addressed: 1) the measurement speed is limited by the temperature scanning of the DFB laser, typically from tens of milliseconds to seconds; 2) the measurement resolution is limited by the temperature coefficient and bandwidth of the FBG, and by the temperature control resolution or accuracy of the DFB laser. Therefore, such an approach may be appropriate for applications with reasonable sensing requirements for speed and accuracy.

In summary, we demonstrated a simple demodulation method for the FBG temperature sensor utilizing a narrow band wavelength tunable DFB laser. Higher measurement resolution can be obtained by using an FBG with a larger temperature coefficient and narrower bandwidth, as well as more precisely temperature-controlled DFB lasers with a wider wavelength tuning range.

REFERENCES

[1] M. G. Kuzyk, Polymer Fiber Optics: Materials, Physics, and Applications. Boca Raton, FL: CRC, 2006.
[2] A. D. Kersey, M. A. Davis, H. J. Patrick, M. LeBlanc, K. P. Koo, C. G. Askins, M. A. Putnam, and E. J. Friebele, "Fiber grating sensors," J. Lightw. Technol., vol. 15, no. 8, pp. 1442–1463, Aug. 1997.
[3] D. Zhao, X. Shu, and L. Y. Zhang, "Fiber Bragg grating sensor interrogation using chirped fiber grating based Sagnac loop," IEEE Sensors J., vol. 3, no. 4, pp. 734–738, Apr. 2003.
[4] A. D. Kersey, T. A. Berkoff, and W. W. Morey, "Multiplexed fiber Bragg grating strain-sensor system with a fiber Fabry–Perot wavelength filter," Opt. Lett., vol. 18, no. 16, pp. 1370–1372, 1993.
[5] S. M. Melle, K. Liu, and R. M. Measures, "A passive wavelength demodulation system for guided-wave Bragg grating sensor," IEEE Photon. Technol. Lett., vol. 4, no. 4, pp. 516–518, Apr. 1992.
[6] M. A. Davis and A. D. Kersey, "All-fiber Bragg grating strain-sensor demodulation technique using a wavelength division coupler," Electron. Lett., vol. 30, no. 1, pp. 75–77, 1994.
[7] A. D. Kersey, T. A. Berkoff, and W. W. Morey, "Multiplexed fiber Bragg grating strain-sensor system with a fiber Fabry–Perot wavelength filter," Opt. Lett., vol. 18, no. 16, pp. 1370–1372, 1993.
[8] K. P. Koo and A. D. Kersey, "Bragg grating based laser sensor system with interferometric interrogation and wavelength division multiplexing," J. Lightw. Technol., vol. 13, no. 7, pp. 1243–1249, Jul. 1995.
[9] M. Song, S. Yin, and P. B. Ruffin, "Fiber Bragg grating strain sensor demodulation with quadrature sampling of a Mach–Zehnder interferometer," Appl. Opt., vol. 39, no. 7, pp. 1106–1111, 2000.
[10] M. M. Ohn, S. Y. Huang, R. M. Measures, and J. Chwang, "Arbitrary strain profile measurement within fiber gratings using interferometric Fourier transform technique," Electron. Lett., vol. 33, no. 14, pp. 1242–1243, 1997.
[11] M. A. Davis and A. D. Kersey, "Application of a fiber Fourier transform spectrometer to the detection of wavelength encoded signals from Bragg grating sensors," J. Lightw. Technol., vol. 13, no. 7, pp. 1289–1295, Jul. 1995.
[12] S. W. James, M. L. Dockney, and R. P. Tatam, "Photorefractive volume holographic demodulation of in-fiber Bragg grating sensors," IEEE Photon. Technol. Lett., vol. 8, no. 5, pp. 664–666, May 1996.
[13] C. C. Chan, W. Jin, H. L. Ho, and M. S. Demokan, "Performance analysis of a time-division-multiplexed fiber Bragg grating sensor array by use of a tunable laser source," IEEE J. Sel. Top. Quantum Electron., vol. 6, no. 5, pp. 741–749, May 2000.
[14] B. Dong, S. Y. He, S. Y. Hu, D. W. Tian, J. F. Lv, and Q. D. Zhao, "Time-division multiplexing fiber grating sensor with a tunable pulsed laser," IEEE Photon. Technol. Lett., vol. 18, no. 11, pp. 2620–2622, Nov. 2006.
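As an illustrative aside, the scan-and-match arithmetic behind the demodulation, i.e., scanning the DFB control temperature, locating the PD power peak, and inverting the FBG calibration line, can be sketched in a few lines of Python. This is a minimal sketch, not the authors' implementation: the slopes (0.071 nm/°C for the DFB laser, 0.052 nm/°C for FBG2) follow the calibration values reported above, while the wavelength intercepts, the scan grid, and the Lorentzian-shaped reflection profile are hypothetical stand-ins for real measured data.

```python
# Sketch of the scan-and-match demodulation logic; constants marked
# "hypothetical" are illustrative assumptions, not values from the letter.
K_DFB = 0.071           # DFB tuning sensitivity, nm/degC (reported above)
K_FBG2 = 0.052          # enhanced FBG2 sensitivity, nm/degC (reported above)
LAMBDA_DFB_0 = 1554.0   # hypothetical DFB wavelength at 0 degC, nm
LAMBDA_FBG2_0 = 1555.0  # hypothetical FBG2 center wavelength at 0 degC, nm

def dfb_wavelength(t_dfb):
    """Calibration line for the DFB: output wavelength vs. control temperature."""
    return LAMBDA_DFB_0 + K_DFB * t_dfb

def fbg_temperature(lambda_match):
    """Invert the FBG2 calibration line: sensor temperature from the matched wavelength."""
    return (lambda_match - LAMBDA_FBG2_0) / K_FBG2

def demodulate(power, t_scan):
    """Find the PD power peak over the DFB temperature scan and
    convert the matching DFB wavelength into the FBG temperature."""
    i_peak = max(range(len(power)), key=power.__getitem__)
    return fbg_temperature(dfb_wavelength(t_scan[i_peak]))

# Toy scan: simulate an FBG2 held at 20 degC with a Lorentzian-like
# reflection profile (0.05 nm half-width, hypothetical).
t_scan = [20.0 + 0.1 * i for i in range(100)]    # DFB control temps, degC
target = LAMBDA_FBG2_0 + K_FBG2 * 20.0           # simulated FBG2 center, nm
power = [1.0 / (1.0 + ((dfb_wavelength(t) - target) / 0.05) ** 2)
         for t in t_scan]
print(round(demodulate(power, t_scan), 1))       # -> 20.0
```

Note that the scan-step granularity (0.1 °C here) bounds the recoverable temperature resolution, mirroring the letter's observation that resolution is set by the DFB temperature-control accuracy together with the FBG temperature coefficient and bandwidth.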