Kernel-based multiple cue algorithm for object segmentation
Jian Wang and Ze-Nian Li
jwangc,li@cs.sfu.ca
School of Computing Science, Simon Fraser University
Burnaby, BC V5A 1S6, Canada

ABSTRACT
This paper proposes a novel algorithm to solve the problem of segmenting foreground moving objects from the background scene. The major cue used for object segmentation is motion information, which is initially extracted from MPEG motion vectors. Since the MPEG motion vectors are generated for simple video compression without any consideration of visual objects, they may not correspond to the true motion of the macroblocks. We propose a Kernel-based Multiple Cue (KMC) algorithm to deal with this inconsistency of MPEG motion vectors and use multiple cues to segment moving objects. KMC detects and calibrates camera movements, and then finds the kernels of moving objects. The segmentation starts from these kernels, which are textured regions with credible motion vectors. Besides motion information, it also makes use of color and texture to help achieve a better segmentation. Moreover, KMC can keep track of the segmented objects over multiple frames, which is useful for object-based coding. Experimental results show that KMC combines temporal and spatial information in a graceful way, which enables it to segment and track moving objects under different camera motions. Future work includes object segmentation in the compressed domain, motion estimation from raw video, etc.

Keywords: Motion vector, Locale, Kernel, Object segmentation, Multiple cues, Tracking

1. INTRODUCTION
The segmentation of foreground moving objects from the background scene in digital video has seen a high degree of interest in recent years, and many object segmentation algorithms have been proposed in the literature. QBIC and VideoQ use color as the major cue for segmentation. ASSET-2 detects corners in each frame to establish feature correspondences between consecutive frames, estimates motion information, and segments moving objects based on this motion information. Polana and Nelson use the Fourier transform to detect and recognize objects with repetitive motion patterns, such as walking people. Malassiotis and Strintzis use the difference map between consecutive frames and apply an active contour model (snake) algorithm to segment moving objects. These algorithms can be grouped into color- and texture-based or motion-based approaches. MPEG-4 is currently examining two temporal algorithms, and one spatial algorithm that is based on a watershed algorithm, for object segmentation. Li proposes feature localization instead of segmentation and introduces a new concept, the locale, which is a set of tiles (or pixels) that can capture a certain feature. Locales differ from segmented regions in three ways; Figure 1 shows two locales in an image. First, the tiles forming a locale may not be connected. Second, locales can be overlapping, because a tile may belong to multiple locales. Moreover, the union of all locales in an image does not have to be the entire image.

This paper proposes a Kernel-based Multiple Cue (KMC) algorithm, which uses MPEG motion vectors as the major cue to solve the problem of object segmentation. The KMC algorithm can deal with the inconsistency of the MPEG motion vectors and use multiple cues to refine the segmentation result. Moreover, a spatial segmentation algorithm is applied to extract the object shape, and an object tracking algorithm is proposed to keep track of the segmented objects over multiple frames.

2. OBJECT SEGMENTATION AND TRACKING

2.1. Definitions
In this paper, a kernel is defined as a group of neighboring macroblocks with credible motion vectors. To ensure that, a kernel must meet three criteria: motion consistency, low residue, and a texture constraint. Kernel detection is intended to find regions with credible motion vectors; subsequent object segmentation starts from these regions, so there is a better chance of getting a good result.

Figure 1. Localization vs. segmentation
Figure 2. Motion locale

This
paper introduces a related concept, the motion locale. Motion locales can be non-connected, incomplete, and overlapping, and they contain spatiotemporal information about a segmented moving object, such as object size, centroid position, color distribution, texture, and a link to its previous occurrence. Figure 2 shows the procedure of keeping track of a motion locale over multiple frames. Each rectangle represents a video frame and the irregular polygon stands for a moving object. The dashed line describes the motion trajectory of the centroid of the moving object. The information about a segmented object is saved in a motion locale, which represents a spatiotemporal entity. The multiple occurrences of the same object are reflected by the links between locales, and the trajectory of a moving object can be seen clearly from the motion locale.

2.2. Basic Assumptions
The KMC algorithm makes several assumptions: near-orthographic camera projection, rigid objects, and global affine motion. The paper assumes that the distance from the scene to the camera is large compared to the variation of depth in the scene; therefore, depth estimation can be neglected. Objects are assumed to be rigid so that the macroblocks composing an object can be clustered and segmented from the background based on motion information. The global motion caused by the camera is assumed to be affine, and a 2D affine motion model is used to describe the camera motion.

2.3. Flow Chart
Figure 3 shows the overall structure of the KMC algorithm. First, different camera motions, such as still (no motion), pan/tilt, and zoom, are detected and calibrated. After that, a kernel detection and merge procedure is applied to deal with the inconsistency of the MPEG motion vectors and to find the regions with reliable motion vectors; subsequent object segmentation starts from the detected kernels. The kernel detection and merge procedure segments a video frame into the background, the object kernels, and the undefined region. However, sometimes the motion cue itself is not enough to recover the whole object region: the detected object kernels may miss some object macroblocks or include some background macroblocks. To deal with that, a region growing and refinement process is applied to add the missing object macroblocks and remove the background ones. Based on color, texture, and other cues, this process considers the similarity between neighboring macroblocks to reassign a macroblock to the background or to an object kernel. Moreover, a spatial segmentation algorithm is performed to extract the object shape, and an object tracking algorithm is designed to detect the segmented objects and their motion trajectories over multiple frames.

Figure 3. Overall structure of KMC

2.4. Camera Motion Detection
A six-parameter 2D affine motion model is used to detect background motion in this paper. The background macroblocks are assumed to conform to an affine motion model with six parameters, which can be resolved using the centroid coordinates and motion vectors of at least three macroblocks. For a macroblock centered at (x, y) with motion vector (u, v), the estimation of the affine motion model is to solve the equations

u = a_1 x + a_2 y + a_3    (1)
v = a_4 x + a_5 y + a_6    (2)

where a_1 to a_6 are the affine parameters to be estimated.

2.4.1. Camera Pan/Tilt
When the camera is panning or tilting, there exists a major motion group moving in a certain direction that corresponds to the camera motion. Therefore, pan/tilt can be detected by checking whether there is a motion group that occupies a large part of the scene. In this paper, the threshold is set to 0.6, which means a pan/tilt is detected if the largest motion group occupies more than 60 percent of the entire scene. However, the method can make mistakes if the moving objects occupy a large part of the scene and move with similar motions. To avoid that, the motion of the outermost macroblocks is checked to see whether the boundary of the scene is moving. If the boundary of the scene is also moving, then the camera is
moving; otherwise, the camera is still.

Figure 4. Symmetrical Pair of Macroblocks under Zoom

2.4.2. Camera Zoom
Camera zoom is modeled using the six-parameter affine motion model. When the camera is zooming, the motion pattern shows some symmetry around the center, so zoom is detected by checking the number of symmetrical pairs of macroblocks. Figure 4 shows some macroblocks under camera zoom: the shaded macroblocks are located symmetrically around the center with opposite motion vectors, and they are linked by dashed lines. When the camera is zooming, the motion vectors display some degree of symmetry around the center of the frame. When the camera is zooming in, all the pixels move away from the center; when the camera is zooming out, all the pixels move towards the center. Let O = (x_O, y_O) denote the center of the frame. For each macroblock A in the frame, there exists a symmetrical macroblock B. The centers of A and B are represented by (x_A, y_A) and (x_B, y_B), and their motion vectors by MV_A and MV_B, respectively. The following relationships hold for A and B:

x_A + x_B = 2 x_O,  y_A + y_B = 2 y_O,  MV_A = -MV_B    (3)

The affine motion parameters are estimated using the centroid coordinates and motion vectors of three macroblocks that show the symmetrical motion pattern illustrated in Figure 4. Suppose the centroids of the macroblocks are (x_1, y_1), (x_2, y_2), and (x_3, y_3), and their motion vectors are (u_1, v_1), (u_2, v_2), and (u_3, v_3). The estimation of the affine motion model is to solve the equations

u_1 = a_1 x_1 + a_2 y_1 + a_3    (4)
v_1 = a_4 x_1 + a_5 y_1 + a_6    (5)
u_2 = a_1 x_2 + a_2 y_2 + a_3    (6)
v_2 = a_4 x_2 + a_5 y_2 + a_6    (7)
u_3 = a_1 x_3 + a_2 y_3 + a_3    (8)
v_3 = a_4 x_3 + a_5 y_3 + a_6    (9)

where a_1 to a_6 are the affine parameters to be estimated. To improve the result, all the data that conform to the zoom pattern are used to estimate the affine parameters, and their averages are used as the final affine motion parameters. In this paper, video clips are used for experiments, and at least six pairs of macroblocks with the above symmetrical motion pattern are needed to estimate the affine motion parameters.

2.5. Kernel Detection and Merge
Kernel detection is intended to find the regions with credible motion vectors. Neighboring macroblocks with similar motion and small motion estimation error are merged to form a kernel. A kernel can be of any size and any shape. Once a kernel is formed, it is checked to see whether it has some texture; if the kernel has no texture, it is considered unreliable and removed from the kernel list.

2.5.1. Motion Consistency
The macroblocks forming a kernel must conform to a consistent motion pattern. The macroblocks are merged into multiple groups based on their motion similarity: neighboring macroblocks are compared, and if their motions are similar, they are put into one motion group. The similarity is measured as

d(M, N) = ||MV(M) - MV(N)||    (10)

where MV(M) denotes the motion vector of a macroblock M and MV(N) denotes the motion vector of a neighboring macroblock N of M. If d(M, N) is smaller than a certain threshold (set to 3 in this paper), the two macroblocks are put into one group as a candidate kernel.

2.5.2. Low Estimation Error
The motion estimation error of the whole kernel should be smaller than a certain threshold, which is set to 0.85 in this paper.
This is to make sure that the motion estimation is accurate enough. Motion estimation is applied to each macroblock of a candidate kernel. The motion estimation error for a macroblock M with motion vector v_M is computed as:

err(M, v_M) = 1 - Nc / Nt    (11)

where Nc is the number of correctly estimated pixels and Nt is the total number of pixels in the macroblock, which is 256 in this paper. The motion estimation error of an entire kernel K is described in the following equation:

Err(K) = (1/|K|) Σ_{M ∈ K} err(M, v_M)    (12)

where |K| denotes the number of macroblocks that belong to kernel K, and M and v_M denote a macroblock and its motion vector, respectively. err(M, v_M) is the percentage of wrong pixels in the motion compensated macroblock M with motion vector v_M.

2.5.3. Texture Constraint

A texture constraint is applied to each candidate kernel to ensure that the kernel contains some texture. Each kernel is checked for its percentage of edge points; if the percentage is smaller than a threshold, the kernel is removed. The edge points are detected using a Sobel edge detector. The texture constraint is based on the observation that the motion vectors of flat regions are easily affected by noise and are not reliable, while the motion vectors of textured regions are more reliable and more likely to correspond to the real motion.

2.5.4. Merge Macroblocks into Kernels

At the beginning of the algorithm, each macroblock is assumed to be a candidate kernel. Then neighboring macroblocks are merged into larger kernels as described below. For a macroblock M neighboring a candidate kernel K, if its motion estimation error is lower than a certain threshold, its motion difference with the kernel is checked. The difference between their motions is:

D(M, M') = ||v_M - v_M'||    (13)

where M' is a macroblock of kernel K that is neighboring M. If the difference is smaller than a certain threshold, the macroblock is added to the kernel, and the information of the kernel, such as its size and motion, is updated as follows. The size of a kernel K is measured as the number of macroblocks forming K. The kernel motion is the average motion of all the macroblocks of K. It can be described by the following equation:

v_K = (1/|K|) Σ_{i ∈ K} v_i    (14)

where |K| denotes the number of macroblocks that belong to kernel K and v_i denotes the motion vector of macroblock i. After kernel detection, a video frame is segmented into kernels and undefined regions. The undefined regions fail to be clustered into kernels because they have an inconsistent motion pattern, a high motion estimation error, or no internal texture.

2.5.5. Background Kernel Detection

If the camera is not moving, the background kernel is the one containing still macroblocks. Otherwise, the background kernel is the largest kernel conforming to the camera motion.

2.5.6. Kernel Merge with the Background Region

The criterion for kernel merge is based on motion. If the motion difference between a detected kernel and the background is smaller than a threshold, the kernel is merged with the background region according to the following rules.

1. If the camera is still and a kernel has no motion or almost no motion, then add this kernel to the background.
2. If the camera is moving, then detect the background kernel first. If the difference between this kernel's motion and the background motion is smaller than a motion similarity threshold, then add this kernel to the background kernel.

After kernel detection and merge, a frame is usually segmented into three parts: the background, the object kernels, and the undefined region. The background corresponds to the region that moves consistently with the camera motion.
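The kernel-averaging and background-merge rules above can be sketched as follows. This is a minimal illustration: the kernel representation, the function names, and the similarity threshold value are assumptions; only the rule structure (Eq. (14) averaging, then the still-camera and moving-camera cases) follows the text.

```python
def kernel_motion(motion_vectors):
    """Eq. (14): the kernel motion is the average of its macroblock motion
    vectors."""
    n = len(motion_vectors)
    return (sum(v[0] for v in motion_vectors) / n,
            sum(v[1] for v in motion_vectors) / n)


def merge_with_background(kernels, camera_moving, background_motion,
                          sim_threshold=1.0):
    """Apply the two merge rules: when the camera is still, a near-still kernel
    joins the background; when the camera is moving, a kernel whose average
    motion is close to the background motion joins the background.
    Returns (background kernel ids, object kernel ids)."""
    background, objects = [], []
    for k in kernels:
        vx, vy = kernel_motion(k["motion_vectors"])
        if camera_moving:
            dx, dy = vx - background_motion[0], vy - background_motion[1]
            close = (dx * dx + dy * dy) ** 0.5 < sim_threshold
        else:
            close = (vx * vx + vy * vy) ** 0.5 < sim_threshold
        (background if close else objects).append(k["id"])
    return background, objects


# Example: still camera; kernel 0 barely moves, kernel 1 moves right.
kernels = [{"id": 0, "motion_vectors": [(0, 0), (0.2, 0)]},
           {"id": 1, "motion_vectors": [(4, 0), (5, 0)]}]
bg, objs = merge_with_background(kernels, camera_moving=False,
                                 background_motion=(0, 0))
print(bg, objs)  # prints [0] [1]
```

Any kernel failing the closeness test stays in the object list, and macroblocks belonging to no kernel at all remain in the undefined region for the multiple cue stage.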
The object kernels correspond to the moving objects in the scene, and the undefined region contains macroblocks that cannot be assigned to either the background kernel or the object kernels based on the motion cue; these macroblocks are further processed by a multiple cue algorithm.

2.6. Multiple Cue Processing

The kernel detection and merge process using motion information may not be enough to recover the whole object region: the detected object regions may miss some object macroblocks or include some background macroblocks. Therefore, a post-processing step is needed to add the missing object macroblocks and remove the background ones. This region growing and refinement process is based on color, texture, and other cues. The algorithm considers the similarity between macroblocks based on these cues to reassign a macroblock to the background or to an object kernel, and aims to segment a frame into regions consisting of macroblocks similar in motion or color, or consistent in texture. The similarity between two macroblocks A and B is defined as follows:

S(A, B) = Σ_i δ(h_A(i), h_B(i))    (15)

where h_A(i) and h_B(i) are the chromaticity indices of the i-th pixel in blocks A and B, and δ returns 1 if h_A(i) equals h_B(i), and 0 otherwise.

2.7. Object Tracking

A motion locale list is used to link the multiple occurrences of an object and obtain its motion trajectory. Each time an object is segmented, it is considered a motion locale. When a motion locale is to be added to the motion locale list, the list is searched for a matching one. The matching is based on color distribution, size, and motion continuity. The search is intended to find the occurrence of the object in a previous frame and link its occurrences over multiple frames. The confidence value of a motion locale is increased by one if a good match is found. If no good match can be found for a motion locale, it is added to the motion locale list as a new locale with its confidence value set to zero. The motion locale list serves two purposes: object tracking and outlier removal. The motion
locales with no link to other motion locales are most probably outliers and are removed from the list. Figure 5 shows how a motion locale is added to the motion locale list. The information of each segmented object, such as frame number, locale number, bounding rectangle, centroid, color histogram, size, and matching locale, is stored in a spatiotemporal motion locale list. The list records the links between motion locales and the confidence value of each motion locale. The motion trajectory of a motion locale can be computed by tracing it through the list. The confidence value associated with a motion locale indicates the degree of its spatiotemporal consistency: the higher the confidence value, the more spatiotemporally consistent it is.

Figure 5. Motion locale and locale list

3. EXPERIMENTAL RESULTS

3.1. Source Video Clips

The Bike clip has different camera motions, such as still, pan/tilt, and zoom, which makes it suitable for testing the ability of the algorithm to detect the camera motion correctly. It also has a moving object that exists in multiple frames, which is used to produce experimental results on object segmentation and tracking. The Tennis clip has many frames with camera zooming motion. It is chosen to show the experimental results of camera motion detection and object segmentation under different camera motions, especially under zooming.

3.2. Time and Speed

KMC uses MPEG motion vectors as the source of motion information and does not estimate optical flow. Therefore, it is very fast and runs in real time.

3.3. Threshold Selection

The KMC algorithm uses several thresholds for kernel detection and merge and for multiple cue refinement. The thresholds are chosen empirically and remain the same for all testing videos. The threshold for motion estimation error is set to 0.85, and the threshold for multiple cue refinement is set to 0.8.

3.4. Results

Table 1 shows the camera pan detection result for the bike video. The first column is the frame number, the second column shows the angle of the camera pan, and the last column shows the magnitude of the pan in pixels. For example, the first row of the table shows that at frame 3 the camera is panning towards the southeast over a distance of 5 pixels. The angle and distance of the camera pan are extracted from the largest motion group in the scene, which corresponds to the camera motion.

Table 1. Camera Pan Parameters of Bike Video

Frame  Angle  Magnitude
3      304    5
9      304    7
15     317    6
27     51     5
33     38     7

Figure 6. Camera zooming detection.

Figure 6 is an example of camera zooming from another video clip. From the motion vector map, it is clear that the camera is zooming in: the motion vectors point outward from the center of the frame. The right part of the frame does not conform to this motion pattern because of the object motion. Table 2 shows the camera zooming detection result for the tennis video. This video has many frames with camera zooming, which is modeled by an affine motion model. The table shows the frame number and the computed affine motion parameters.
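The affine parameters reported in Table 2 come from solving Eqs. (4)-(9) on triples of symmetric macroblocks and averaging over all conforming triples. A sketch of the per-triple solve is shown below; it assumes the equations map each centroid (x, y) to its motion-compensated position (u, w), and the solver and names are illustrative, not from the paper.

```python
def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    x = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        x.append(det(M) / d)
    return x


def estimate_affine(centroids, compensated):
    """Estimate (a1..a6) from three macroblock centroids (x, y) and their
    motion-compensated positions (u, w), per Eqs. (4)-(9):
        u = a1*x + a2*y + a3,   w = a4*x + a5*y + a6."""
    A = [[x, y, 1.0] for (x, y) in centroids]
    a123 = solve3(A, [u for (u, w) in compensated])
    a456 = solve3(A, [w for (u, w) in compensated])
    return a123 + a456


# Example: a pure 10% zoom-in about the origin, so u = 1.1*x and w = 1.1*y,
# which should recover a1 = a5 = 1.1 and all other parameters 0.
pts = [(10.0, 0.0), (0.0, 10.0), (-10.0, -10.0)]
mvs = [(1.1 * x, 1.1 * y) for (x, y) in pts]
print(estimate_affine(pts, mvs))
```

With a1 and a5 slightly above 1 and a2, a4 near 0, the recovered parameters have the same shape as the zoom rows of Table 2.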
The first column is the frame number at which camera zooming happens. Columns 2 to 7 correspond to the six affine motion parameters a1 to a6.

Table 2. Camera Zooming Pattern Modeled by an Affine Motion Model.

Frame  a1     a2      a3       a4      a5     a6
27     1.056  -0.016  -7.226   0.000   1.052  -5.977
33     1.093  -0.003  -14.840  -0.010  1.117  -11.794
39     1.144  -0.000  -24.046  0.001   1.138  -16.567
45     1.140  -0.005  -22.842  -0.001  1.147  -17.272
51     1.142  0.003   -24.111  0.000   1.146  -17.333
57     1.144  -0.002  -23.905  0.000   1.151  -17.710
63     1.145  -0.013  -22.629  -0.006  1.146  -16.054
69     1.124  -0.003  -20.465  -0.003  1.130  -14.860
75     1.130  -0.009  -20.702  0.005   1.120  -15.009
81     1.125  -0.013  -19.707  -0.006  1.144  -15.582
87     0.914  0.683   -61.718  -0.012  1.130  -13.016

Figure 7 is an example of object segmentation on frame 9 of the bike video. The result shows four rectangular areas. The upper right area shows the current frame image, and the upper left one visualizes the motion vectors of this frame, with each small rectangle standing for a macroblock of 16 x 16 pixels. The bottom right area shows the final object segmentation result, and the bottom left area shows the result of kernel detection and merge. The shaded regions denote the detected kernels.

Figure 7. Object Segmentation Result on Frame 9 of Bike Video.

The black region corresponds to the background kernel, which may be disjoint. It is desirable to allow the background kernel to be disjoint, because the foreground objects may cover some of its parts. It can be seen that the detected kernels are the regions with reliable motion vectors. These kernels all have some edge points, while the large flat regions are not detected as kernels because they do not have enough edge points. These kernels also have good motion consistency among their macroblocks. By comparing the images in the middle row with those in the bottom row, it can be seen that three object macroblocks are missed by the kernel-based segmentation. The reason is that these three macroblocks are located at the boundary of the object and have different motion vectors; therefore, motion-based segmentation cannot find them. However, the multiple cue procedure refines the segmentation result and finds the three macroblocks.

Figure 8 shows the result of object segmentation on frame 9 of the tennis video. The result shows six images. The first two images on the top show the original image and its motion vector map, with macroblocks of 16 x 16 pixels. The pointer in the middle of a macroblock illustrates its motion vector: the direction of the pointer is the direction in which the macroblock is moving, and the length of the pointer is the distance, in pixels, that the macroblock actually moves. The two images in the middle show the results after kernel detection and merge, and the bottom ones show the results after multiple cue refinement. The black region corresponds to the background; most of the background macroblocks do not move, which means the camera is still in this frame. The detected kernel corresponds to the moving hand in the scene, and the white ball is missed because it has wrong motion vectors. By comparing the middle images with the bottom ones, it can be seen that the gray macroblocks with zero motion vectors are the new ones added to the segmented object kernel by using multiple cues. The motion of these macroblocks does not conform to a consistent object motion model.
However, they are still recovered. This shows that the multiple cue approach is helpful in recovering an object that does not move consistently: only the part of an object that moves consistently with a certain motion model can be detected using motion information, and the other parts of the object may be detected using other cues, such as color and texture.

Figure 8. Object Segmentation Result on Frame 9 of Tennis Video.

Figure 9. Object Tracking over Multiple Frames on Bike Video.

Figure 9 illustrates the object tracking result on the bike video. It shows the result of object segmentation and tracking over multiple frames. The object is segmented, and a bounding rectangle is formed to cover the whole object. The list shows the spatiotemporal segmentation result, including the object segmentation results both when the object is moving and when it is still in the scene. When the object is moving, motion and other cues are used to segment the object from the background; when it is still, a projection and spatial segmentation algorithm is applied to segment the object.

Table 3 shows the performance of the KMC algorithm. The first column is the frame number. The second column is the number of pixels in the manually segmented object, which is used as ground truth to measure the performance of the segmentation results of KMC. The third column shows the number of object pixels in the segmented object, and the fourth column shows the percentage of correctly segmented pixels, which is column 3 divided by column 2. The fifth column is the number of background pixels in the segmented object, and the last column shows the percentage of background pixels. These two percentages show how good the segmentation is. Since both percentages are obtained by dividing pixel counts of the KMC-segmented object by the number of manually segmented object pixels, the two percentages may add up to more than 100%. From the table, it can be seen that the correct ratios are all above 80%, while the wrong ratios are all below 18%.

Table 3. Performance of spatial segmentation.

Frame  ObjectPels(manual)  ObjectPels  CorrectRate  BackgroundPels  WrongRate
3      2691                2298        85.4%        404             15.0%
6      2784                2326        83.5%        344             12.4%
9      2810                2512        89.4%        496             17.6%
12     2933                2610        89.0%        522             17.8%
15     2889                2561        88.7%        472             16.3%
18     3348                2685        80.2%        469             14.0%
21     3378                2610        88.6%        608             18.0%

4. CONCLUSION AND FUTURE WORK

This paper shows that the KMC algorithm can perform motion analysis directly on MPEG motion vectors and segment moving objects from the background scene under different camera motions, such as pan, tilt, and zoom. It combines multiple cues, such as motion, color, texture, and other cues, in a graceful way to achieve good segmentation results. The algorithm makes use of spatiotemporal information to keep track of segmented moving objects. The proposed KMC algorithm can be improved in several ways. The camera motion detection procedure could use a Singular Value Decomposition (SVD) algorithm to estimate the affine camera motion parameters. The KMC algorithm cannot detect moving objects when the kernel detection procedure fails to produce a good result; an Expectation Maximization (EM) statistical method could be used to make the kernel detection process more robust. It is also an interesting topic to study how to apply the segmentation results of KMC to object-based video coding, such as MPEG-4. Moreover, even though KMC makes use of some compressed-domain information such as MPEG motion vectors, it is still far from working directly in the compressed domain.
Considering the compression ratio of MPEG videos, it is desirable to segment moving objects directly in the compressed domain so that both the processing speed and efficiency can be improved. Finally, it is reasonable to anticipate that KMC will produce good results if the motion vectors of macroblocks correspond to their real motion. Therefore, optimization of MPEG motion vectors is desirable in order to obtain better motion vectors.

ACKNOWLEDGMENTS

This work was supported in part by the Canadian National Science and Engineering Research Council under grant OGP-36727, and by the grant for the Telelearning Research Project in the Canadian Network of Centres of Excellence.

REFERENCES

1. M. Flickner, et al., "Query by image and video content: the QBIC system," IEEE Computer 28(9), pp. 23-32, 1995.
2. S. F. Chang, et al., "VideoQ: an automated content based video search system using visual cues," in Proc. ACM Multimedia 97, pp. 313-324, 1997.
3. S. Smith, "Asset-2: Real-time motion segmentation and object tracking," Proc. ICCV, pp. 237-250, 1995.
4. R. Polana and R. Nelson, "Detection and recognition of periodic, non-rigid motion," International Journal of Computer Vision 23(3), pp. 261-282, 1997.
5. Malassiotis and Strintzis, "Object-based coding of stereo image sequences," IEEE Trans. on Circuits and Systems for Video Technology 7(6), pp. 891-902, 1997.
6. ISO/IEC JTC1/SC29/WG11 N2202, Coding of moving pictures and audio, March 1998.
7. R. Mech and M. Wollborn, "A noise robust method for segmentation of moving objects in video sequences considering a moving camera," Signal Processing 66(2), pp. 203-217, 1998.
8. A. Neri, S. Colonnese, and G. Russo, "Automatic moving objects and background segmentation by means of higher order statistics," SPIE 3024, pp. 8-14, 1997.
9. L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on PAMI 13(6), pp. 583-598, 1991.
10. Z. Li, O. Zaiane, and Z. Tauber, "Illumination invariance and object model in content-based image and video retrieval," Journal of Visual Communication and Image Representation 10(3), pp. 219-244, 1999.
11. M. Lee, et al., "A layered video object coding system using sprite and affine motion model," IEEE Trans. on Circuits and Systems for Video Technology 7(1), pp. 130-144, 1997.
12. W. H. Press, et al., Numerical Recipes in C, Cambridge University Press, 1992.