Kernel-based multiple cue algorithm for object segmentation
Jian Wang and Ze-Nian Li
jwangc,li@cs.sfu.ca
School of Computing Science
Simon Fraser University
Burnaby, BC V5A 1S6, Canada

ABSTRACT

This paper proposes a novel algorithm to solve the problem of segmenting foreground moving objects from the background scene. The major cue used for object segmentation is the motion information, which is initially extracted from MPEG motion vectors. Since the MPEG motion vectors are generated for simple video compression without any consideration of visual objects, they may not correspond to the true motion of the macroblocks. We propose a Kernel-based Multiple Cue (KMC) algorithm to deal with the above inconsistency of MPEG motion vectors and use multiple cues to segment moving objects. KMC detects and calibrates camera movements, and then finds the kernels of moving objects. The segmentation starts from these kernels, which are textured regions with credible motion vectors. Besides motion information, it also makes use of color and texture to help achieve a better segmentation. Moreover, KMC can keep track of the segmented objects over multiple frames, which is useful for object-based coding. Experimental results show that KMC combines temporal and spatial information in a graceful way, which enables it to segment and track moving objects under different camera motions. Future work includes object segmentation in the compressed domain, motion estimation from raw video, etc.

Keywords: Motion vector, Locale, Kernel, Object segmentation, Multiple cues, Tracking

1. INTRODUCTION

The segmentation of foreground moving objects from the background scene in digital video has seen a high degree of interest in recent years. Many object segmentation algorithms have been proposed in the literature. QBIC and VideoQ use color as the major cue for segmentation. ASSET-2 detects corners in each frame to establish the feature correspondence between consecutive frames, estimates motion information, and segments moving objects based on this motion information. Polana and Nelson use the Fourier Transform to detect and recognize objects with repetitive motion patterns, such as walking people.
Malassiotis and Strintzis use the difference map between consecutive frames and apply an active contour model (snake) algorithm to segment moving objects. These algorithms can be grouped into color- and texture-based or motion-based. MPEG-4 is currently examining two temporal algorithms, and one spatial algorithm that is based on a watershed algorithm for object segmentation. Li proposes feature localization instead of segmentation and introduces a new concept, the locale, which is a set of tiles (or pixels) that can capture a certain feature. Locales are different from segmented regions in three ways. Figure 1 shows two locales in an image. The tiles forming the locales may not be connected. The locales can be overlapping, because a tile may belong to multiple locales. Moreover, the union of all locales in an image does not have to be the entire image.

This paper proposes a Kernel-based Multiple Cue (KMC) algorithm, which uses MPEG motion vectors as the major cue to solve the problem of object segmentation. The KMC algorithm can deal with the inconsistency of the MPEG motion vectors and use multiple cues to refine the segmentation result. Moreover, a spatial segmentation algorithm is applied to extract the object shape and an object tracking algorithm is proposed to keep track of the segmented objects over multiple frames.

2. OBJECT SEGMENTATION AND TRACKING

2.1. Definitions

In this paper, a kernel is defined as a group of neighboring macroblocks with credible motion vectors. In order to ensure that, the kernel must meet three criteria, which are motion consistency, low residue, and texture constraint. The detection of kernels is intended to find regions with credible motion vectors, and subsequent object segmentation will start from these regions so that there is a better chance to get a good result.

Figure 1. Localization vs. segmentation
Figure 2. Motion locale

This paper introduces the concept of a motion locale, which is similar to a locale. Motion locales can be non-connected, incomplete, and overlapping, and they contain spatiotemporal information about a segmented moving object such as object size, centroid position, color distribution, texture, and a link to its previous occurrence. Figure 2 shows the procedure of keeping track of a motion locale over multiple frames. Each rectangle represents a video frame and the irregular polygon stands for a moving object. The dashed line describes the motion trajectory of the centroid of the moving object. The information about a segmented object is saved in a motion locale, which represents a spatiotemporal entity. The multiple occurrences of the same object are reflected by the links between locales. The trajectory of a moving object can be seen clearly from the motion locale.

2.2. Basic Assumptions

The KMC algorithm has several assumptions, which are the near-orthographic camera projection assumption, the rigid object assumption, and the global affine motion assumption. The paper assumes that the distance from the scene to the camera is far compared to the variation of depth in the scene. Therefore, depth estimation can be neglected. The rigid object is assumed so that the macroblocks composing the object can be clustered and segmented from the background based on motion information. The global motion caused by the camera is assumed to be affine motion, and a 2D affine motion model is used to describe the camera motion.

2.3. Flow Chart

Figure 3 shows the overall structure of the KMC algorithm. At first, different camera motions, such as still (no motion), pan/tilt, and zoom, are detected and calibrated. After that, a kernel detection and merge procedure is applied to deal with the inconsistency of the MPEG motion vectors and to find the regions with reliable motion vectors, and subsequent object segmentation will start from the detected kernels. The kernel detection and merge procedure segments a video frame into the background, the object kernels, and the undefined region. However, sometimes the motion cue itself is not enough to recover the whole object region. The detected object kernels may miss some object macroblocks or add some background macroblocks. In order to deal with that, a region growing and refinement process is applied to add the object macroblocks and remove the background ones. This process is based on the color, texture, and other cues by considering the similarity between neighboring macroblocks to reassign a macroblock to the background or an object kernel. Moreover, a spatial segmentation algorithm is performed to extract the object shape and an object tracking algorithm is designed to detect the segmented objects and their motion trajectory over multiple frames.

Figure 3. Overall structure of KMC

2.4. Camera Motion Detection

A six-parameter 2D affine motion model is used to detect background motion in this paper. The background macroblocks are assumed to conform to an affine motion model, which has six parameters. The three pairs of parameters can be resolved by using the centroid coordinates and motion vectors of at least three macroblocks. For a macroblock centered at (x, y) with motion vector (u, v), the estimation of the affine motion model is to solve the equations

u = a_1 x + a_2 y + a_3    (1)
v = a_4 x + a_5 y + a_6    (2)

where a_1 to a_6 are the affine parameters to be estimated.

2.4.1. Camera Pan/Tilt

When the camera is panning/tilting, there exists a major motion group moving towards a certain direction that corresponds to the camera motion. Therefore, pan/tilt can be detected by checking if there is a motion group that occupies a large part of the scene. In this paper, the threshold is set to 0.6, which means a pan/tilt is detected if the largest motion group occupies more than 60 percent of the entire scene. However, the method can make mistakes if the moving objects occupy a large part of the scene and move with similar motions. In order to avoid that, the motion of the outermost macroblocks is checked to see if the boundary of the scene is moving or not. If the boundary of the scene is also moving, then the camera is moving; otherwise, the camera is still.

Figure 4. Symmetrical pair of macroblocks under zoom

2.4.2. Camera Zoom

Camera zoom is modeled using a six-parameter affine motion model. When the camera is zooming, the motion pattern shows some symmetry around the center. It is detected by checking the number of symmetrical pairs of macroblocks. Figure 4 shows some macroblocks under camera zoom. The shaded macroblocks are located symmetrically around the center with opposite motion vectors. They are linked by dashed lines. When the camera is zooming, the motion vectors will display some degree of symmetry around the center of the frame. When the camera is zooming in, all the pixels should move away from the center; when the camera is zooming out, all the pixels should move towards the center.

Let (x_c, y_c) denote the center of the frame. For each macroblock A in the frame, there exists a symmetrical macroblock B. The centers of A and B are represented by (x_A, y_A) and (x_B, y_B) respectively. The motion vectors of A and B are (u_A, v_A) and (u_B, v_B). The following relationships hold for A and B:

x_A + x_B = 2 x_c,  y_A + y_B = 2 y_c,  u_A = -u_B,  v_A = -v_B    (3)

The affine motion parameters are estimated using the centroid coordinates and motion vectors of three macroblocks that show the symmetrical motion pattern illustrated in Figure 4. Suppose the centroids of the macroblocks are (x_1, y_1), (x_2, y_2), and (x_3, y_3) and their motion vectors are (u_1, v_1), (u_2, v_2), and (u_3, v_3). The estimation of the affine motion model is to solve the equations

u_i = a_1 x_i + a_2 y_i + a_3,  i = 1, 2, 3    (4)-(6)
v_i = a_4 x_i + a_5 y_i + a_6,  i = 1, 2, 3    (7)-(9)

where a_1 to a_6 are the affine parameters to be estimated. In order to improve the result, all the data that conform to the zoom pattern are used to estimate the affine parameters, and their averages are used as the final affine motion parameters. In this paper, video clips are used for experiments and at least six pairs of macroblocks with the above symmetrical motion pattern are needed to estimate the affine motion parameters. The average of the estimated parameters is used as the affine parameters for the camera motion.

2.5. Kernel Detection and Merge

Kernel detection is intended to find the regions with credible motion vectors. The neighboring macroblocks with similar motion and small motion estimation error are merged to form a kernel. A kernel can be of any size and any shape. Once a kernel is formed, it is checked to see if the kernel has some texture. If the kernel has no texture, then it is considered unreliable and removed from the kernel list.

2.5.1. Motion Consistency

The macroblocks forming a kernel must conform to a consistent motion pattern. The macroblocks are merged into multiple groups based on their motion similarity. The neighboring macroblocks are compared and, if their motions are similar, they will be put into one motion group. The similarity is measured as follows:

D(M, N) = |v_M - v_N|    (10)

where v_M denotes the motion vector of a macroblock M, and v_N denotes the motion vector of a neighboring macroblock N of M. If D(M, N) is smaller than a certain threshold (set to 3 in this paper), the two macroblocks will be put into one group as a candidate kernel.
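To make the affine estimation of Section 2.4 (Eqs. (1)-(9)) concrete, the sketch below fits the six parameters by least squares from macroblock centroids and motion vectors. This is a minimal illustration rather than the authors' implementation: the function name, the NumPy dependency, and the use of an ordinary least-squares solve over all contributing macroblocks are assumptions of the sketch.

```python
import numpy as np

def fit_affine_motion(centroids, motion_vectors):
    """Least-squares fit of the six-parameter affine model
        u = a1*x + a2*y + a3,   v = a4*x + a5*y + a6
    from macroblock centroids (x, y) and their motion vectors (u, v).
    At least three macroblocks are needed, as in Section 2.4."""
    pts = np.asarray(centroids, dtype=float)       # shape (n, 2)
    mvs = np.asarray(motion_vectors, dtype=float)  # shape (n, 2)
    if len(pts) < 3:
        raise ValueError("need at least three macroblocks")
    # Common design matrix [x, y, 1] for both the u- and v-equations.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    a123 = np.linalg.lstsq(A, mvs[:, 0], rcond=None)[0]
    a456 = np.linalg.lstsq(A, mvs[:, 1], rcond=None)[0]
    return np.concatenate([a123, a456])            # (a1, ..., a6)

# Example: three macroblocks moving uniformly 4 pixels to the right
# yield a pure translation (a3 = 4, the other parameters ~0).
params = fit_affine_motion([(8, 8), (24, 8), (8, 24)], [(4, 0), (4, 0), (4, 0)])
```

With the fitted parameters, the background motion predicted at any macroblock centre can be compared against its MPEG motion vector to decide whether the block follows the camera motion.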
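For the motion-consistency criterion of Section 2.5.1 (Eq. (10)), one simple way to form candidate kernels is a flood fill over the macroblock grid that joins 4-neighbours whose motion vectors differ by less than the threshold of 3. The array layout and helper below are assumptions made for illustration; the paper's implementation may differ.

```python
import numpy as np
from collections import deque

def group_by_motion(mv_field, threshold=3.0):
    """Flood fill over the macroblock grid: 4-neighbours whose motion
    vectors differ by less than `threshold` (Eq. (10)) receive the same
    label and form one candidate kernel.  `mv_field` is an (H, W, 2)
    array of per-macroblock motion vectors; an (H, W) label map is returned."""
    h, w, _ = mv_field.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for r in range(h):
        for c in range(w):
            if labels[r, c] != -1:
                continue
            labels[r, c] = next_label
            queue = deque([(r, c)])
            while queue:
                cr, cc = queue.popleft()
                for nr, nc in ((cr - 1, cc), (cr + 1, cc), (cr, cc - 1), (cr, cc + 1)):
                    if 0 <= nr < h and 0 <= nc < w and labels[nr, nc] == -1:
                        if np.linalg.norm(mv_field[cr, cc] - mv_field[nr, nc]) < threshold:
                            labels[nr, nc] = next_label
                            queue.append((nr, nc))
            next_label += 1
    return labels
```

Each resulting group would then still have to pass the low-residue and texture checks of Sections 2.5.2 and 2.5.3 before being accepted as a kernel.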
This is to make sure that the motion estimation is accurate enough.Motion estimation is applied to each macroblock of a candidate kernel.The motion estimation error for a macroblock M with motion vector is computed as:(11) where is the number of correctly estimated pixels and is the total number of pixels in the macroblock,which is256in this paper.The motion estimation error of an entire kernel K is described in the following equation.(12)where denotes the macroblocks that belong to kernel K,and denote the macroblock and its motion vector respectively.is the percentage of wrong pixels in the motion compensated macroblock with motion vector.2.5.3.Texture constraintTexture constraint is applied to each candidate kernel to ensure that a kernel contains some texture.Each kernel is checked for the percentage of edge points.If the percentage is smaller than a threshold,the kernel is removed.The edge points are detected using a Sobel edge detector.Texture constraint is based on the observation that the motion vectors offlat regions can be easily affected by noises and are not reliable;while motion vectors of textured regions are more reliable and more likely to correspond to the real motion.2.5.4.Merge macroblocks into kernelsAt the beginning of the algorithm,each macroblock is assumed to be a candidate kernel.Then neighboring macroblocks are merged into large kernels as described below.For a macroblock M that is neighboring to a candidate kernel K,if its motion estimation error is lower than a certain threshold,then its motion difference with the kernel is checked.The difference between their motions is:(13) where is a macroblock of kernel K that is neighboring to M.If the difference is smaller than a certain threshold,then the macroblock is added to the kernel and the information of the kernel,such as its size and motion are updated as follows.The size of a kernel K is measured in the number of macroblocks forming K.The kernel motion is the average motion of all the macroblocks of K.It can be described by the following equation.(14) where denotes the macroblocks that belong to kernel K and denotes the motion vector of a certain macroblock i.After kernel detection,a video frame is segmented into kernels and undefined regions.These undefined regions fail to be clustered into kernels,because they have inconsistent motion pattern,high motion estimation error,or no internal texture.2.5.5.Background Kernel DetectionIf the camera is not moving,the background kernel is the one containing still macroblocks.Otherwise,the background kernel is the largest kernel conforming to the camera motion.2.5.6.Kernel Merge With the Background RegionThe criterion for kernel merge is based on motion.If the motion difference between a detected kernel and the background is smaller than a threshold,then it is merged with the background region according to the following rule.1.If the camera is still and a kernel has no motion or almost no motion,then add this kernel to the background.2.If the camera is moving,then detect the background kernelfirst.If the difference of this macroblock’s motion and thebackground motion is smaller than a motion similarity threshold,then add this kernel to the background kernel.After kernel detection and merge,a frame is usually segmented into three parts,which are the background,the object kernels,and the undefined region.The background corresponds to the region that moves consistently with the camera motion. 
2.6. Multiple Cue Processing
The kernel detection and merge process using motion information may not be enough to recover the whole object region. The detected object regions may miss some object macroblocks or include some background macroblocks. Therefore, a post-processing step is needed to add the missing object macroblocks and remove the background ones. This region growing and refinement process is based on color, texture, and other cues. The algorithm considers the similarity between macroblocks based on color, texture, and other cues to reassign a macroblock to the background or to an object kernel. The algorithm aims to segment a frame into regions consisting of macroblocks that are similar in motion or color, or consistent in texture. The similarity between two macroblocks A and B is defined as

    S(A, B) = sum_{i = 1..256} delta( c_A(i), c_B(i) )        (15)

where c_A(i) and c_B(i) are the chromaticity indices of pixel i in blocks A and B, and delta returns 1 if c_A(i) equals c_B(i), and 0 otherwise.

2.7. Object Tracking
A motion locale list is used to link the multiple occurrences of an object and obtain its motion trajectory. Each time an object is segmented, it is considered a motion locale. When a motion locale is added to the motion locale list, the list is searched for a matching one. The matching is based on color distribution, size, and motion continuity. The search is intended to find the occurrence of the object in a previous frame and link its occurrences over multiple frames. The confidence value of a motion locale is increased by one if a good match is found. If no good match can be found for a motion locale, it is added as a new entry to the motion locale list with its confidence value set to zero. The motion locale list serves two purposes: object tracking and outlier removal. Motion locales with no link to other motion locales are most probably outliers and are removed from the list. Figure 5 shows how a motion locale is added to the motion locale list. The information of each segmented object, such as frame number, locale number, bounding rectangle, centroid, color histogram, size, and matching locale, is stored in a spatiotemporal motion locale list. The list shows the links between motion locales and the confidence value of each motion locale. The motion trajectory of a motion locale can be computed by tracing it through the list. The confidence value associated with a motion locale indicates the degree of its spatiotemporal consistency: the higher the confidence value, the more spatiotemporally consistent the locale is.

Figure 5. Motion locale and locale list.
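The locale-list bookkeeping of Section 2.7 can be sketched as follows. The paper states only that matching is based on color distribution, size, and motion continuity; the particular score formula, the matching threshold, and the field names used here are illustrative assumptions.

    import numpy as np

    class MotionLocale:
        """One occurrence of a segmented object (illustrative fields, not the paper's exact record)."""
        def __init__(self, frame, centroid, bbox, size, color_hist):
            self.frame = frame
            self.centroid = np.asarray(centroid, float)
            self.bbox = bbox
            self.size = size
            self.color_hist = np.asarray(color_hist, float)
            self.confidence = 0
            self.match = None          # link to the matching locale in a previous frame

    def locale_match_score(a, b):
        """Similarity based on color distribution, size, and motion (centroid) continuity."""
        color = np.minimum(a.color_hist, b.color_hist).sum() / max(a.color_hist.sum(), 1e-9)
        size = 1.0 - abs(a.size - b.size) / max(a.size, b.size, 1)
        dist = float(np.linalg.norm(a.centroid - b.centroid))
        motion = 1.0 / (1.0 + dist / 16.0)       # 16-pixel scale is an assumed normaliser
        return (color + size + motion) / 3.0

    def add_locale(locale_list, new_locale, match_thresh=0.7):   # 0.7 is an assumed threshold
        best, best_score = None, 0.0
        for old in locale_list:
            s = locale_match_score(old, new_locale)
            if s > best_score:
                best, best_score = old, s
        if best is not None and best_score >= match_thresh:
            new_locale.match = best
            new_locale.confidence = best.confidence + 1   # good match: increase confidence
        else:
            new_locale.confidence = 0                     # new object: confidence starts at zero
        locale_list.append(new_locale)
        return new_locale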
3. EXPERIMENTAL RESULTS
3.1. Source Video Clips
The Bike clip has different camera motions, such as still, pan/tilt, and zoom, which makes it suitable for testing the ability of the algorithm to detect the camera motion correctly. It also has a moving object that exists in multiple frames and is used to produce experimental results on object segmentation and tracking. The Tennis clip has many frames with camera zooming motion. It is chosen to show the experimental results of camera motion detection and object segmentation under different camera motions, especially under zooming.
3.2. Time and Speed
KMC uses MPEG motion vectors as the source of motion information and does not estimate optical flow. Therefore, it is very fast and runs in real time.
3.3. Threshold Selection
The KMC algorithm uses several thresholds for kernel detection and merge and for multiple cue refinement. The thresholds are chosen empirically and remain the same for all test videos. The threshold for the motion estimation error is set to 0.85, and the threshold for multiple cue refinement is set to 0.8.
3.4. Results
Table 1 shows the camera pan detection results for the Bike video. The first column is the frame number, and the second column shows the angle of the camera pan in degrees. For example, at frame 3 the camera is panning towards the southeast. The last column shows the magnitude of the pan in pixels. For example, the first row of the table shows that the camera pans towards the southeast with a magnitude of 5 pixels. The angle and distance of the camera pan are extracted from the largest motion group in the scene, which corresponds to the camera motion.

Table 1. Camera Pan Parameters of Bike Video
Frame   Angle   Magnitude
3       304     5
9       304     7
15      317     6
27      51      5
33      38      7
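A minimal sketch of how the pan angle and magnitude could be derived from the average motion vector of the largest motion group is shown below. The angle convention (degrees measured from the positive x axis of the image) is an assumption, since the paper does not state it.

    import math
    import numpy as np

    def camera_pan(motion_vectors):
        """Pan angle (degrees) and magnitude (pixels) from the dominant motion group's average vector."""
        avg = np.mean(np.asarray(motion_vectors, float), axis=0)
        magnitude = float(np.hypot(avg[0], avg[1]))
        angle = math.degrees(math.atan2(avg[1], avg[0])) % 360.0
        return angle, magnitude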
Figure 6. Camera zooming detection.

Figure 6 is an example of camera zooming from another video clip. From the motion vector map, it is clear that the camera is zooming in: the motion vectors point outward from the center of the frame. The right part of the frame does not conform to this motion pattern because of the object motion. Table 2 shows the camera zooming detection results for the Tennis video. This video has many frames with camera zooming, which is modeled by an affine motion model. The table shows the frame number and the computed affine motion parameters. The first column is the frame number at which camera zooming happens. Columns 2 to 7 correspond to the six affine motion parameters a1 to a6.

Table 2. Camera Zooming Pattern Modeled by an Affine Motion Model
Frame   a1      a2      a3        a4      a5      a6
27      1.056   -0.016  -7.226    0.000   1.052   -5.977
33      1.093   -0.003  -14.840   -0.010  1.117   -11.794
39      1.144   -0.000  -24.046   0.001   1.138   -16.567
45      1.140   -0.005  -22.842   -0.001  1.147   -17.272
51      1.142   0.003   -24.111   0.000   1.146   -17.333
57      1.144   -0.002  -23.905   0.000   1.151   -17.710
63      1.145   -0.013  -22.629   -0.006  1.146   -16.054
69      1.124   -0.003  -20.465   -0.003  1.130   -14.860
75      1.130   -0.009  -20.702   0.005   1.120   -15.009
81      1.125   -0.013  -19.707   -0.006  1.144   -15.582
87      0.914   0.683   -61.718   -0.012  1.130   -13.016

Figure 7 is an example of object segmentation on frame 9 of the Bike video. The result shows four rectangular areas. The upper right area shows the current frame image; the upper left one visualizes the motion vectors of this frame, with each small rectangle standing for a macroblock of 16 x 16 pixels. The bottom right area shows the final object segmentation result, and the bottom left area shows the result of kernel detection and merge. The shaded regions denote the detected kernels.

Figure 7. Object Segmentation Result on Frame 9 of Bike Video.

The black region corresponds to the background kernel, which may be disjoint. It is desirable to allow the background kernel to be disjoint, because the foreground objects may cover some of its parts. It can be seen that the detected kernels are the regions with reliable motion vectors. These kernels all have some edge points, and the large flat regions are not detected as kernels because they do not have enough edge points. These kernels also have good motion consistency among their macroblocks. By comparing the images in the middle row with those in the bottom row, it can be seen that three object macroblocks are missed by the kernel-based segmentation. The reason is that these three macroblocks are located at the boundary of the object and have different motion vectors. Therefore, motion-based segmentation cannot find them. However, the multiple cue procedure refines the segmentation result and finds the three macroblocks. Figure 8 shows the result of object segmentation on frame 9 of the Tennis video. The result shows six images of the same resolution. The first two images on the top show the original image and its motion vector map of 16 x 16 macroblocks. The pointer in the middle of a macroblock illustrates its motion vector: the direction of the pointer is the direction in which the macroblock is moving, and the length of the pointer is the distance that the macroblock actually moves, in pixels. The two images in the middle show the results after kernel detection and merge, and the bottom ones show the results after multiple cue refinement. The black region corresponds to the background; most of the background macroblocks do not move, which means the camera is still in this frame. The detected kernel corresponds to the moving hand in the scene, and the white ball is missed because it has wrong motion vectors. By comparing the middle images with the bottom ones, it is shown that the gray macroblocks with zero motion vectors are the new ones added to the segmented object kernel by using multiple cues. The motion of these macroblocks does not conform to a consistent object motion model. However, they are still recovered. This shows that the multiple cue approach is helpful in recovering an object that does not move consistently: when only the part of an object that moves consistently with a certain motion model can be detected using motion information, the other parts of the object may be detected using other cues, such as color and texture.
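The refinement that recovers such macroblocks is driven by the chromaticity similarity of Eq. (15). A minimal sketch is given below; the decision rule for reassigning an undefined macroblock, and the reading of the 0.8 refinement threshold of Section 3.3 as a fraction of the 256 pixels per macroblock, are assumptions made for illustration.

    import numpy as np

    def chroma_similarity(block_a, block_b):
        """Eq. (15): count of pixel positions whose chromaticity indices agree in two 16x16 macroblocks."""
        return int(np.sum(np.asarray(block_a) == np.asarray(block_b)))

    def reassign_macroblock(block, kernel_blocks, background_blocks, thresh=0.8):
        """Assign an undefined macroblock to the object kernel or the background by chromaticity similarity."""
        block = np.asarray(block)
        best_obj = max(chroma_similarity(block, k) for k in kernel_blocks)
        best_bg = max(chroma_similarity(block, b) for b in background_blocks)
        if max(best_obj, best_bg) < thresh * block.size:
            return "undefined"                      # not similar enough to either region
        return "object" if best_obj >= best_bg else "background"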
Figure 9 illustrates the object tracking result for the Bike video. It shows the result of object segmentation and tracking over multiple frames. The object is segmented, and a bounding rectangle is formed to cover the whole object. The list shows the spatiotemporal segmentation results, including the object segmentation results when the object is moving and when the object is still in the scene. When the object is moving, the motion and other cues are used to segment the object from the background, and when it is still, a projection and spatial segmentation algorithm is applied to segment the object.

Figure 8. Object Segmentation Result on Frame 9 of Tennis Video.
Figure 9. Object Tracking over Multiple Frames Result on Bike Video.

Table 3 shows the performance of the KMC algorithm. The first column is the frame number. The second column is the number of pixels in the manually segmented object, which serves as the reference for measuring the performance of the KMC segmentation results. The third column shows the number of object pixels in the segmented object, and the fourth column shows the percentage of correctly segmented pixels, which is the result of dividing column 3 by column 2. The fifth column is the number of background pixels in the segmented object, and the last column shows the percentage of background pixels. These two percentages show how good the segmentation is. Since the two percentages are obtained by dividing the object pixels and the background pixels segmented by KMC by the number of manually segmented object pixels, the two percentages may add up to more than 100%. From the table, it can be seen that the correct ratios are all above 80%, while the wrong ratios are all below 20%.

Table 3. Performance of spatial segmentation
Frame   NumOfObjectPels (manual)   NumOfObjectPels (KMC)   Correct rate   NumOfBackgroundPels (KMC)   Wrong rate
3       2691                       2298                    85.4%          404                         15.0%
6       2784                       2326                    83.5%          344                         12.4%
9       2810                       2512                    89.4%          496                         17.6%
12      2933                       2610                    89.0%          522                         17.8%
15      2889                       2561                    88.7%          472                         16.3%
18      3348                       2685                    80.2%          469                         14.0%
21      3378                       2610                    88.6%          608                         18.0%

4. CONCLUSION AND FUTURE WORK
This paper shows that the KMC algorithm can perform motion analysis directly on MPEG motion vectors and segment moving objects from the background scene under different camera motions, such as pan, tilt, and zoom. It combines multiple cues, such as motion, color, texture, and other cues, in a graceful way to achieve good segmentation results. The algorithm makes use of spatiotemporal information to keep track of segmented moving objects. The proposed KMC algorithm can be improved in several ways. The camera motion detection procedure can use a Singular Value Decomposition (SVD) algorithm to estimate the affine camera motion parameters. The KMC algorithm cannot detect moving objects when the kernel detection procedure fails to produce a good result. An Expectation Maximization (EM) statistical method can be used to make the kernel detection process more robust. It is also an interesting topic to study how to apply the segmentation results of KMC to object-based video coding, such as MPEG-4. Moreover, even though KMC makes use of some compressed-domain information such as MPEG motion vectors, it is still far from working directly in the compressed domain.
Considering the compression ratio of MPEG videos, it is desirable to segment moving objects directly in the compressed domain so that both the processing speed and the efficiency can be improved. Finally, it is reasonable to expect that KMC will produce good results if the motion vectors of the macroblocks correspond to their real motion. Therefore, optimization of the MPEG motion vectors is desirable, with the purpose of obtaining better motion vectors.

ACKNOWLEDGMENTS
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under grant OGP-36727, and by the grant for the Telelearning Research Project in the Canadian Network of Centres of Excellence.

REFERENCES
1. M. Flickner, et al., "Query by image and video content: the QBIC system," IEEE Computer 28(9), pp. 23-32, 1995.
2. S. F. Chang, et al., "VideoQ: an automated content based video search system using visual cues," in Proc. ACM Multimedia 97, pp. 313-324, 1997.
3. S. Smith, "Asset-2: Real-time motion segmentation and object tracking," in Proc. ICCV, pp. 237-250, 1995.
4. R. Polana and R. Nelson, "Detection and recognition of periodic, non-rigid motion," International Journal of Computer Vision 23(3), pp. 261-282, 1997.
5. Malassiotis and Strintzis, "Object-based coding of stereo image sequences," IEEE Trans. on Circuits and Systems for Video Technology 7(6), pp. 891-902, 1997.
6. ISO/IEC JTC1/SC29/WG11 N2202, Coding of moving pictures and audio, March 1998.
7. R. Mech and M. Wollborn, "A noise robust method for segmentation of moving objects in video sequences considering a moving camera," Signal Processing 66(2), pp. 203-217, 1998.
8. A. Neri, S. Colonnese, and G. Russo, "Automatic moving objects and background segmentation by means of higher order statistics," SPIE 3024, pp. 8-14, 1997.
9. L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on PAMI 13(6), pp. 583-598, 1991.
10. Z. Li, O. Zaiane, and Z. Tauber, "Illumination invariance and object model in content-based image and video retrieval," Journal of Visual Communication and Image Representation 10(3), pp. 219-244, 1999.
11. M. Lee, et al., "A layered video object coding system using sprite and affine motion model," IEEE Trans. on Circuits and Systems for Video Technology 7(1), pp. 130-144, 1997.
12. W. H. Press, et al., Numerical Recipes in C, Cambridge University Press, 1992.