QuickSpecs: NEC Vector Engine Accelerators

Overview
NEC Vector Engine Accelerators
Hewlett Packard Enterprise supports, on selected HPE ProLiant and Apollo servers, computational modules based on the NEC Vector Engine technology. The NEC Vector Engine Accelerator Module, with its unmatched memory bandwidth per core, offers a balanced architecture for applications limited by insufficient Bytes-per-FLOPS characteristics. Extremely large amounts of data can be processed per cycle thanks to the native vector architecture.

Moreover, users can easily exploit these capabilities via a standard development environment inherited from the vector supercomputer era. Applications do not have to be migrated to a new programming environment: existing Fortran and C/C++ codes simply have to be recompiled for the Vector Engine processor. A full software environment is available, with compilers, libraries and tools. The compilers are able to vectorize and auto-parallelize loops. Parallelization with OpenMP and MPI is supported.

The NEC Vector Engine Accelerator Module is offered in a PCIe form factor, to be hosted by an HPE supported server running a standard Linux® operating system as the user front end. It has been developed using 16 nm FinFET process technology for extremely high performance and low power consumption. An outstanding memory bandwidth of 1.2 TB/s is achieved through the exceptional integration of six HBM2 memory modules and a multi-core vector processor using Chip-on-Wafer-on-Substrate technology. The eight cores share a last-level cache, facilitating shared-memory parallelization.

NEC Vector Engine Models
HPE NEC Vector Engine Accelerator Module Q7G75A
Notes: Q7G75A is to be used with HPE Apollo 6500 Gen10. Please see the server QuickSpecs for configuration rules, including requirements for enablement kits.
HPE NEC Vector Engine Accelerator Module Q7G75C
Notes: Q7G75C is to be used with HPE ProLiant DL380 Gen10. Please see the server QuickSpecs for configuration rules, including requirements for enablement kits.

Description: HPE NEC Vector Engine Accelerator Module (Q7G75A or Q7G75C)
The HPE NEC Vector Engine Accelerator Module (VE) offers the best memory bandwidth per core to accelerate real AI and HPC applications. Its record Bytes-per-FLOPS ratio unleashes applications that are memory-bandwidth bound on current architectures. The high sustained application performance of vector supercomputers is now available in a PCIe card form factor, at a fraction of the power consumption.

Performance: 2.15 TFLOPS DP | 4.3 TFLOPS SP
Memory Size: 48 GB HBM2 stacked memory
Memory Bandwidth: 1.2 TB/s to HBM2 stacked memory
Bytes/FLOPS: 0.56
Cores: 8 vector cores, each with 3 FMA units, 1 scalar unit, and 64 registers of 16,384 bits (256 elements); 128 kB per core
Peer to Peer: via PCIe, x16 PCIe Gen3
Power: <300 W
Cooling: Passive cooling
Form Factor: Double-width, full height, full length
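Note: the Bytes/FLOPS figure above follows directly from the peak specifications quoted in the same table:

    Bytes/FLOPS = (1.2 TB/s) / (2.15 TFLOP/s) ≈ 0.56 bytes per double-precision FLOP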
Supported Servers and Operating Systems
Supported Server | Maximum number of VE cards per server | Server supported operating systems
HPE ProLiant DL380 Gen10 | Up to 3 | RHEL and CentOS 7.4, 7.5
HPE Apollo 6500 Gen10 | Up to 8 | RHEL and CentOS 7.4, 7.5

Software (order separately): NEC Fortran (2003, 2008), C (11), C++ (14) compilers. OpenMP 4.5. NEC MPI 3.1. BLAS, FFT, libc, LAPACK and other libraries. Stencil library. GNU profiler (gprof). GNU debugger (gdb) and Eclipse Parallel Tools Platform (PTP). FtraceViewer and PROGINF tools.

Notes:
- HPE ProLiant DL380 Gen10 servers must be equipped with several options to receive the HPE NEC Vector Engine, for example the High Performance Heatsink Kit, High Performance Temperature Fan Kit, and Graphics Cable Kit. Only a selection of HPE ProLiant DL380 Gen10 server models are supported with the HPE NEC Vector Engine Accelerator Module. Please see the HPE ProLiant DL380 Gen10 server QuickSpecs for configuration rules.
- NEC Software Licenses are available from HPE on a per-project basis.

Performance of the Vector Engine 1.0 Type 10B-P
• The Vector Engine 1.0 Type 10B-P PCIe module is built for HPC and AI.
• 8 vector cores.
• 16 MB last-level cache shared by all the cores at 3 TB/s (400 GB/s per core).
• Each core has 64 registers of 16,384 bits (256 elements), for a total of 128 kB per core.
• Three Fused Multiply-Add (FMA) units, one scalar unit, and a few other functional units are available per core.
• 2.15 TFLOPS of double-precision performance.
• 4.30 TFLOPS of single-precision performance.
• 48 GB HBM2 at 1.2 TB/s.
• Power consumption: less than 300 W.
• x16 PCIe Gen 3.0 maximizes bandwidth between the HPE ProLiant server and the vector processors. Because the whole application runs on the Vector Engine, it is less subject to PCIe bottlenecks than codes that offload functions to accelerators and transfer data constantly.
• Vector processors can communicate directly when placed under the same root complex. Up to 8 VEs in an Apollo 6500 Gen10.

Service and Support
Notes: This option is covered under HPE Support Services / Service Contracts applied to the HPE ProLiant server. No separate HPE Support Services need to be purchased. Most HPE branded options sourced from HPE that are compatible with your product will be covered under your main product support at the same level of coverage, allowing you to upgrade freely. Please check the HPE ProLiant server documentation for more details on the services for this particular option.

HPE Pointnext - Service and Support
Get the most from your HPE products. Get the expertise you need at every step of your IT journey with HPE Pointnext Services. We help you lower your risks and overall costs using automation and methodologies that have been tested and refined by HPE experts through thousands of deployments globally. HPE Pointnext Advisory Services focus on your business outcomes and goals, partnering with you to design your transformation and build a roadmap tuned to your unique challenges. Our Professional and Operational Services can be leveraged to speed up time-to-production, boost performance, and accelerate your business. HPE Pointnext specializes in flawless and on-time implementation, on-budget execution, and creative configurations that get the most out of software and hardware alike.

Consume IT on your terms
HPE GreenLake brings the cloud experience directly to your apps and data wherever they are - the edge, colocations, or your data center. It delivers cloud services for on-premises IT infrastructure specifically tailored to your most demanding workloads. With a pay-per-use, scalable, point-and-click self-service experience that is managed for you, HPE GreenLake accelerates digital transformation in a distributed, edge-to-cloud world.
• Get faster time to market
• Save on TCO, align costs to business
• Scale quickly, meet unpredictable demand
• Simplify IT operations across your data centers and clouds

Managed services to run your IT operations
HPE GreenLake Management Services provides services that monitor, operate, and optimize your infrastructure and applications, delivered consistently and globally to give you unified control and let you focus on innovation.
Recommended Services
HPE Pointnext Tech Care
HPE Pointnext Tech Care is the new operational service experience for HPE products. Tech Care goes beyond traditional support by providing access to product-specific experts, an AI-driven digital experience, and general technical guidance to not only reduce risk but constantly search for ways to do things better. HPE Pointnext Tech Care has been reimagined from the ground up to support a customer-centric, AI-driven, and digitally enabled customer experience to move your business forward. HPE Pointnext Tech Care is available in three response levels: Basic, which provides 9x5 business-hour availability and a 2-hour response time; Essential, which provides a 15-minute response time 24x7 for most enterprise-level customers; and Critical, which includes a 6-hour repair commitment where available and outage management response for severity 1 incidents.
https:///services/techcare

HPE Pointnext Complete Care
HPE Pointnext Complete Care is a modular, edge-to-cloud IT environment service that provides a holistic approach to optimizing your entire IT environment and achieving agreed-upon IT outcomes and business goals through a personalized and customer-centric experience. All delivered by an assigned team of HPE Pointnext Services experts. HPE Pointnext Complete Care provides:
• A complete coverage approach, edge to cloud
• An assigned HPE team
• Modular and fully personalized engagement
• Enhanced incident management experience with priority access
• Digitally enabled and AI-driven customer experience
https:///services/completecare

Technical Specifications
Warranty and Support Services
Warranty and Support Services will extend to include HPE options configured with your server or storage device. The price of the support service is not impacted by configuration details. HPE sourced options that are compatible with your product will be covered under your server support at the same level of coverage, allowing you to upgrade freely. Installation for HPE options is available as needed. To keep support costs low for everyone, some high-value options will require additional support. Additional support is only required on select high-value workload accelerators, Fibre Channel switches, InfiniBand, and UPS batteries over 12 KVA. See the specific high-value options that require additional support here.

Protect your business beyond warranty with HPE Support Services
HPE Pointnext provides a comprehensive portfolio including Advisory and Transformational, Professional, and Operational Services to help accelerate your digital transformation. From the onset of your transformation journey, Advisory and Transformational Services focus on designing the transformation and creating a solution roadmap. Professional Services specializes in creative configurations with flawless and on-time implementation and on-budget execution. Finally, Operational Services provides innovative new approaches like Flexible Capacity and Complete Care to keep your business at peak performance.
HPE is ready to bring together all the pieces of the puzzle for you, with an eye on the future, and make the complex simple.

Parts and Materials
Hewlett Packard Enterprise will provide HPE-supported replacement parts and materials necessary to maintain the covered hardware product in operating condition, including parts and materials for available and recommended engineering improvements. Parts and components that have reached their maximum supported lifetime and/or the maximum usage limitations as set forth in the manufacturer's operating manual, product QuickSpecs, or the technical product data sheet will not be provided, repaired, or replaced as part of these services. The defective media retention service feature option applies only to disk or eligible SSD/flash drives replaced by Hewlett Packard Enterprise due to malfunction.

HPE Support Center
The HPE Support Center is a personalized online support portal with access to information, tools, and experts to support HPE business products. Submit support cases online, chat with HPE experts, access support resources, or collaborate with peers. Learn more: https:///hpesc/public/home
HPE's Support Center Mobile App* allows you to resolve issues yourself or quickly connect to an agent for live support. Now you can get access to personalized IT support anywhere, anytime. HPE Insight Remote Support and HPE Support Center are available at no additional cost with an HPE warranty, HPE Support Service, or HPE contractual support agreement.
Notes: *The HPE Support Center Mobile App is subject to local availability.

For more information
Visit the Hewlett Packard Enterprise Service and Support website.

Summary of Changes
Date | Version History | Action | Description of Change
15-Nov-2021 | Version 3 | Changed | Service and Support section was updated.
02-Dec-2019 | Version 2 | Changed | Overview and Standard Features sections were updated. Q7G75C added, to be used with HPE ProLiant DL380 Gen10.
02-Apr-2019 | Version 1 | New | New QuickSpecs.

Copyright
© Copyright 2021 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
a00059759enw - 16363 - WorldWide - V3 - 15-November-2021
GSPBOX: A toolbox for signal processing on graphs

Nathanael Perraudin, Johan Paratte, David Shuman, Lionel Martin, Vassilis Kalofolias, Pierre Vandergheynst and David K. Hammond

March 16, 2016

Abstract
This document introduces the Graph Signal Processing Toolbox (GSPBox), a framework that can be used to tackle graph-related problems with a signal processing approach. It explains the structure and the organization of this software. It also contains a general description of the important modules.

1 Toolbox organization
In this document, we briefly describe the different modules available in the toolbox. For each of them, the main functions are briefly described. This chapter should help make the connection between the theoretical concepts introduced in [7, 9, 6] and the technical documentation provided with the toolbox. We highly recommend reading this document and the tutorial before using the toolbox. The documentation, the tutorials and other resources are available online (https://lts2.epfl.ch/gsp/doc/ for MATLAB and https://lts2.epfl.ch/pygsp for Python; the full documentation is also available in a single document: https://lts2.epfl.ch/gsp/gspbox.pdf).

The toolbox was first implemented in MATLAB, but a port to Python, called the PyGSP, has been made recently. As of the time of writing of this document, not all the functionalities have been ported to Python, but the main modules are already available. In the following, functions prefixed by [M]: refer to the MATLAB implementation and the ones prefixed with [P]: refer to the Python implementation.

1.1 General structure of the toolbox (MATLAB)
The general design of the GSPBox focuses around the graph object [7], a MATLAB structure containing the necessary information to use most of the algorithms. By default, only a few attributes are available (see section 2), allowing only the use of a subset of functions. In order to enable the use of more algorithms, additional fields can be added to the graph structure. For example, the following line will compute the graph Fourier basis, enabling exact filtering operations.

    G = gsp_compute_fourier_basis(G);

Ideally, this operation should be done on the fly when exact filtering is required. Unfortunately, the lack of a well-defined class paradigm in MATLAB makes this too complicated to implement. Luckily, the above formulation prevents any unnecessary copy of the data contained in the structure G. In order to avoid name conflicts, all functions in the GSPBox start with [M]: gsp_. A second important convention is that all functions applying a graph algorithm to a graph signal take the graph as first argument. For example, the graph Fourier transform of the vector f is computed by

    fhat = gsp_gft(G, f);

The graph operators are described in section 4. Filtering a signal on a graph is also a linear operation. However, since the design of special filters (kernels) is important, they are regrouped in a dedicated module (see section 5). The toolbox contains two additional important modules. The optimization module contains proximal operators, projections and solvers compatible with the UNLocBoX [5] (see section 6). These functions facilitate the definition of convex optimization problems using graphs. Finally, section ?? is composed of well-known graph machine learning algorithms.
1.2 General structure of the toolbox (Python)
The structure of the Python toolbox follows the MATLAB one closely. The major difference comes from the fact that the Python implementation is object-oriented and thus allows for a natural use of instances of the graph object. For example, the equivalent of the MATLAB call:

    G = gsp_estimate_lmax(G);

can be achieved using a simple method call on the graph object:

    G.estimate_lmax()

Moreover, the use of a class for the "graph object" allows additional graph attributes to be computed on the fly, making the code clearer than its MATLAB equivalent. Note though that functionalities are grouped into different modules (one per section below) and that several functions that work on graphs have to be called directly from the modules. For example, one should write:

    layers = pygsp.operators.kron_pyramid(G, levels)

This is the case as soon as the graph is the structure on which the action has to be performed and not our principal focus. In a similar way to the MATLAB implementation using the UNLocBoX for the convex optimization routines, the Python implementation uses the PyUNLocBoX, which is the Python port of the UNLocBoX.

2 Graphs
The GSPBox is constructed around one main object: the graph. It is implemented as a structure in MATLAB and as a class in Python. It stores the nodes, the edges and other attributes related to the graph. In the implementation, a graph is fully defined by the weight matrix W, which is the main and only required attribute. Since most graph structures are far from fully connected, W is implemented as a sparse matrix. From the weight matrix a Laplacian matrix is computed and stored as an attribute of the graph object. Various other attributes are available, such as plotting attributes, vertex coordinates, the degree matrix, and the numbers of vertices and edges. The list of all attributes is given in Table 1.

Table 1: Attributes of the graph object
Attribute | Format | Data type | Description
Mandatory fields:
W | N x N sparse matrix | double | Weight matrix W
L | N x N sparse matrix | double | Laplacian matrix
d | N x 1 vector | double | The diagonal of the degree matrix
N | scalar | integer | Number of vertices
Ne | scalar | integer | Number of edges
plotting | [M]: structure [P]: dict | none | Plotting parameters
type | text | string | Name, type or short description
directed | scalar | [M]: logical [P]: boolean | States whether the graph is directed or not
lap_type | text | string | Laplacian type
Optional fields:
A | N x N sparse matrix | [M]: logical [P]: boolean | Adjacency matrix
coords | N x 2 or N x 3 matrix | double | Vectors of coordinates in 2D or 3D
lmax | scalar | double | Exact or estimated maximum eigenvalue
U | N x N matrix | double | Matrix of eigenvectors
e | N x 1 vector | double | Vector of eigenvalues
mu | scalar | double | Graph coherence

The easiest way to create a graph is the [M]: gsp_graph [P]: pygsp.graphs.Graph function, which takes the weight matrix as input. This function initializes a graph structure by creating the graph Laplacian and other useful attributes. Note that by default the toolbox uses the combinatorial definition of the Laplacian operator. Other Laplacians can be computed using the [M]: gsp_create_laplacian [P]: pygsp.gutils.create_laplacian function. Please note that almost all functions depend on the Laplacian definition. As a result, it is important to select the correct definition first. Many particular graphs are also available through helper functions such as: ring, path, comet, swiss roll, airfoil or two moons.
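To make the basic workflow concrete, here is a minimal Python sketch using the function and module names documented in this section. It assumes a PyGSP version contemporary with this document; later releases moved several of these functions onto graph methods, so exact module paths may differ.

    import numpy as np
    import pygsp

    # Create one of the helper graphs mentioned above: a random sensor network.
    G = pygsp.graphs.Sensor(N=100)

    # Estimate the largest Laplacian eigenvalue (the lmax attribute),
    # which the approximate filtering routines rely on.
    G.estimate_lmax()

    # Compute the full Fourier basis: adds the U, e and lmax attributes.
    pygsp.operators.compute_fourier_basis(G)

    # Graph Fourier transform of a random signal living on the vertices.
    f = np.random.randn(G.N)
    fhat = pygsp.operators.gft(G, f)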
In addition, functions are provided for common non-deterministic graphs such as: Erdős-Rényi, community, stochastic block model, or sensor network graphs.

Nearest Neighbors (NN) graphs form a class which is used in many applications and can be constructed from a set of points (or point cloud) using the [M]: gsp_nn_graph [P]: pygsp.graphs.NNGraph function. The function is highly tunable and can handle very large sets of points using FLANN [3]. Two particular cases of NN graphs have their own dedicated helper functions: 3D point clouds and image patch-graphs. An example of the former can be seen in the function [M]: gsp_bunny [P]: pygsp.graphs.Bunny. As for the second, a graph can be created from an image by connecting similar patches of pixels together. The function [M]: gsp_patch_graph creates this graph. Parameters allow the resulting graph to vary between local and non-local and to use different distance functions [12, 4]. A few examples of the graphs are displayed in Figure 1.

Figure 1: Examples of classical graphs: two moons (top left), community (top right), airfoil (bottom left) and sensor network (bottom right).

3 Plotting
As in many other domains, visualization is very important in graph signal processing. The most basic operation is to visualize graphs. This can be achieved using a call to the function [M]: gsp_plot_graph [P]: pygsp.plotting.plot_graph. In order to be displayable, a graph needs to have 2D (or 3D) coordinates (which is a field of the graph object). Some graphs do not possess default coordinates (e.g. Erdős-Rényi).

The toolbox also contains routines to plot signals living on graphs. The function dedicated to this task is [M]: gsp_plot_signal [P]: pygsp.plotting.plot_signal. For now, only 1D signals are supported. By default, the value of the signal is displayed using color coding, but bars can be displayed by passing parameters.

The third visualization helper is a function to plot filters (in the spectral domain), which is called [M]: gsp_plot_filter [P]: pygsp.plotting.plot_filter. It also supports filter banks and allows automatic inspection of the related frames. The results obtained using these three plotting functions are visible in Fig. 2.

Figure 2: Visualization of graph and signals using plotting functions.

4 Operators
The module operators contains basic spectral graph functions such as Fourier transform, localization, gradient, divergence or pyramid decomposition. Since all operators are based on the Laplacian definition, the necessary underlying objects (attributes) are all stored in a single object: the graph.

As a first example, the graph Fourier transform [M]: gsp_gft [P]: pygsp.operators.gft requires the Fourier basis. This attribute can be computed with the function [M]: gsp_compute_fourier_basis [P]: pygsp.operators.compute_fourier_basis [9], which adds the fields U, e and lmax to the graph structure. As a second example, since the gradient and divergence operate on the edges of the graph, a search on the edge matrix is needed to enable the use of these operators. It can be done with the routines [M]: gsp_adj2vec [P]: pygsp.operators.adj2vec. These operations take time and should be performed only once. In MATLAB, these functions are called explicitly by the user beforehand. However, in Python they are automatically called when needed and the result is stored as an attribute.

The module operators also includes a Multi-scale Pyramid Transform for graph signals [6]. Again, it works in two steps. First the pyramid is precomputed with [M]: gsp_graph_multiresolution [P]: pygsp.operators.graph_multiresolution. Second, the decomposition of a signal is performed with [M]: gsp_pyramid_analysis [P]: pygsp.operators.pyramid_analysis. The reconstruction uses [M]: gsp_pyramid_synthesis [P]: pygsp.operators.pyramid_synthesis.

The Laplacian is a special operator stored as a sparse matrix in the field L of the graph. Table 2 summarizes the available definitions. We are planning to implement additional ones.

Table 2: Different definitions of the graph Laplacian operator and their associated edge derivative. (For directed graphs, $d_+, D_+$ and $d_-, D_-$ denote the out-degree and in-degree of a node. $\pi, \Pi$ is the stationary distribution of the graph and $P$ is a normalized weight matrix $W$. For the sake of clarity, exact definitions of those quantities are not given here, but can be found in [14].)

Name | Edge derivative $f_e(i,j)$ | Laplacian matrix (operator) | Available
Undirected graph:
Combinatorial Laplacian | $\sqrt{W(i,j)}\,(f(j)-f(i))$ | $D-W$ | yes
Normalized Laplacian | $\sqrt{W(i,j)}\left(\frac{f(j)}{\sqrt{d(j)}}-\frac{f(i)}{\sqrt{d(i)}}\right)$ | $D^{-\frac{1}{2}}(D-W)D^{-\frac{1}{2}}$ | yes
Directed graph:
Combinatorial Laplacian | $\sqrt{W(i,j)}\,(f(j)-f(i))$ | $\frac{1}{2}\left(D_+ + D_- - W - W^*\right)$ | yes
Degree normalized Laplacian | $\sqrt{W(i,j)}\left(\frac{f(j)}{\sqrt{d_-(j)}}-\frac{f(i)}{\sqrt{d_+(i)}}\right)$ | $I-\frac{1}{2}D_+^{-\frac{1}{2}}\left[W+W^*\right]D_-^{-\frac{1}{2}}$ | yes
Distribution normalized Laplacian | $\sqrt{\pi(i)}\left(\sqrt{\frac{p(i,j)}{\pi(j)}}f(j)-\sqrt{\frac{p(i,j)}{\pi(i)}}f(i)\right)$ | $\frac{1}{2}\left(\Pi^{\frac{1}{2}}P\Pi^{-\frac{1}{2}}+\Pi^{-\frac{1}{2}}P^*\Pi^{\frac{1}{2}}\right)$ | yes

5 Filters
Filters are a special kind of linear operators that are so prominent in the toolbox that they deserve their own module [9, 7, 2, 8]. A filter is simply an anonymous function (in MATLAB) or a lambda function (in Python) acting element-by-element on the input. In MATLAB, a filter bank is created simply by gathering these functions together in a cell array. For example, you would write:

    % g(x) = x^2 + sin(x)
    g = @(x) x.^2 + sin(x);
    % h(x) = exp(-x)
    h = @(x) exp(-x);
    % Filter bank composed of g and h
    fb = {g, h};

The toolbox contains many predefined filter designs. They all start with [M]: gsp_design_ in MATLAB and live in the module [P]: pygsp.filters in Python. Once a filter (or a filter bank) is created, it can be applied to a signal with [M]: gsp_filter_analysis in MATLAB and a call to the method [P]: analysis of the filter object in Python. Note that the toolbox uses accelerated algorithms to scale almost linearly with the number of samples [11].

The available filter designs of the GSPBox can be classified as:
- Wavelets (filters are scaled versions of a mother window)
- Gabor (filters are shifted versions of a mother window)
- Low pass filters (filters to de-noise a signal)
- High pass / low pass separation filter banks (tight frames of 2 filters to separate the high frequencies from the low ones; no energy is lost in the process)

Additionally, to adapt the filters to the graph eigenvalue distribution, the warping function [M]: gsp_design_warped_translates [P]: pygsp.filters.WarpedTranslates can be used [10].
6 UNLocBoX binding
This module contains special wrappers for the UNLocBoX [5]. It allows convex problems containing graph terms to be solved very easily [13, 15, 14, 1]. For example, the proximal operator of the graph TV norm is given by [M]: gsp_prox_tv. The optimization module also contains some predefined problems such as graph basis pursuit in [M]: gsp_solve_l1 or wavelet de-noising in [M]: gsp_wavelet_dn. There is still active work on this module, so it is expected to grow rapidly in future releases of the toolbox.

7 Toolbox conventions
7.1 General conventions
- As much as possible, small letters are used for vectors (or vectors stacked into a matrix) and capitals are reserved for matrices. A notable exception is the creation of nearest neighbors graphs.
- A variable should never have the same name as an already existing function in MATLAB or Python respectively. This makes the code easier to read and less prone to errors. This is a best coding practice in general, but since both languages allow the override of built-in functions, special care is needed.
- All function names should be lowercase. This avoids a lot of confusion, because some computer architectures respect upper/lower casing and others do not.
- As much as possible, functions are named after the action they perform, rather than the algorithm they use or the person who invented it.
- No global variables. Global variables make code harder to debug and harder to parallelize.

7.2 MATLAB
- All functions start with gsp_.
- The graph structure is always the first argument in the function call. Filters are always second. Finally, optional parameters are last.
- In the toolbox, we do not use argument helper functions. As a result, optional arguments are generally stacked into a structure named param.
- If a transform works on a matrix, it will by default work along the columns. This is a standard in MATLAB (fft does this, among many other functions).
- Function names are traditionally written in uppercase in MATLAB documentation.

7.3 Python
- All functions should be part of a module; there should be no call directly from pygsp ([P]: pygsp.my_function). Inside a given module, functionalities can be further split into different files regrouping those that are used in the same context.
- MATLAB's matrix operations are sometimes ported in a different way that preserves the efficiency of the code. When matrix operations are necessary, they are all performed through the numpy and scipy libraries.
- Since Python does not come with a plotting library, we support both matplotlib and pyqtgraph. One should install the required libraries on one's own. If both are correctly installed, then pyqtgraph is favoured unless specifically specified.

Acknowledgements
We would like to thank all coding authors of the GSPBOX. The toolbox was ported to Python by Basile Chatillon, Alexandre Lafaye and Nicolas Rod. The toolbox was also improved by Nauman Shahid and Yann Schönenberger.

References
[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. The Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] D. K. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
[3] M. Muja and D. G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36, 2014.
[4] S. K. Narang, Y. H. Chao, and A. Ortega. Graph-wavelet filterbanks for edge-aware image processing. In Statistical Signal Processing Workshop (SSP), 2012 IEEE, pages 141–144. IEEE, 2012.
[5] N. Perraudin, D. Shuman, G. Puy, and P. Vandergheynst. UNLocBoX: A MATLAB convex optimization toolbox using proximal splitting methods. ArXiv e-prints, Feb. 2014.
[6] D. I. Shuman, M. J. Faraji, and P. Vandergheynst. A multiscale pyramid transform for graph signals. arXiv preprint arXiv:1308.4942, 2013.
[7] D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. Signal Processing Magazine, IEEE, 30(3):83–98, 2013.
[8] D. I. Shuman, B. Ricaud, and P. Vandergheynst. A windowed graph Fourier transform. Statistical Signal Processing Workshop (SSP), 2012 IEEE, pages 133–136, 2012.
[9] D. I. Shuman, B. Ricaud, and P. Vandergheynst. Vertex-frequency analysis on graphs. arXiv preprint arXiv:1307.5708, 2013.
[10] D. I. Shuman, C. Wiesmeyr, N. Holighaus, and P. Vandergheynst. Spectrum-adapted tight graph wavelet and vertex-frequency frames. arXiv preprint arXiv:1311.0897, 2013.
[11] A. Susnjara, N. Perraudin, D. Kressner, and P. Vandergheynst. Accelerated filtering on graphs using the Lanczos method. arXiv preprint arXiv:1509.04537, 2015.
[12] F. Zhang and E. R. Hancock. Graph spectral image smoothing using the heat kernel. Pattern Recognition, 41(11):3328–3342, 2008.
[13] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. Advances in Neural Information Processing Systems, 16(16):321–328, 2004.
[14] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In the 22nd International Conference, pages 1036–1043, New York, New York, USA, 2005. ACM Press.
[15] D. Zhou and B. Schölkopf. A regularization framework for learning from graph data. 2004.
A Guide to Using r8s
Compiled by Zhang Jinlong, Institute of Botany, Chinese Academy of Sciences (zhangjl@)

Preface
r8s is a program written by Mike Sanderson, an evolutionary biologist at the University of California, Davis, for estimating divergence times on phylogenetic trees. It is widely used in evolutionary biology, molecular biogeography and related disciplines, and has become one of the indispensable tools for divergence time estimation.
Some of the methods in the software, such as NPRS and PL, were first proposed by its author and are still hard to find in other software of this kind.
r8s runs on MacOS and Linux. It is not yet widely used in China, and Chinese-language tutorials and documentation are hard to come by.
Based on the current version, r8s 1.7.1, and with reference to its manual, this guide describes how to install and operate the software under Linux and briefly explains the functions and options of its modules.
The translator, Xiangshan, Beijing, January 23, 2010

Contents
1 Downloading and installing r8s
  Download
  Installation
    1 On MacOS
    2 On Linux (using Ubuntu 9.0 as an example)
      (1) Downloading the source code
      (2) Decompressing
      (3) Compiling the source code
      Note: installing the g77 compiler
    3 Windows users
2 Running the program
  1 In Linux (Ubuntu Linux or PHYLIS)
  2 In Windows XP
  Run modes
    1 Interactive mode
    2 Batch mode (see the example command file at the end of this excerpt)
3 Command reference
  blformat: basic information about the tree
  mrca: naming a node
  fixage: fixing the divergence time of a node
  constrain: constraining the divergence time of a node
  divtime: estimating divergence times
  showage: displaying divergence times and rates
  describe: displaying the tree and its description
  set
  calibrate: time calibration
  profile: extracting information on a given node from multiple trees
  rrlike: testing rates of evolution
4 Suggestions for data processing
  Notes on models of evolution
  Local rate models (localmodel)
  Suggestions for obtaining times
  Bootstrap methods for divergence time estimation
  Troubleshooting
5 A worked example
Appendix: command reference
  blformat, calibrate, cleartrees, collapse, constrain, describe, divtime, execute, fixage, localmodel, mrca, profile, prune, quit, reroot, rrlike, set, showage, unfixage, mrp, bd

1 Downloading and installing r8s

Download
r8s can be downloaded from: /r8s//r8s/r8s1.71.dist.tar.Z

Installation
1 On MacOS
To run r8s on MacOS, simply execute the precompiled binary from a UNIX shell.
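To illustrate the batch mode listed in the contents above, here is a minimal, hypothetical r8s command file assembled from the commands documented in this guide (blformat, mrca, fixage, set, divtime, showage). The tree, taxon names, site count and ages are placeholders; consult the command reference for the exact options accepted by your version.

    #NEXUS
    begin trees;
        tree tree1 = ((taxonA:0.12,taxonB:0.10):0.05,taxonC:0.20);
    end;

    begin r8s;
        [ Branch lengths are per site; the alignment had 1000 sites ]
        blformat lengths=persite nsites=1000 ultrametric=no;
        [ Name the root node and fix its age to calibrate the tree ]
        mrca ROOT taxonA taxonC;
        fixage taxon=ROOT age=100;
        [ Estimate divergence times with penalized likelihood (PL) ]
        set smoothing=100;
        divtime method=PL algorithm=TN;
        [ Print the estimated ages and rates ]
        showage;
    end;

A file like this would typically be run non-interactively with something like r8s -b -f commands.nex; the -b (batch) and -f (file) flags are described in the r8s manual, but check your installed version's help output to confirm.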
ACM Word Template for SIG Site

ABSTRACT
As network speed continues to grow, new challenges in network processing are emerging. In this paper we first study the progress of network processing from a hardware perspective and show that the I/O and memory systems have become the main bottlenecks to performance improvement. Based on this analysis, we conclude that conventional solutions for reducing I/O and memory access latencies are insufficient to address the problems. Motivated by these studies, we propose an improved DCA scheme combined with an INIC, featuring an optimized architecture, an innovative I/O data transfer scheme and improved cache policies. Experimental results show that our solution saves 52.3% and 14.3% of cycles on average for receiving and transmitting respectively. I/O and memory traffic are also significantly decreased. Moreover, we investigate the behavior of the I/O and cache systems during network processing and present some conclusions about the DCA method.

1. INTRODUCTION
Recently, many researchers have found that the I/O system is becoming the bottleneck to network performance improvement in modern computer systems [1][2][3]. Designed to support compute-intensive applications, the conventional I/O system has obvious disadvantages for fast network processing, in which bulk data transfer is performed. The lack of locality support and high latency are its two main problems, which have been widely discussed before [2][4].

To overcome these limitations, an effective solution called Direct Cache Access (DCA) was suggested by Intel [1]. It delivers network packets from the Network Interface Card (NIC) into the cache instead of memory, to reduce the data access latency. Although the solution is promising, DCA has been shown to be insufficient to reduce access latency and memory traffic, due to many limitations [3][5]. Another effective solution is the Integrated Network Interface Card (INIC), which is used in many academic and industrial processor designs [6][7]. The INIC was introduced to reduce the heavy burden of I/O register accesses in network drivers and interrupt handling. But a recent report [8] shows that the benefit of an INIC is insignificant for state-of-the-art 10GbE network systems.

In this paper, we focus on high-efficiency I/O system design for network processing in a general-purpose processor (GPP).
Based on an analysis of existing methods, we propose an improved DCA combined with INIC solution to reduce the I/O-related data transfer latency. The key contributions of this paper are as follows:
▪ Review the progress of network processing from a hardware perspective and point out that the I/O and related last-level memory systems have become the obstacle to performance improvement.
▪ Propose an improved DCA combined with INIC solution for I/O subsystem design to address the inefficiency of the conventional I/O system.
▪ Give a framework for the improved I/O system architecture and evaluate the proposed solution with micro-benchmarks.
▪ Investigate I/O and cache behavior during network processing on the proposed I/O system.
The paper is organized as follows. In Section 2, we present the background and motivation. In Section 3, we describe the improved DCA combined with INIC solution and give a framework for the proposed I/O system implementation. In Section 4, we first describe the experimental environment and methods, and then analyze the experimental results. In Section 5, we show related work. Finally, in Section 6, we discuss our solution in relation to existing technologies and draw some conclusions.

2. BACKGROUND AND MOTIVATION
In this section, we first review the progress of network processing and today's main obstacles to network performance improvement. Then a deeper analysis of the network system is given from the perspective of computer architecture, and the motivation of this paper is presented.

2.1 Network Processing Review
Figure 1 illustrates the flow of network processing. Packets from the physical line are sampled by the Network Interface Card (NIC). The NIC performs address filtering and stream control operations, then sends the frames to the socket buffer and notifies the OS to invoke network stack processing through interrupts. When the OS receives the interrupts, the network stack accesses the data in the socket buffer and calculates the checksum. Protocol-specific operations are performed layer by layer in stack processing. Finally, data is transferred from the socket buffer to the user buffer, depending on the application. Commonly this operation is done by memcpy, a system function in the OS.

Figure 1. Network Processing Flow

The time cost of network processing can be broken down into the following parts: interrupt handling, NIC driver, stack processing, kernel routines, data copy, checksum calculation and other overheads. The first four parts are considered packet cost, meaning the cost scales with the number of network packets. The rest is considered bit cost (also called data touch cost), meaning the cost is proportional to the total I/O data size. The proportions of these costs depend heavily on the hardware platform and the nature of the application. There are many measurements and analyses of network processing costs [9][10]. Generally, the kernel routine cost ranges from 10% - 30% of the total cycles; the driver and interrupt handling costs range from 15% - 35%; the stack processing cost ranges from 7% - 15%; and the data touch cost takes up 20% - 35%. With the development of high speed networks (e.g. 10/40 Gbps Ethernet), an increasing tendency for kernel routine, driver and interrupt handling costs is observed [3].

2.2 Motivation
To reveal the relationships among the parts of network processing, we investigate the corresponding hardware operations.
From the perspective of computer hardware architecture, network system performance is determined by three domains: CPU speed, memory speed and I/O speed. Figure 2 depicts the relationship.

Figure 2. Network xxxx

Obviously, the network subsystem can achieve its maximal performance only when the three domains above are in balance. That is, the throughput or bandwidth of each hardware domain should be equal to the others. In practice this is hard for hardware designers, because the characteristics and physical implementation technologies differ across CPU, memory and I/O system (chipset) fabrication. The speed gap between memory and CPU - a.k.a. "the memory wall" - has received special attention for more than ten years, but it is still not well addressed. The disparity between the data throughput of the I/O system and the computing capacity provided by the CPU has also been reported in recent years [1][2].

Meanwhile, it is obvious that the major time costs of network processing mentioned above are associated with I/O and memory speeds, e.g. driver processing, interrupt handling and memory copy costs. The most important characteristic of network processing is the "producer-consumer locality" between every two consecutive steps of the processing flow. That is, the data produced in one hardware unit will be accessed immediately by another unit; e.g. the data in memory transported from the NIC will soon be accessed by the CPU. However, for conventional I/O and memory systems, the data transfer latency is high and this locality is not exploited.

Based on the analysis discussed above, we observe that the I/O and memory systems are the limiting factors for network processing. Conventional DCA or INIC cannot successfully address this problem, because each is inefficient in either I/O transfer latency or I/O data locality utilization (discussed in Section 5). To diminish these limitations, we present a combined DCA and INIC solution. The solution not only takes the advantages of both methods but also makes many improvements to memory system policies and software strategies.

3. DESIGN METHODOLOGIES
In this section, we describe the proposed DCA combined with INIC solution and give a framework for its implementation. First, we present the improved DCA technology and discuss the key points of incorporating it into the I/O and memory system design. Then, the important software data structures and the details of the DCA scheme are given. Finally, we introduce the system interconnection architecture and the integration of the NIC.

3.1 Improved DCA
With the purpose of reducing the data transfer latency and memory traffic in the system, we present an improved Direct Cache Access solution. Unlike the conventional DCA scheme, our solution carefully considers the following points.

The first is cache coherence. Conventionally, data sent from a device by DMA is stored in memory only, and for the same address a different copy of the data may be stored in the cache, which usually requires an additional coherence unit to perform snoop operations [11]. When DCA is used, I/O data and CPU data are both stored in the cache, with one copy per memory address, as shown in Figure 3. Our solution therefore modifies the cache policy, which eliminates the snooping operations. Coherence operations can be performed by software when needed.
This will reduce much memory traffic for systems with coherence hardware [12].

Figure 3. Cache coherence with (a) conventional I/O and (b) DCA I/O.

The second is cache pollution. DCA is a mixed blessing for the CPU: on one side, it accelerates data transfer; on the other side, it harms the locality of other programs executing on the CPU and causes cache pollution. Cache pollution depends heavily on the I/O data size, which is usually quite large. For example, one Ethernet packet carries a maximum normal payload of 1492 bytes, and a maximum payload of 65536 bytes with Large Segment Offload (LSO). That means that for a common network buffer (usually 50 ~ 400 packets in size), from 400 KB up to 16 MB of data is sent to the cache. Such a large amount of data causes cache performance to drop dramatically. In this paper, we carefully investigate the relationship between the size of the I/O data sent by DCA and the size of the cache system. A DCA scheme achieving the best cache performance is also suggested in Section 4. Scheduling of the data sent with DCA is an effective way to improve performance, but it is beyond the scope of this paper.

The third is DCA policy. DCA policy refers to determining when and which part of the data is transferred with DCA. Obviously, the scheme is application-specific and varies with different user targets. In this paper, we reserve a specific memory address space in the system to receive the data transferred with DCA. The addresses of the data should be remapped to that area by the user or by compilers.

3.2 DCA Scheme and Details
To accelerate network processing, many important software structures used in the NIC driver and the stack are coupled with DCA. NIC descriptors and the associated data buffers receive special attention in our solution. The former are the data transfer interface between DMA and CPU, and the latter contain the packets. For further research, each packet stored in a buffer is divided into the header and the payload. Normally the headers are accessed frequently by protocols, but the payload is accessed only once or twice (usually via memcpy) in a modern network stack and OS. The details of the related software data structures and the network processing flow can be found in previous work [13].

The process of transferring one packet from the NIC to the stack with the proposed solution is illustrated in Table 1. All the access latency parameters in Table 1 are based on a state-of-the-art multi-core processor system [3]. One thing to notice is that the cache access latency from I/O is nearly the same as that from the CPU, but the memory access latency from I/O is about 2/3 of that from the CPU, due to the complex hardware hierarchy above the main memory.

Table 1. Steps and access latencies for transferring one packet from the NIC to the stack. [Table body lost in extraction.]

We can see that the DCA with INIC solution saves over 95% of CPU cycles in theory and avoids all traffic to the memory controller. In this paper, we transfer the NIC descriptors and the data buffers, including the headers and payload, with DCA to achieve the best performance. When the cache size is small, transferring only the descriptors and the headers with DCA is an alternative solution.

DCA performance depends heavily on the system cache policy. Obviously, for the cache system, a write-back with write-allocate policy helps DCA achieve better performance than a write-through with write-no-allocate policy.
Based on the analysis in Section 3.1, we do not use snooping cache technology to maintain coherence with memory. Cache coherence for other, non-DCA I/O data transfers is guaranteed by software.

3.3 On-chip Network and Integrated NIC

REFERENCES
[1] R. Huggahalli, R. Iyer, S. Tetrick, "Direct Cache Access for High Bandwidth Network I/O", ISCA, 2005.
[2] D. Tang, Y. Bao, W. Hu et al., "DMA Cache: Using On-chip Storage to Architecturally Separate I/O Data from CPU Data for Improving I/O Performance", HPCA, 2010.
[3] Guangdeng Liao, Xia Zhu, Laxmi Bhuyan, "A New Server I/O Architecture for High Speed Networks", HPCA, 2011.
[4] E. A. León, K. B. Ferreira, and A. B. Maccabe. Reducing the Impact of the Memory Wall for I/O Using Cache Injection. In 15th IEEE Symposium on High-Performance Interconnects (HOTI'07), Aug. 2007.
[5] A. Kumar, R. Huggahalli, S. Makineni, "Characterization of Direct Cache Access on Multi-core Systems and 10GbE", HPCA, 2009.
[6] Sun Niagara 2, /processors/niagara/index.jsp
[7] PowerPC
[8] Guangdeng Liao, L. Bhuyan, "Performance Measurement of an Integrated NIC Architecture with 10GbE", 17th IEEE Symposium on High Performance Interconnects, 2009.
[9] A. Foong et al., "TCP Performance Re-visited", IEEE Int'l Symposium on Performance Analysis of Software and Systems, Mar. 2003.
[10] D. Clark, V. Jacobson, J. Romkey, and H. Salwen. "An Analysis of TCP Processing Overhead". IEEE Communications, June 1989.
[11] J. Doweck, "Inside Intel Core microarchitecture and smart memory access", Intel White Paper, 2006.
[12] Amit Kumar, Ram Huggahalli. Impact of Cache Coherence Protocols on the Processing of Network Traffic.
[13] Wenji Wu, Matt Crawford, "Potential performance bottleneck in Linux TCP", International Journal of Communication Systems, Vol. 20, Issue 11, pages 1263–1283, November 2007.
[14] Weiwu Hu, Jian Wang, Xiang Gao, et al., "Godson-3: a scalable multicore RISC processor with x86 emulation", IEEE Micro, 2009, 29(2): pp. 17-29.
[15] Cadence Incisive Xtreme Series. /products/sd/xtreme_series.
[16] Synopsys GMAC IP. /dw/dwtb.php?a=ethernet_mac
[17] D. J. Miller, P. M. Watts, A. W. Moore, "Motivating Future Interconnects: A Differential Measurement Analysis of PCI Latency", ANCS, 2009.
[18] Nathan L. Binkert, Ali G. Saidi, Steven K. Reinhardt. Integrated Network Interfaces for High-Bandwidth TCP/IP. Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2006.
[19] G. Liao, L. Bhuyan, "Performance Measurement of an Integrated NIC Architecture with 10GbE", HotI, 2009.
[20] Intel Server Network I/O Acceleration. /technology/comms/perfnet/download/ServerNetworkIOAccel.pdf
Package 'bbknnR'                                            November 20, 2023

Title Perform Batch Balanced KNN in R
Version 1.1.0
Date 2023-11-17
Description A fast and intuitive batch effect removal tool for single-cell data. BBKNN is originally used in the 'scanpy' python package, and now can be used with 'Seurat' seamlessly.
License MIT + file LICENSE
Encoding UTF-8
Depends R (>= 4.1.0), methods, utils
LinkingTo Rcpp (>= 1.0.8)
Imports future, glmnet, Matrix, magrittr, Rcpp, RcppAnnoy, reticulate, rlang, Rtsne, Seurat, SeuratObject, tidytable, uwot (>= 0.1.14)
LazyData true
RoxygenNote 7.2.3
URL https:///ycli1995/bbknnR, https:///Teichlab/bbknn, https://bbknn.readthedocs.io/en/latest/
BugReports https:///ycli1995/bbknnR/issues
Suggests dplyr, knitr, rmarkdown, testthat (>= 3.0.0), patchwork
Config/testthat/edition 3
VignetteBuilder knitr
NeedsCompilation yes
Author Yuchen Li [aut, cre]
Maintainer Yuchen Li <********************>
Repository CRAN
Date/Publication 2023-11-20 15:10:09 UTC

R topics documented:
    panc8_small
    RidgeRegression
    RunBBKNN

panc8_small            A small example version of the pancreas scRNA-seq dataset

Description
A subsetted version of the pancreas scRNA-seq dataset to test BBKNN.

Usage
panc8_small

Format
A Seurat object with the following slots filled:
assays  Currently only contains one assay ("RNA" - scRNA-seq expression data)
  • counts - Raw expression data
  • data - Normalized expression data
  • scale.data - Scaled expression data
  • var.features - names of the current features selected as variable
  • meta.features - Assay level metadata such as mean and variance
meta.data  Cell level metadata
active.assay  Current default assay
active.ident  Current default idents
graphs  Empty
reductions  Dimensional reductions: currently PCA
version  Seurat version used to create the object
commands  Command history

Source
SeuratData https:///satijalab/seurat-data
RidgeRegression            Perform ridge regression on scaled expression data

Description
Perform ridge regression on scaled expression data, accepting both technical and biological categorical variables. The effect of the technical variables is removed while the effect of the biological variables is retained. This is a preprocessing step that can aid BBKNN integration.

Usage
RidgeRegression(object, ...)

## Default S3 method:
RidgeRegression(
  object,
  latent_data,
  batch_key,
  confounder_key,
  lambda = 1,
  seed = 42,
  verbose = TRUE,
  ...
)

## S3 method for class 'Seurat'
RidgeRegression(
  object,
  batch_key,
  confounder_key,
  assay = NULL,
  features = NULL,
  lambda = 1,
  run_pca = TRUE,
  npcs = 50,
  reduction.name = "pca",
  reduction.key = "PC_",
  replace = FALSE,
  seed = 42,
  verbose = TRUE,
  ...
)

Arguments
object  An object
...  Arguments passed to other methods
latent_data  Extra data to regress out, should be cells x latent data
batch_key  Variables to regress out as technical effects. Must be included in the column names of latent_data
confounder_key  Variables to retain as biological effects. Must be included in the column names of latent_data
lambda  A user supplied lambda sequence, passed to glmnet
seed  Set a random seed. By default, sets the seed to 42. Setting NULL will not set a seed.
verbose  Whether or not to print output to the console
assay  Name of the Assay ridge regression is being run on
features  Features to compute ridge regression on. If features = NULL, ridge regression will be run using the variable features for the Assay.
run_pca  Whether or not to run PCA with the regressed expression data (TRUE by default)
npcs  Total number of PCs to compute and store (50 by default)
reduction.name  Dimensional reduction name ("pca" by default)
reduction.key  Dimensional reduction key; specifies the string before the number for the dimension names ("PC_" by default)
replace  Whether or not to replace the original scale.data with the regressed expression data (TRUE by default)

Value
Returns a Seurat object.

References
Park, Jong-Eun, et al. "A cell atlas of human thymic development defines T cell repertoire formation." Science 367.6480 (2020): eaay3224.

RunBBKNN            Perform batch balanced KNN

Description
Batch balanced KNN, altering the KNN procedure to identify each cell's top neighbours in each batch separately instead of the entire cell pool with no accounting for batch. The nearest neighbours for each batch are then merged to create a final list of neighbours for the cell. Aligns batches in a quick and lightweight manner.

Usage
RunBBKNN(object, ...)

## Default S3 method:
RunBBKNN(
  object,
  batch_list,
  n_pcs = 50L,
  neighbors_within_batch = 3L,
  trim = NULL,
  approx = TRUE,
  use_annoy = TRUE,
  annoy_n_trees = 10L,
  pynndescent_n_neighbors = 30L,
  pynndescent_random_state = 0L,
  use_faiss = TRUE,
  metric = "euclidean",
  set_op_mix_ratio = 1,
  local_connectivity = 1,
  seed = 42,
  verbose = TRUE,
  ...
)

## S3 method for class 'Seurat'
RunBBKNN(
  object,
  batch_key,
  assay = NULL,
  reduction = "pca",
  n_pcs = 50L,
  graph_name = "bbknn",
  set_op_mix_ratio = 1,
  local_connectivity = 1,
  run_TSNE = TRUE,
  TSNE_name = "tsne",
  TSNE_key = "tSNE_",
  run_UMAP = TRUE,
  UMAP_name = "umap",
  UMAP_key = "UMAP_",
  min_dist = 0.3,
  spread = 1,
  seed = 42,
  verbose = TRUE,
  ...
)

Arguments
object  An object
...  Arguments passed to other methods
batch_list  A character vector with the same length as nrow(pca)
n_pcs  Number of dimensions to use. Default is 50.
neighbors_within_batch  How many top neighbours to report for each batch; the total number of neighbours in the initial k-nearest-neighbours computation will be this number times the number of batches. This then serves as the basis for the construction of a symmetrical matrix of connectivities.
trim  Trim the neighbours of each cell to these many top connectivities. May help with population independence and improve the tidiness of clustering. The lower the value, the more independent the individual populations, at the cost of more conserved batch effect. Default is 10 times neighbors_within_batch times the number of batches. Set to 0 to skip.
approx  If TRUE, use approximate neighbour finding (RcppAnnoy or pyNNDescent). This results in a quicker run time for large datasets while also potentially increasing the degree of batch correction.
use_annoy  Only used when approx = TRUE. If TRUE, will use RcppAnnoy for neighbour finding. If FALSE, will use pyNNDescent instead.
annoy_n_trees  Only used with annoy neighbour identification. The number of trees to construct in the annoy forest. More trees give higher precision when querying, at the cost of increased run time and resource intensity.
pynndescent_n_neighbors  Only used with pyNNDescent neighbour identification. The number of neighbours to include in the approximate neighbour graph. More neighbours give higher precision when querying, at the cost of increased run time and resource intensity.
pynndescent_random_state  Only used with pyNNDescent neighbour identification. The RNG seed to use when creating the graph.
use_faiss  If approx = FALSE and the metric is "euclidean", use the faiss package to compute nearest neighbours if installed. This improves performance at a minor cost to numerical precision, as faiss operates on float32.
metric  What distance metric to use. The options depend on the choice of neighbour algorithm. "euclidean", the default, is always available.
set_op_mix_ratio  Passed to the 'set_op_mix_ratio' parameter of umap
local_connectivity  Passed to the 'local_connectivity' parameter of umap
seed  Set a random seed. By default, sets the seed to 42. Setting NULL will not set a seed.
verbose  Whether or not to print output to the console
batch_key  Column name in meta.data discriminating between your batches.
Index
∗ datasets: panc8_small, 2
glmnet, 4
panc8_small, 2
RidgeRegression, 3
RunBBKNN, 4
umap, 6, 7
0.25 dB LSB, 7-Bit, Silicon Digital Attenuator, 0.1 GHz to 6.0 GHz

Data Sheet HMC1119
Rev. C | Document Feedback

Information furnished by Analog Devices is believed to be accurate and reliable. However, no responsibility is assumed by Analog Devices for its use, nor for any infringements of patents or other rights of third parties that may result from its use. Specifications subject to change without notice. No license is granted by implication or otherwise under any patent or patent rights of Analog Devices. Trademarks and registered trademarks are the property of their respective owners. One Technology Way, P.O. Box 9106, Norwood, MA 02062-9106, U.S.A. Tel: 781.329.4700. ©2016–2018 Analog Devices, Inc. All rights reserved.

FEATURES
Attenuation range: 0.25 dB LSB steps to 31.75 dB
Low insertion loss:
  1.1 dB at 1.0 GHz
  1.3 dB at 2.0 GHz
Typical step error: less than ±0.1 dB
Excellent attenuation accuracy: less than ±0.2 dB
Low phase shift error: 6° phase shift at 1.0 GHz
Safe state transitions
High linearity:
  1 dB compression (P1dB): 31 dBm typical
  Input third-order intercept (IP3): 54 dBm typical
RF settling time (0.05 dB final RF output): 250 ns
Single supply operation: 3.3 V to 5.0 V
ESD rating: Class 2 (2 kV human body model (HBM))
24-lead, 4 mm × 4 mm LFCSP package: 16 mm²

APPLICATIONS
Cellular infrastructure
Microwave radios and very small aperture terminals (VSATs)
Test equipment and sensors
IF and RF designs

FUNCTIONAL BLOCK DIAGRAM
Figure 1.

GENERAL DESCRIPTION
The HMC1119 is a broadband, highly accurate, 7-bit digital attenuator, operating from 0.1 GHz to 6.0 GHz with a 31.75 dB attenuation control range in 0.25 dB steps.

The HMC1119 is implemented in a silicon process, offering very fast settling time, low power consumption, and high ESD robustness. The device features safe state transitions and is optimized for excellent step accuracy and high linearity over the frequency and temperature range. The RF input and output are internally matched to 50 Ω and do not require any external matching components. The design is bidirectional; therefore, the RF input and output are interchangeable. The HMC1119 has an on-chip regulator that supports a wide supply operating range from 3.3 V to 5.0 V with no change in electrical characteristics. The HMC1119 incorporates a driver that supports serial (3-wire) and parallel control of the attenuator.

The HMC1119 comes in a RoHS-compliant, compact, 4 mm × 4 mm LFCSP package. A fully populated evaluation board is available.

TABLE OF CONTENTS
Features ... 1
Applications ... 1
Functional Block Diagram ... 1
General Description ... 1
Revision History ... 2
Specifications ... 3
  Electrical Specifications ... 3
  Timing Specifications ... 4
  Absolute Maximum Ratings ... 5
  ESD Caution ... 5
Pin Configuration and Function Descriptions ... 6
  Interface Schematics ... 7
Typical Performance Characteristics ... 8
  Insertion Loss, Return Loss, State Error, Step Error, and Relative Phase ... 8
  Input Power Compression and Third-Order Intercept ... 10
Theory of Operation ... 11
  Serial Control Interface ... 11
  RF Input Output ... 11
  Parallel Control Interface ... 12
  Power-Up Sequence ... 12
Applications Information ... 13
  Evaluation Printed Circuit Board ... 13
Packaging and Ordering Information ... 15
  Outline Dimensions ... 15
  Ordering Guide ... 15

REVISION HISTORY
4/2018—Rev. B to Rev. C
Changes to Figure 23 ... 12
Change to PCB Description, Table 7 ... 13
Updated Outline Dimensions ... 15

9/2017—Rev. A to Rev. B
Changed CP-24-16 to HCP-24-3 ... Throughout
Updated Outline Dimensions ... 15
Changes to Ordering Guide ... 15

8/2017—Rev. 0 to Rev. A
Added Timing Specifications Section ... 4
Moved Table 2 ... 4
Changes to Figure 5 and Figure 6 ... 7
Changes to Serial Control Interface Section ... 11
Moved Figure 22 and Table 6 ... 11
Changes to Figure 23 ... 12
Moved Parallel Control Interface Section, Direct Parallel Mode Section, Latched Parallel Mode Section, Power-Up Sequence Section, and Power-Up States Section ... 12
Updated Outline Dimensions ... 15

9/2016—Revision 0: Initial Version

SPECIFICATIONS
ELECTRICAL SPECIFICATIONS
VDD = 3.3 V to 5.0 V, TA = 25°C, 50 Ω system, unless otherwise noted.
Table 1.
FREQUENCY RANGE: 0.1 GHz to 6.0 GHz

INSERTION LOSS
  0.1 GHz to 1.0 GHz: 1.1 dB typ, 1.8 dB max
  0.1 GHz to 2.0 GHz: 1.3 dB typ, 2.0 dB max
  0.1 GHz to 4.0 GHz: 1.6 dB typ, 2.3 dB max
  0.1 GHz to 6.0 GHz: 2.0 dB typ, 2.8 dB max

ATTENUATION (0.2 GHz to 6.0 GHz)
  Range (delta between minimum and maximum attenuation states): 31.75 dB
  Accuracy (referenced to insertion loss; all attenuation states): −(0.05 + 4% of attenuation setting) to +(0.05 + 4% of attenuation setting) dB
  Step Error (all attenuation states): ±0.1 dB typ
  Overshoot (between all attenuation states): ≤0.1 dB

RETURN LOSS (all attenuation states; ATTNIN, ATTNOUT)
  1.0 GHz: 23 dB
  2.0 GHz: 22 dB
  4.0 GHz: 19 dB
  6.0 GHz: 17 dB

RELATIVE PHASE
  1.0 GHz: 6 degrees
  2.0 GHz: 18 degrees
  4.0 GHz: 38 degrees
  6.0 GHz: 58 degrees

SWITCHING CHARACTERISTICS
  tRISE, tFALL (10%/90% RF output): 60 ns
  tON, tOFF (50% CTL to 10%/90% RF output): 150 ns
  Settling time (50% CTL to 0.05 dB final RF output): 250 ns
  Settling time (50% CTL to 0.10 dB final RF output): 200 ns

INPUT LINEARITY (all attenuation states, 0.2 GHz to 6 GHz)
  0.1 dB compression (P0.1dB): 30 dBm
  1 dB compression (P1dB): 31 dBm
  Input third-order intercept (IP3) (two-tone input power = 16 dBm/tone, Δf = 1 MHz): 54 dBm

SUPPLY CURRENT (IDD)
  VDD = 3.3 V: 0.3 mA
  VDD = 5.0 V: 0.6 mA

CONTROL VOLTAGE THRESHOLD (input current <1 µA typical)
  Low: 0 V to 0.5 V (VDD = 3.3 V); 0 V to 0.8 V (VDD = 5.0 V)
  High: 2.0 V to 3.3 V (VDD = 3.3 V); 3.5 V to 5.0 V (VDD = 5.0 V)

RECOMMENDED OPERATING CONDITIONS
  Supply voltage range (VDD): 3.0 V to 5.4 V
  Digital control voltage range (P/S, CLK, SERNIN, LE, D0 to D6 pins): 0 V to VDD
  RF input power (all attenuation states, TCASE = 85°C): 24 dBm
  Case temperature (TCASE): −40°C to +85°C

TIMING SPECIFICATIONS
See Figure 23 and Figure 24 for the timing diagrams.

Table 2.
  tSCK — Minimum serial period (see Figure 23): 70 ns min
  tCS — Control setup time (see Figure 23): 15 ns min
  tCH — Control hold time (see Figure 23): 20 ns min
  tLN — LE setup time (see Figure 23): 15 ns min
  tLEW — Minimum LE pulse width (see Figure 24): 10 ns min
  tLES — Minimum LE pulse spacing (see Figure 23): 630 ns min
  tCKN — Serial clock hold time from LE (see Figure 23): 0 ns min
  tPH — Hold time (see Figure 24): 10 ns min
  tPS — Setup time (see Figure 24): 2 ns min

ABSOLUTE MAXIMUM RATINGS
Table 3.
  RF input power (TCASE = 85°C): 25 dBm
  Digital control inputs (P/S, CLK, SERNIN, LE, D0 to D6): −0.3 V to VDD + 0.5 V
  Supply voltage (VDD): −0.3 V to +5.5 V
  Continuous power dissipation (PDISS): 0.31 W
  Thermal resistance (at maximum power dissipation): 156°C/W
  Channel temperature: 135°C
  Storage temperature: −65°C to +150°C
  Maximum reflow temperature: 260°C (MSL3 rating)
  ESD sensitivity (HBM): 2 kV (Class 2)

Stresses at or above those listed under Absolute Maximum Ratings may cause permanent damage to the product. This is a stress rating only; functional operation of the product at these or any other conditions above those indicated in the operational section of this specification is not implied. Operation beyond the maximum operating conditions for extended periods may affect product reliability.

ESD CAUTION

PIN CONFIGURATION AND FUNCTION DESCRIPTIONS
Figure 2. Pin Configuration
Note: The exposed pad and GND pins must be connected to RF/DC ground.

Table 4. Pin Function Descriptions
Pins 1, 19 to 24 — D0, D6 to D1: Parallel Control Voltage Inputs. These pins attain the required attenuation (see Table 6).
There is no internal pull-up or pull-down on these pins; therefore, these pins must always be kept at a valid logic level (VIH or VIL) and must not be left floating.
Pin 2 — VDD: Supply Voltage Pin.
Pin 3 — P/S: Parallel/Serial Control Input. There is no internal pull-up or pull-down on this pin; therefore, this pin must always be kept at a valid logic level (VIH or VIL) and must not be left floating. For parallel mode, set Pin 3 to low; for serial mode, set Pin 3 to high.
Pins 4, 6 to 13, 15 — GND: Ground. The package bottom has an exposed metal pad that must connect to the printed circuit board (PCB) RF/DC ground. See Figure 4 for the GND interface schematic.
Pin 5 — ATTNIN: Attenuator Input. This pin is dc-coupled and matched to 50 Ω. A blocking capacitor is required. Select the value of the capacitor based on the lowest frequency of operation. See Figure 5.
Pin 14 — ATTNOUT: Attenuator Output. This pin is dc-coupled and matched to 50 Ω. A blocking capacitor is required. Select the value of the capacitor based on the lowest frequency of operation. See Figure 5.
Pin 16 — LE: Serial/Parallel Interface Latch Enable Input. There is no internal pull-up or pull-down on this pin; therefore, this pin must always be kept at a valid logic level (VIH or VIL) and must not be left floating. See the Theory of Operation section for more information.
Pin 17 — CLK: Serial Interface Clock Input. There is no internal pull-up or pull-down on this pin; therefore, this pin must always be kept at a valid logic level (VIH or VIL) and must not be left floating. See the Theory of Operation section for more information.
Pin 18 — SERNIN: Serial Interface Data Input. There is no internal pull-up or pull-down on this pin; therefore, this pin must always be kept at a valid logic level (VIH or VIL) and must not be left floating. See the Theory of Operation section for more information.
EPAD — Exposed Pad. The exposed pad must be connected to RF/DC ground.

INTERFACE SCHEMATICS
Figure 3. D0 to D6 Interface
Figure 4. GND Interface
Figure 5. ATTNIN and ATTNOUT Interface
Figure 6. P/S, LE, CLK, and SERNIN Interface

TYPICAL PERFORMANCE CHARACTERISTICS
INSERTION LOSS, RETURN LOSS, STATE ERROR, STEP ERROR, AND RELATIVE PHASE
Figure 7. Insertion Loss vs. Frequency at Various Temperatures
Figure 8. Input Return Loss (Major States Only)
Figure 9. State Error vs. Attenuation State, 0.1 GHz to 0.5 GHz
Figure 10. Normalized Attenuation (Major States Only)
Figure 11. Output Return Loss (Major States Only)
Figure 12. State Error vs. Attenuation State, 1 GHz to 6 GHz
Figure 13. State Error vs. Frequency, Major States Only
Figure 14. Relative Phase vs. Frequency, Major States Only
Figure 15.
Step Error vs. Frequency, Major States Only

INPUT POWER COMPRESSION AND THIRD-ORDER INTERCEPT
Figure 16. P1dB vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.05 GHz to 1 GHz
Figure 17. P0.1dB vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.05 GHz to 1 GHz
Figure 18. IP3 vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.1 GHz to 1 GHz
Figure 19. P1dB vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.05 GHz to 6 GHz
Figure 20. P0.1dB vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.05 GHz to 6 GHz
Figure 21. IP3 vs. Frequency at Various Temperatures, Minimum Attenuation State, 0.1 GHz to 6 GHz

THEORY OF OPERATION
The HMC1119 incorporates a 7-bit fixed attenuator array that offers an attenuation range of 0.25 dB to 31.75 dB, in 0.25 dB steps. An integrated driver provides both serial and parallel mode control of the attenuator array (see Figure 22).

The HMC1119 can be placed in either serial or parallel control mode by setting the P/S pin to high or low, respectively (see Table 5). The 7-bit data, loaded in either serial or parallel mode, is then latched with the control signal, LE, to determine the attenuator value.

Table 5. Mode Selection Table (1)
  P/S pin state low — parallel control mode
  P/S pin state high — serial control mode
(1) The P/S pin must always be kept at a valid logic level (VIH or VIL) and must not be left floating.

SERIAL CONTROL INTERFACE
The HMC1119 utilizes a 3-wire serial-to-parallel (SPI) configuration, as shown in the serial mode timing diagram (see Figure 23): serial data input (SERNIN), clock (CLK), and latch enable (LE). The serial control interface activates when the P/S pin is set to high.

In serial mode, the 7-bit SERNIN data is clocked MSB first on rising CLK edges into the shift register; then, LE must be toggled high to latch the new attenuation state into the device. LE must be set low while clocking a set of 7-bit data into the shift register, because CLK is masked to prevent the attenuator value from changing while LE is kept high.

In serial mode operation, both the serial control inputs (LE, CLK, SERNIN) and the parallel control inputs (D0 to D6) must always be kept at a valid logic level (VIH or VIL) and must not be left floating. It is recommended to connect the parallel control inputs to ground and to use pull-down resistors on all serial control input lines if the device driving these input lines goes high impedance during hibernation.

RF INPUT OUTPUT
The attenuator in the HMC1119 is bidirectional; the ATTNIN and ATTNOUT pins are interchangeable as the RF input and output ports. The attenuator is internally matched to 50 Ω at both input and output; therefore, no external matching components are required. The RF pins are dc-coupled; therefore, dc blocking capacitors are required on the RF lines.

Figure 22. Attenuator Array Functional Block Diagram

Table 6.
Truth Table
  Digital Control Input (1): D6 D5 D4 D3 D2 D1 D0 — Attenuation State (dB)
  Low  Low  Low  Low  Low  Low  Low  — 0 (reference)
  Low  Low  Low  Low  Low  Low  High — 0.25
  Low  Low  Low  Low  Low  High Low  — 0.5
  Low  Low  Low  Low  High Low  Low  — 1.0
  Low  Low  Low  High Low  Low  Low  — 2.0
  Low  Low  High Low  Low  Low  Low  — 4.0
  Low  High Low  Low  Low  Low  Low  — 8.0
  High Low  Low  Low  Low  Low  Low  — 16.0
  High High High High High High High — 31.75
(1) Any combination of the control voltage input states shown in Table 6 provides an attenuation equal to the sum of the bits selected.

Figure 23. Serial Control Timing Diagram

PARALLEL CONTROL INTERFACE
The parallel control interface has seven digital control input lines (D6 to D0) to set the attenuation value. D6 is the most significant bit (MSB), which selects the 16 dB attenuator stage, and D0 is the least significant bit (LSB), which selects the 0.25 dB attenuator stage (see Figure 22).

In parallel mode operation, both the serial control inputs (LE, CLK, SERNIN) and the parallel control inputs (D0 to D6) must always be kept at a valid logic level (VIH or VIL) and must not be left floating. It is recommended to connect the serial control inputs to ground and to use pull-down resistors on all parallel control input lines if the device driving these input lines goes high impedance during hibernation.

Setting P/S to low enables parallel mode. There are two modes of parallel operation: direct parallel mode and latched parallel mode.

Direct Parallel Mode
For direct parallel mode, the latch enable (LE) pin must be kept high. Change the attenuation state using the control voltage inputs (D0 to D6) directly. This mode is ideal for manual control of the attenuator using hardware, switches, or a jumper.

Latched Parallel Mode
The latch enable (LE) pin must be low when changing the control voltage inputs (D0 to D6) to set the attenuation state. When the desired state is set, LE must be toggled high to transfer the 7-bit data to the bypass switches of the attenuator array, then toggled low to latch the change into the device (see Figure 24).

Figure 24. Latched Parallel Mode Timing Diagram

POWER-UP SEQUENCE
The ideal power-up sequence is as follows:
1. Power up GND.
2. Power up VDD.
3. Power up the digital control inputs (the relative order of the digital control inputs is not important).
4. Power up the RF input.

For latched parallel mode operation, LE must be toggled. The relative order of the digital inputs is not important as long as the inputs are powered up after GND and VDD.

Power-Up States
The logic state of the device is at maximum attenuation when, at power up, LE is set to low. The attenuator latches in the desired power-up state approximately 200 ms after power up.
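Table 6 makes the control word arithmetic explicit: each bit contributes its stage's attenuation, so the 7-bit code is simply the target attenuation divided by the 0.25 dB LSB. The following Python sketch derives the code and shifts it out in the order the serial interface section describes (MSB first on rising CLK edges, then an LE pulse). It is an illustration only — set_pin() and pulse() are hypothetical GPIO helpers, not part of any Analog Devices API.

    LSB_DB = 0.25          # D0 weight; D6 weighs 16 dB (Table 6)
    MAX_CODE = 0x7F        # all bits high = 31.75 dB

    def attenuation_code(atten_db: float) -> int:
        """Quantize a requested attenuation (0 to 31.75 dB) to the 7-bit word."""
        code = round(atten_db / LSB_DB)
        if not 0 <= code <= MAX_CODE:
            raise ValueError("HMC1119 range is 0 to 31.75 dB in 0.25 dB steps")
        return code

    def write_serial(code: int, set_pin, pulse) -> None:
        """Clock D6..D0 MSB first on rising CLK edges, then toggle LE high."""
        set_pin("LE", 0)                    # LE low while shifting data in
        for bit in range(6, -1, -1):        # MSB (16 dB stage) first
            set_pin("SERNIN", (code >> bit) & 1)
            pulse("CLK")                    # data captured on the rising edge
        pulse("LE")                         # latch the new attenuation state

    print(bin(attenuation_code(10.25)))     # 0b101001 -> 8 + 2 + 0.25 dB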
APPLICATIONS INFORMATION
EVALUATION PRINTED CIRCUIT BOARD
The schematic of the evaluation board, EV2HMC1119LP4M, is shown in Figure 25. The PCB is a four-layer board with a copper thickness of 0.7 mils on each layer. Each copper layer is separated by a dielectric material. The top dielectric material is 10-mil RO4350 with a typical dielectric constant of 3.48. The middle and bottom dielectric materials are FR-4, used for mechanical strength and to meet the overall board thickness of approximately 62 mils, which allows SMA connectors to be mounted.

All RF and dc traces are routed on the top copper layer. The RF transmission lines are designed using a coplanar waveguide (CPWG) model with a width of 18 mils, spacing of 17 mils, and dielectric thickness of 10 mils to maintain 50 Ω characteristic impedance. The inner and bottom layers are solid ground planes. For optimal electrical and thermal performance, an ample number of vias are populated around the transmission lines and under the package exposed pad. The evaluation board layout serves as a recommendation for optimal performance in both electrical and thermal aspects.

Figure 25. EV2HMC1119LP4M Evaluation PCB

Table 7. Bill of Materials (1)(2)
  J1, J2 — PCB mount SMA connector
  J3 — 18-pin dc connector
  TP1, TP2 — Through hole mount test point
  C1, C3 — 100 pF capacitor, 0402 package
  C6 — 10 μF capacitor, 0603 package
  C7 — 1000 pF capacitor, 0402 package
  R1 to R11 — 0 Ω resistor, 0402 package
  R12 to R25 — 100 kΩ resistor, 0402 package
  SW1, SW2 — SPDT four-position DIP switch
  U1 — HMC1119 digital attenuator, Analog Devices, Inc.
  PCB (3) — 600-01280-00-1 evaluation PCB, EV2HMC1119LP4M (4), from Analog Devices
(1) Blank cells in the Value column indicate that there is no specific value recommendation for the listed component.
(2) Blank cells in the Manufacturer column indicate that there is no specific manufacturer recommendation for the listed component.
(3) Circuit board material is Arlon 25FR.
(4) Reference this number when ordering the full evaluation PCB. See the Ordering Guide section.

Figure 26. Applications Circuit

PACKAGING AND ORDERING INFORMATION
OUTLINE DIMENSIONS
Figure 27. 24-Lead Lead Frame Chip Scale Package [LFCSP], 4 mm × 4 mm Body and 0.85 mm Package Height (HCP-24-3). Dimensions shown in millimeters. For proper connection of the exposed pad, refer to the Pin Configuration and Function Descriptions section of this data sheet.

ORDERING GUIDE (1)(2)
  HMC1119LP4ME — −40°C to +85°C, MSL3, 24-Lead Lead Frame Chip Scale Package [LFCSP], HCP-24-3
  HMC1119LP4METR — −40°C to +85°C, MSL3, 24-Lead Lead Frame Chip Scale Package [LFCSP], HCP-24-3
  EV2HMC1119LP4M — Evaluation Board
(1) All models are RoHS compliant.
(2) See the Absolute Maximum Ratings section for the MSL rating.

©2016–2018 Analog Devices, Inc. All rights reserved. Trademarks and registered trademarks are the property of their respective owners.
A6V10361096_c_en_-- FDO181C Collective Smoke Detector Product Manual

Overview
The FDO181C is an optical smoke detector with an optical sensor. It works according to the principle of forward scattering. The detector reacts extremely sensitively to light aerosols caused by fire. The increased sensitivity makes the detection of smoldering and open fires possible.

Characteristics
– Intelligent detector with built-in CPU, providing advanced distributed intelligence for an optimally reliable detection principle
– Opto-electronic sampling chamber detects fire more reliably and accurately
– Collective detector, no address setting, polarity-free connection
– Particularly suited for the early detection of smoke-generating flaming and smoldering fires
– Resistant to environmental and interference factors such as dust, fibers, insects, humidity, extreme temperatures, corrosive vapors, vibration, and synthetic aerosols, with immunity against electro-magnetic interference
– Self-test of operating status; when a fault occurs or the voltage is low, the indicator prompts the user
– Automatic drift compensation and dust prompt for reducing false alarms caused by dust accumulation
– 360° visible alarm indicator
– Dust cap protects the detector from being contaminated by construction work

Application
– Communication with the FC18 controller via the FDCI183 transponder; each FDCI183 can connect a maximum of 32 collective detectors.
– Communication with the BC80 controller via the BDS161 transponder; each BDS161 can connect a maximum of 10 collective detectors.

Indicator
The detector is provided with an internal alarm indicator to show its operating status.
  Normal — Off
  Fault — Flash every 5 s
  Dust prompt (heavy dust) — Double flash every 5 s
  Alarm — Steady on

Installation
Easy and time-saving mounting: install the base and finish the wiring during the construction phase.
1. Insert the detector into the base and turn it clockwise until the positioning pointer on the base aligns with the positioning pointer on the detector (see below).

Uninstallation:
1. Turn the detector counterclockwise and pull it out of the base.

After all construction is finished, the dust cap must be taken away! No painting!

Dimensions
In mm (with the base)

Connection diagram

Maintenance
Performance test recommendations:
– Submit all detectors to an annual visual check. Detectors that are strongly soiled or mechanically damaged must be replaced.
– Long-term backup detectors should be stored in a sealed plastic bag.
– Carry out a smoke test each year.

Technical data
  Operating voltage: 10…28 VDC
  Operating current (quiescent): 0.1 mA
  Activation current: 60.0 mA
  Sensitivity (standard): 2.4 %/m
  Response time: 10 s
  Operating temperature: –10…+50°C
  Storage temperature: –20…+70°C
  Humidity: ≤96% (40±2°C)
  Color: White, RAL 9010
  Protection category: GB4208-93, IP40

Details for ordering
  FDO181C — S54320-F11-A2, 101190982 — Collective smoke detector, 0.081 kg
  FDB181C — S54320-F9-A2, 101190980 — Collective detector base, 0.038 kg
  A5Q00022000, 100566010 — FDO181 dust cap

Beijing Siemens Cerberus Electronics Limited
© Data and design subject to change without notice.
No.1, Fengzhidonglu, Xibeiwang, HaiDian District, Beijing, 100094, China
Tel: +10 6476 8806  Fax: +10 6476 8899
PYTHON FOR DATA SCIENCE CHEAT SHEET
Python Scikit-Learn

Introduction
Scikit-learn: "sklearn" is a machine learning library for the Python programming language. It offers simple and efficient tools for data mining, data analysis, and machine learning.
Importing convention:
>>> import sklearn

PREPROCESSING

Data Loading
• Using NumPy:
>>> import numpy as np
>>> a = np.array([(1,2,3,4), (7,8,9,10)], dtype=int)
>>> data = np.loadtxt('file_name.csv', delimiter=',')
• Using Pandas:
>>> import pandas as pd
>>> df = pd.read_csv('file_name.csv', header=0)

Train-Test Data
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Data Preparation
• Standardization
>>> from sklearn.preprocessing import StandardScaler
>>> get_names = df.columns
>>> scaler = StandardScaler()
>>> scaled_df = scaler.fit_transform(df)
>>> scaled_df = pd.DataFrame(scaled_df, columns=get_names)
• Normalization
>>> from sklearn import preprocessing
>>> df = pd.read_csv('file_name.csv')
>>> x_array = np.array(df['Column1'])  # Normalize Column1
>>> normalized_X = preprocessing.normalize([x_array])

WORKING ON MODEL

Model Choosing
Supervised Learning Estimators:
• Linear Regression:
>>> from sklearn.linear_model import LinearRegression
>>> new_lr = LinearRegression(normalize=True)
• Support Vector Machine:
>>> from sklearn.svm import SVC
>>> new_svc = SVC(kernel='linear')
• Naive Bayes:
>>> from sklearn.naive_bayes import GaussianNB
>>> new_gnb = GaussianNB()
• KNN:
>>> from sklearn import neighbors
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=1)
Unsupervised Learning Estimators:
• Principal Component Analysis (PCA):
>>> from sklearn.decomposition import PCA
>>> new_pca = PCA(n_components=0.95)
• K Means:
>>> from sklearn.cluster import KMeans
>>> k_means = KMeans(n_clusters=5, random_state=0)

Model Fitting
Supervised:
>>> new_lr.fit(X, y)
>>> knn.fit(X_train, y_train)
>>> new_svc.fit(X_train, y_train)
Unsupervised:
>>> k_means.fit(X_train)
>>> pca_model_fit = new_pca.fit_transform(X_train)
Prediction
Supervised:
>>> y_predict = new_svc.predict(np.random.random((3,5)))
>>> y_predict = new_lr.predict(X_test)
>>> y_predict = knn.predict_proba(X_test)
Unsupervised:
>>> y_pred = k_means.predict(X_test)

POST-PROCESSING

Evaluate Performance
Classification:
1. Confusion Matrix:
>>> from sklearn.metrics import confusion_matrix
>>> print(confusion_matrix(y_test, y_pred))
2. Accuracy Score:
>>> knn.score(X_test, y_test)
>>> from sklearn.metrics import accuracy_score
>>> accuracy_score(y_test, y_pred)
Regression:
1. Mean Absolute Error:
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2]
>>> mean_absolute_error(y_true, y_predict)
2. Mean Squared Error:
>>> from sklearn.metrics import mean_squared_error
>>> mean_squared_error(y_test, y_predict)
3. R² Score:
>>> from sklearn.metrics import r2_score
>>> r2_score(y_true, y_predict)
Clustering:
1. Homogeneity:
>>> from sklearn.metrics import homogeneity_score
>>> homogeneity_score(y_true, y_predict)
2. V-measure:
>>> from sklearn.metrics import v_measure_score
>>> v_measure_score(y_true, y_predict)
Cross-validation:
>>> from sklearn.model_selection import cross_val_score
>>> print(cross_val_score(knn, X_train, y_train, cv=4))
>>> print(cross_val_score(new_lr, X, y, cv=2))

Model Tuning
Grid Search:
>>> from sklearn.model_selection import GridSearchCV
>>> params = {"n_neighbors": np.arange(1,3), "metric": ["euclidean", "cityblock"]}
>>> grid = GridSearchCV(estimator=knn, param_grid=params)
>>> grid.fit(X_train, y_train)
>>> print(grid.best_score_)
>>> print(grid.best_estimator_.n_neighbors)
Randomized Parameter Optimization:
>>> from sklearn.model_selection import RandomizedSearchCV
>>> params = {"n_neighbors": range(1,5), "weights": ["uniform", "distance"]}
>>> rsearch = RandomizedSearchCV(estimator=knn, param_distributions=params, cv=4, n_iter=8, random_state=5)
>>> rsearch.fit(X_train, y_train)
>>> print(rsearch.best_score_)
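Putting the cheat sheet's pieces together end to end — load, split, standardize, fit, predict, evaluate — a minimal runnable example, using scikit-learn's bundled iris dataset (an illustrative choice, not part of the original sheet):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.preprocessing import StandardScaler
    from sklearn import neighbors
    from sklearn.metrics import accuracy_score, confusion_matrix

    # Load data and split it, as in the Train-Test Data section
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Standardize features (fit on the training set only, then apply to test)
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)

    # Choose, fit, and evaluate a KNN classifier
    knn = neighbors.KNeighborsClassifier(n_neighbors=3)
    knn.fit(X_train, y_train)
    y_pred = knn.predict(X_test)
    print(accuracy_score(y_test, y_pred))
    print(confusion_matrix(y_test, y_pred))
    print(cross_val_score(knn, X_train, y_train, cv=4))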
Bluetooth® mesh SDK 1.6.2.0 GA
Gecko SDK Suite 2.7
March 20, 2020

Bluetooth mesh is a new topology available for Bluetooth Low Energy (LE) devices that enables many-to-many (m:m) communication. It's optimized for creating large-scale device networks, and is ideally suited for building automation, sensor networks, and asset tracking. Our software and SDK for Bluetooth development supports Bluetooth mesh and Bluetooth 5 functionality. Developers can add mesh networking communication to LE devices such as connected lights, home automation, and asset tracking systems. The software also supports Bluetooth beaconing, beacon scanning, and GATT connections so Bluetooth mesh can connect to smart phones, tablets, and other Bluetooth LE devices.

These release notes cover SDK versions:
1.6.2.0 released March 20, 2020
1.6.1.0 released February 12, 2020
1.6.0.0 released December 20, 2019

Compatibility and Use Notices
If you are new to the Silicon Labs Bluetooth mesh SDK, see Using This Release.

Compatible Compilers:
IAR Embedded Workbench for ARM (IAR-EWARM) version 8.30.1
• Using wine to build with the IarBuild.exe command line utility or the IAR Embedded Workbench GUI on macOS or Linux could result in incorrect files being used due to collisions in wine's hashing algorithm for generating short file names.
• Customers on macOS or Linux are advised not to build with IAR outside of Simplicity Studio. Customers who do should carefully verify that the correct files are being used.
GCC (The GNU Compiler Collection) version 7.2.1, provided with Simplicity Studio.
• The link-time optimization feature of GCC has been disabled, resulting in a slight increase of image size.

Contents
1 New Items ... 3
  1.1 New Features ... 3
  1.2 New APIs ... 3
2 Improvements ... 6
  2.1 Changed APIs ... 6
  2.2 Changed Documents ... 7
3 Fixed Issues ... 8
4 Known Issues in the Current Release ... 9
5 Deprecated Items ... 10
6 Removed Items ... 11
7 Using This Release ... 12
  7.1 Installation and Use ... 12
  7.2 Support ... 12
8 Legal ... 13
  8.1 Disclaimer ... 13
  8.2 Trademark Information ... 13

1 New Items
1.1 New Features

Added in release 1.6.2.0
Profile qualification listing updated.
Model qualification listing updated for all models released as GA.
Support for time models (Time client, Time server, Time setup server) has been added as an alpha release.
Model-specific initialization of generic models has been added.
Additional Provisioner commands for manipulating the device database and key refresh procedure have been added.

Added in release 1.6.1.0
Models: qualification readiness for all implemented models.

Added in release 1.6.0.0
Models: support for light control models (LC client, LC server, LC setup server) has been added.
Models: support for scene models (scene client, scene server, scene setup server) has been added.
Note that due to a Mesh Model specification limitation, only a single instance of a Scene server model is supported.
New model qualification effort is ongoing, but not completed at the time of release.
Persistent storage: NVM3 is supported for Series 1 hardware SoC projects.
The SDK light sample application has been updated to support light controller functionality and scene functionality.
The SDK switch sample application has been updated to support scene recall functionality.
Mesh node functionality has been updated to support more than 240 replay protection list entries (the amount is limited by the size of persistent storage and RAM available for Mesh).
Mesh provisioner functionality has been updated to support up to 512 device database entries (the amount is limited by the size of persistent storage and RAM available for Mesh).

1.2 New APIs
For additional documentation, please refer to the Bluetooth Mesh Software API Reference Manual installed with the Bluetooth Mesh SDK.

Added in release 1.6.2.0
Data types for signed and unsigned 64-bit numbers have been added.
BGAPI commands and events for Time models have been added. Note that Time model functionality has been released as an alpha release.

Time client model commands and events added:
mesh_time_client_init(), mesh_time_client_deinit(), mesh_time_client_get_tai_utc_delta(), mesh_time_client_get_time(), mesh_time_client_get_time_role(), mesh_time_client_get_time_zone(), mesh_time_client_set_tai_utc_delta(), mesh_time_client_set_time(), mesh_time_client_set_time_role(), mesh_time_client_set_time_zone(), mesh_time_client_tai_utc_delta_status(), mesh_time_client_time_status(), mesh_time_client_time_role_status(), mesh_time_client_time_zone_status().

Time server and setup server model commands and events added:
mesh_time_server_init(), mesh_time_server_deinit(), mesh_time_server_get_datetime(), mesh_time_server_get_tai_utc_delta_new(), mesh_time_server_get_time(), mesh_time_server_get_time_role(), mesh_time_server_get_time_zone_offset_new(), mesh_time_server_set_time_zone_offset_new(), mesh_time_server_set_time(), mesh_time_server_set_time_role(), mesh_time_server_tai_utc_delta_updated(), mesh_time_server_time_updated(), mesh_time_server_time_role_updated(), mesh_time_server_time_zone_offset_updated().

Added in release 1.6.0.0
BGAPI commands and events for LC and scene models have been added.

LC client model commands and events:
mesh_lc_client_init(), mesh_lc_client_get_light_onoff(), mesh_lc_client_get_mode(), mesh_lc_client_get_om(), mesh_lc_client_get_property(), mesh_lc_client_set_light_onoff(), mesh_lc_client_set_mode(), mesh_lc_client_set_om(), mesh_lc_client_set_property(), mesh_lc_client_light_onoff_status(), mesh_lc_client_mode_status(), mesh_lc_client_om_status(), mesh_lc_client_property_status()

LC server model commands and events:
mesh_lc_server_init(), mesh_lc_server_deinit(), mesh_lc_server_init_all_properties(), mesh_lc_server_update_light_onoff(), mesh_lc_server_update_mode(), mesh_lc_server_update_om(), mesh_lc_server_ambient_lux_level_updated(), mesh_lc_server_light_onoff_updated(), mesh_lc_server_linear_output_updated(), mesh_lc_server_mode_updated(), mesh_lc_server_occupancy_updated(), mesh_lc_server_om_updated(), mesh_lc_server_set_publish_mask()

LC setup server model commands and events:
mesh_lc_setup_server_update_property(), mesh_lc_setup_server_set_property()

Scene client model commands and
events:
mesh_scene_client_init(), mesh_scene_client_delete(), mesh_scene_client_get(), mesh_scene_client_get_register(), mesh_scene_client_recall(), mesh_scene_client_store(), mesh_scene_client_register_status(), mesh_scene_client_status()

Scene server model commands and events:
mesh_scene_server_init(), mesh_scene_server_deinit(), mesh_scene_server_get(), mesh_scene_server_publish(), mesh_scene_server_recall(), mesh_scene_server_register_get()

Scene setup server model commands and events:
mesh_scene_setup_server_init(), mesh_scene_setup_server_delete(), mesh_scene_setup_server_publish(), mesh_scene_setup_server_store()

2 Improvements
2.1 Changed APIs

Changed in release 1.6.2.0
BGAPI commands for model-specific initialization of generic client and server models have been added. These commands can be used in place of the already-existing initialization commands and may result in a reduced firmware image size for an SoC project that uses only a subset of generic models.

Eleven generic client model commands have been added:
mesh_generic_client_init_battery(), mesh_generic_client_init_common(), mesh_generic_client_init_ctl(), mesh_generic_client_init_default_transition_time(), mesh_generic_client_init_level(), mesh_generic_client_init_lightness(), mesh_generic_client_init_location(), mesh_generic_client_init_on_off(), mesh_generic_client_init_power_level(), mesh_generic_client_init_power_on_off(), mesh_generic_client_init_property().

Eleven generic server model commands have been added:
mesh_generic_server_init_battery(), mesh_generic_server_init_common(), mesh_generic_server_init_ctl(), mesh_generic_server_init_default_transition_time(), mesh_generic_server_init_level(), mesh_generic_server_init_lightness(), mesh_generic_server_init_location(), mesh_generic_server_init_on_off(), mesh_generic_server_init_power_level(), mesh_generic_server_init_power_on_off(), mesh_generic_server_init_property().

BGAPI commands for the Provisioner have been added to allow additional manipulation of the Provisioner's device database and key refresh procedure state.

Six Provisioner commands have been added:
mesh_prov_ddb_update_netkey_index(), mesh_prov_flush_key_refresh_state(), mesh_prov_get_key_refresh_phase(), mesh_prov_key_refresh_resume(), mesh_prov_key_refresh_start_from_phase(), mesh_prov_key_refresh_suspend().

Changed in release 1.6.1.0
One new BGAPI command has been added.
Added LC server model command: mesh_lc_server_set_regulator_interval()

Changed in release 1.6.0.0
Four new BGAPI commands and five new BGAPI events have been added.
Added mesh node commands: mesh_node_rssi(), mesh_node_set_beacon_reporting(), mesh_node_stop_unprov_beaconing()
Added generic server event: mesh_generic_server_state_recall()
Added mesh node events: mesh_node_beacon_received(), mesh_node_heartbeat(), mesh_node_heartbeat_start(), mesh_node_heartbeat_stop()
One new BGAPI error code has been added: bg_err_mesh_no_data_available

2.2 Changed Documents
None

3 Fixed Issues
Fixed in release 1.6.2.0
466717 — Fixed handling of extended scan response events in the stack
469269 — Fixed generic level unintentional wrap-around from minimum to maximum value when using move requests
471532 — Disengaging the LC server from lightness did not work if the lightness level was manually changed via a bound state

Fixed in release 1.6.1.0
434160 — Added missing handling of segmented control messages
444559 — Optimized model configuration storage management on bootup
448945 — Made handling of disconnections during PB-GATT provisioning more robust
454060 — Fixed Scene Server model storage allocation
455878 — Fixed LC Server ambient lux level
reporting
456186 — Fixed handling of segmented messages once a preceding segmented message reception was cancelled
456976 — Fixed buffer allocation for stack-internal control messages
458876 — Fixed error code given on provisioning failure due to a malformed message
459204 — Fixed relaying of PDUs with RFU destination addresses
461448 — Fixed emitted heartbeat count when using the local test interface

Fixed in release 1.6.0.0
4975, 439811 — GCC linking with link-time optimization disabled
270215 — Added BGAPI for heartbeat monitoring in the application
417988 — Added BGAPI for received secure network beacon monitoring in the application
421689 — Key index and key value storing made into an atomic operation
447682 — Fixed local loopback interface
449257 — Fixed project conversion to C++ in Studio

4 Known Issues in the Current Release
Issues in bold were added since the previous release.
3878 — Mesh GATT events visible to the application. Workaround: the application can ignore BGAPI events related to GATT provisioning and proxying based on service and characteristic parameters.
5662 — Default device UUID does not conform to RFC 4122. Workaround: the customer needs to explicitly set the UUID to a conformant one.
339993 — ISC file comments cause errors when generating code. Workaround: avoid using comments in ISC files.
401550 — No BGAPI event for segmented message handling failure. Workaround: the application needs to deduce failure from a timeout / lack of application-layer response.
418636 — Issues with the mesh_test local configuration state API (node identity, relay, network retransmission).
454059 — A large number of key refresh state change events are generated at the end of the KR process, and that may flood the NCP queue. Workaround: increase the NCP queue length in the project.
454061 — Slight performance degradation compared to 1.5 was observed in round-trip latency tests.
454332 — Missing Mesh-specific API for generating and receiving scan response data for GATT provisioning service advertisements. Workaround: use the LE GAP API.
467080 — The BTMESH_HEAP_SIZE macro does not take model counts into account. Workaround: manually edit the macro to handle model counts if some models are present more than once.
468761 — Serialization/deserialization buffers in mesh_lib are too small for many properties. Workaround: manually increase buffer sizes in mesh_lib functions.
470417 — Polymorphic GATT database capability settings clash with the Mesh proxy and provisioning services.

5 Deprecated Items
None

6 Removed Items
None.

7 Using This Release
This release contains the following:
• Silicon Labs Bluetooth mesh stack library
• Bluetooth sample applications
If you are a first time user, see QSG148: Getting Started with Bluetooth® Mesh Software Development.

7.1 Installation and Use
A registered account at Silicon Labs is required in order to download the Silicon Labs Bluetooth SDK. You can register at https:///apex/SL_CommunitiesSelfReg?form=short.
Stack installation instructions are covered in QSG148: Getting Started with Bluetooth® Mesh Software Development.
Use the Bluetooth mesh SDK with the Silicon Labs Simplicity Studio V4 development platform. Simplicity Studio ensures that most software and tool compatibilities are managed correctly. Install software and board firmware updates promptly when you are notified.
Documentation specific to the SDK version is installed with the SDK. Additional information can often be found in the knowledge base articles (KBAs). API references and other information about this and earlier releases are available on https:///.

7.2 Support
Development Kit customers are eligible for training and technical support.
You can use the Silicon Labs Bluetooth mesh web page to obtain information about all Silicon Labs Bluetooth products and services, and to sign up for product support. You can contact Silicon Laboratories support at /support.

8 Legal
8.1 Disclaimer
Silicon Labs intends to provide customers with the latest, accurate, and in-depth documentation of all peripherals and modules available for system and software implementers using or intending to use the Silicon Labs products. Characterization data, available modules and peripherals, memory sizes and memory addresses refer to each specific device, and "Typical" parameters provided can and do vary in different applications. Application examples described herein are for illustrative purposes only. Silicon Labs reserves the right to make changes without further notice and limitation to product information, specifications, and descriptions herein, and does not give warranties as to the accuracy or completeness of the included information. Silicon Labs shall have no liability for the consequences of use of the information supplied herein. This document does not imply or express copyright licenses granted hereunder to design or fabricate any integrated circuits. The products are not designed or authorized to be used within any Life Support System. A "Life Support System" is any product or system intended to support or sustain life and/or health, which, if it fails, can be reasonably expected to result in significant personal injury or death. Silicon Labs products are not designed or authorized for military applications. Silicon Labs products shall under no circumstances be used in weapons of mass destruction including (but not limited to) nuclear, biological or chemical weapons, or missiles capable of delivering such weapons.

8.2 Trademark Information
Silicon Laboratories Inc.®, Silicon Laboratories®, Silicon Labs®, SiLabs® and the Silicon Labs logo®, Bluegiga®, Bluegiga Logo®, Clockbuilder®, CMEMS®, DSPLL®, EFM®, EFM32®, EFR, Ember®, Energy Micro, Energy Micro logo and combinations thereof, "the world's most energy friendly microcontrollers", EZLink®, EZRadio®, EZRadioPRO®, Gecko®, ISOmodem®, Micrium, Precision32®, ProSLIC®, Simplicity Studio®, SiPHY®, Telegesis, the Telegesis Logo®, USBXpress®, Zentri, Z-Wave and others are trademarks or registered trademarks of Silicon Labs. ARM, CORTEX, Cortex-M3 and THUMB are trademarks or registered trademarks of ARM Holdings. Keil is a registered trademark of ARM Limited. All other products or brand names mentioned herein are trademarks of their respective holders.
Discriminatively Trained Sparse Code Gradients for Contour Detection

Xiaofeng Ren and Liefeng Bo
Intel Science and Technology Center for Pervasive Computing, Intel Labs
Seattle, WA 98195, USA
{xiaofeng.ren, liefeng.bo}@intel.com

Abstract
Finding contours in natural images is a fundamental problem that serves as the basis of many tasks such as image segmentation and object recognition. At the core of contour detection technologies are a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb) operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure contrast using patch representations automatically learned through sparse coding. We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely, Sparse Code Gradients effectively learn how to measure local contrasts and find contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using depth and depth+color, as verified on the NYU Depth Dataset.

1 Introduction
Contour detection is a fundamental problem in vision. Accurately finding both object boundaries and interior contours has far-reaching implications for many vision tasks including segmentation, recognition and scene understanding. High-quality image segmentation has increasingly been relying on contour analysis, such as in the widely used system of Global Pb [2]. Contours and segmentations have also seen extensive uses in shape matching and object recognition [8, 9].

Accurately finding contours in natural images is a challenging problem and has been extensively studied. With the availability of datasets with human-marked groundtruth contours, a variety of approaches have been proposed and evaluated (see a summary in [2]), such as learning to classify [17, 20, 16], contour grouping [23, 31, 12], multi-scale features [21, 2], and hierarchical region analysis [2]. Most of these approaches have one thing in common [17, 23, 31, 21, 12, 2]: they are built on top of a set of gradient features [17] measuring the local contrast of oriented discs, using chi-square distances of histograms of color and textons. Despite various efforts to use generic image features [5] or learn them [16], these hand-designed gradients are still widely used after a decade and support top-ranking algorithms on the Berkeley benchmarks [2].

In this work, we demonstrate that contour detection can be vastly improved by replacing the hand-designed Pb gradients of [17] with rich representations that are automatically learned from data.
We use sparse coding, in particular Orthogonal Matching Pursuit [18] and K-SVD [1], to learn such representations on patches. Instead of a direct classification of patches [16], the sparse codes on the pixels are pooled over multi-scale half-discs for each orientation, in the spirit of the Pb
[29].Similar to deep network approaches [11,14],recent works tried to avoid feature engineering and employed sparse coding of image patches to learn features from “scratch”,for texture analysis [15]and object recognition [30,3].In particular,Orthogonal Matching Pursuit [18]is a greedy algorithm that incrementally finds sparse codes,and K-SVD is also efficient and popular for dictionary learning.Closely related to our work but on the different problem of recognition,Bo ed matching pursuit and K-SVD to learn features in a coding hierarchy [3]and are extending their approach to RGB-D data [4].Thanks to the mass production of Kinect,active RGB-D cameras became affordable and were quickly adopted in vision research and applications.The Kinect pose estimation of Shotton et. ed random forests to learn from a huge amount of data[25].Henry ed RGB-D cam-eras to scan large environments into3D models[10].RGB-D data were also studied in the context of object recognition[13]and scene labeling[27,22].In-depth studies of contour and segmentation problems for depth data are much in need given the fast growing interests in RGB-D perception.3Contour Detection using Sparse Code GradientsWe start by examining the processing pipeline of Global Pb(gPb)[2],a highly influential and widely used system for contour detection.The gPb contour detection has two stages:local contrast estimation at multiple scales,and globalization of the local cues using spectral grouping.The core of the approach lies within its use of local cues in oriented gradients.Originally developed in [17],this set of features use relatively simple pixel representations(histograms of brightness,color and textons)and similarity functions(chi-square distance,manually chosen),comparing to recent advances in using rich representations for high-level recognition(e.g.[11,29,30,3]).We set out to show that both the pixel representation and the aggregation of pixel information in local neighborhoods can be much improved and,to a large extent,learned from and adapted to input data. 
For pixel representation,in Section3.1we show how to use Orthogonal Matching Pursuit[18]and K-SVD[1],efficient sparse coding and dictionary learning algorithms that readily apply to low-level vision,to extract sparse codes at every pixel.This sparse coding approach can be viewed similar in spirit to the use offilterbanks but avoids manual choices and thus directly applies to the RGB-D data from Kinect.We show learned dictionaries for a number of channels that exhibit different characteristics:grayscale/luminance,chromaticity(ab),depth,and surface normal.In Section3.2we show how the pixel-level sparse codes can be integrated through multi-scale pool-ing into a rich representation of oriented local neighborhoods.By computing oriented gradients on this high dimensional representation and using a double power transform to code the features for linear classification,we show a linear SVM can be efficiently and effectively trained for each orientation to classify contour vs non-contour,yielding local contrast estimates that are much more accurate than the hand-designed features in gPb.3.1Local Sparse Representation of RGB-(D)PatchesK-SVD and Orthogonal Matching Pursuit.K-SVD[1]is a popular dictionary learning algorithm that generalizes K-Means and learns dictionaries of codewords from unsupervised data.Given a set of image patches Y=[y1,···,y n],K-SVD jointlyfinds a dictionary D=[d1,···,d m]and an associated sparse code matrix X=[x1,···,x n]by minimizing the reconstruction errorminY−DX 2F s.t.∀i, x i 0≤K;∀j, d j 2=1(1) D,Xwhere · F denotes the Frobenius norm,x i are the columns of X,the zero-norm · 0counts the non-zero entries in the sparse code x i,and K is a predefined sparsity level(number of non-zero en-tries).This optimization can be solved in an alternating manner.Given the dictionary D,optimizing the sparse code matrix X can be decoupled to sub-problems,each solved with Orthogonal Matching Pursuit(OMP)[18],a greedy algorithm forfinding sparse codes.Given the codes X,the dictionary D and its associated sparse coefficients are updated sequentially by singular value decomposition. For our purpose of representing local patches,the dictionary D has a small size(we use75for5x5 patches)and does not require a lot of sample patches,and it can be learned in a matter of minutes. 
Once the dictionary D is learned, we again use the Orthogonal Matching Pursuit (OMP) algorithm to compute sparse codes at every pixel. This can be efficiently done with convolution and a batch version of the OMP algorithm [24]. For a typical BSDS image of resolution 321x481, the sparse code extraction is efficient and takes 1-2 seconds.

Sparse Representation of RGB-D Data. One advantage of unsupervised dictionary learning is that it readily applies to novel sensor data, such as the color and depth frames from a Kinect-style RGB-D camera. We learn K-SVD dictionaries for up to four channels of color and depth: grayscale for luminance, chromaticity ab for color in the Lab space, depth (distance to camera) and surface normal (3-dim). The learned dictionaries are visualized in Fig. 2.

[Figure 2: K-SVD dictionaries learned for four different channels: (a) grayscale and (b) chromaticity (in ab) for an RGB image, and (c) depth and (d) surface normal for a depth image. We use a fixed dictionary size of 75 on 5x5 patches. The ab channel is visualized using a constant luminance of 50. The 3-dimensional surface normal (xyz) is visualized in RGB (i.e. blue for frontal-parallel surfaces).]

These dictionaries are interesting to look at and qualitatively distinctive: for example, the surface normal codewords tend to be smoother due to flat surfaces, the depth codewords are also smoother but with speckles, and the chromaticity codewords respect the opponent color pairs. The channels are coded separately.

3.2 Coding Multi-Scale Neighborhoods for Measuring Contrast

Multi-Scale Pooling over Oriented Half-Discs. Over decades of research on contour detection and related topics, a number of fundamental observations have been made, repeatedly: (1) contrast is the key to differentiating contour vs non-contour; (2) orientation is important for respecting contour continuity; and (3) multi-scale is useful. We do not wish to throw out these principles. Instead, we seek to adopt these principles for our case of high dimensional representations with sparse codes.

Each pixel is represented with sparse codes extracted from a small patch (5-by-5) around it. To aggregate pixel information, we use oriented half-discs as used in gPb (see an illustration in Fig. 1). Each orientation is processed separately. For each orientation, at each pixel p and scale s, we define two half-discs (rectangles) N_a and N_b of size s-by-(2s+1), on both sides of p, rotated to that orientation. For each half-disc N, we use average pooling on non-zero entries (i.e. a hybrid of average and max pooling) to generate its representation

$$F(N) = \left[ \frac{\sum_{i \in N} |x_{i1}|}{\sum_{i \in N} I_{|x_{i1}|>0}}, \cdots, \frac{\sum_{i \in N} |x_{im}|}{\sum_{i \in N} I_{|x_{im}|>0}} \right] \qquad (2)$$

where x_{ij} is the j-th entry of the sparse code x_i, and I is the indicator function of whether x_{ij} is non-zero. We rotate the image (after sparse coding) and use integral images for fast computations (on both |x_{ij}| and I_{|x_{ij}|>0}), whose costs are independent of the size of N.

For two oriented half-discs N_a^s and N_b^s at a scale s, we compute a difference (gradient) vector D

$$D(N_a^s, N_b^s) = \left| F(N_a^s) - F(N_b^s) \right| \qquad (3)$$

where $|\cdot|$ is an element-wise absolute value operation. We divide $D(N_a^s, N_b^s)$ by their norms $\|F(N_a^s)\| + \|F(N_b^s)\| + \epsilon$, where $\epsilon$ is a positive number. Since the magnitude of sparse codes varies over a wide range due to local variations in illumination as well as occlusion, this step makes the appearance features robust to such variations and increases their discriminative power, as commonly done in both contour detection and object recognition.
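A minimal sketch of the pooling and gradient computation in Eqs. (2)-(3), assuming the per-pixel sparse codes are stored densely and the rotated half-discs are given as boolean masks; the paper's rotated integral images are omitted for clarity, and the helper names are ours, not from the paper.

```python
import numpy as np

def pool_nonzero(codes, mask):
    """Eq. (2): per-codeword average of |x_ij| over its non-zero entries in N.

    codes: (H, W, m) sparse codes per pixel; mask: (H, W) boolean half-disc N.
    """
    a = np.abs(codes[mask])                   # (|N|, m)
    nz = (a > 0).sum(axis=0)                  # non-zero counts per codeword
    return a.sum(axis=0) / np.maximum(nz, 1)  # avoid division by zero

def gradient_vector(codes, mask_a, mask_b, eps=0.5):
    """Eq. (3) plus normalization: |F(N_a) - F(N_b)| / (||F(N_a)|| + ||F(N_b)|| + eps)."""
    Fa, Fb = pool_nonzero(codes, mask_a), pool_nonzero(codes, mask_b)
    return np.abs(Fa - Fb) / (np.linalg.norm(Fa) + np.linalg.norm(Fb) + eps)
```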
The value of $\epsilon$ is not hard to set, and we find that $\epsilon = 0.5$ works better than, for instance, $\epsilon = 0$. At this stage, one could train a classifier on D for each scale to convert it to a scalar value of contrast, which would resemble the chi-square distance function in gPb. Instead, we find that it is much better not to do so separately at each scale, but to combine multi-scale features in a joint representation, so as to allow interactions both between codewords and between scales. That is, our final representation of the contrast at a pixel p is the concatenation of sparse codes pooled at all the scales s ∈ {1, ..., S} (we use S = 4):

$$D_p = \left[ D(N_a^1, N_b^1), \cdots, D(N_a^S, N_b^S);\ F(N_a^1 \cup N_b^1), \cdots, F(N_a^S \cup N_b^S) \right] \qquad (4)$$

In addition to the difference D, we also include a union term F(N_a^s ∪ N_b^s), which captures the appearance of the whole disc (union of the two half-discs) and is normalized by $\|F(N_a^s)\| + \|F(N_b^s)\| + \epsilon$.

Double Power Transform and Linear Classifiers. The concatenated feature D_p (non-negative) provides multi-scale contrast information for classifying whether p is a contour location for a particular orientation. As D_p is high dimensional (1200 and above in our experiments) and we need to do it at every pixel and every orientation, we prefer using linear SVMs for both efficient testing as well as training. Directly learning a linear function on D_p, however, does not work very well. Instead, we apply a double power transformation to make the features more suitable for linear SVMs:

$$\widetilde{D}_p = \left[ D_p^{\alpha_1},\ D_p^{\alpha_2} \right] \qquad (5)$$

where 0 < α_1 < α_2 < 1. Empirically, we find that the double power transform works much better than either no transform or a single power transform α, as sometimes done in other classification contexts. Perronnin et al. [19] provided an intuition why a power transform helps classification: it "re-normalizes" the distribution of the features into a more Gaussian form. One plausible intuition for a double power transform is that the optimal exponent α may be different across feature dimensions. By putting two power transforms of D_p together, we allow the classifier to pick its linear combination, different for each dimension, during the stage of supervised training.
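A sketch of the double power transform in Eq. (5); the exponents 0.25 and 0.75 are the values reported in the experiments section, and both can be computed with square roots alone. The function name is ours.

```python
import numpy as np

def double_power(Dp):
    """Eq. (5) with alpha_1 = 0.25 and alpha_2 = 0.75, via square roots only."""
    r = np.sqrt(np.asarray(Dp, dtype=float))             # Dp ** 0.5
    return np.concatenate([np.sqrt(r), r * np.sqrt(r)])  # [Dp**0.25, Dp**0.75]
```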
From Local Contrast to Global Contours. We intentionally only change the local contrast estimation in gPb and keep the other steps fixed. These steps include: (1) the Savitzky-Golay filter to smooth responses and find peak locations; (2) non-max suppression over orientations; and (3) optionally, the globalization step in gPb that computes a spectral gradient from the local gradients and then linearly combines the spectral gradient with the local ones. A sigmoid transform step is needed to convert the SVM outputs on D_p before computing spectral gradients.

4 Experiments

We use the evaluation framework of, and extensively compare to, the publicly available Global Pb (gPb) system [2], widely used as the state of the art for contour detection.¹ All the results reported on gPb are from running the gPb contour detection and evaluation codes (with default parameters), and accuracies are verified against the published results in [2]. The gPb evaluation includes a number of criteria, including precision-recall (P/R) curves from contour matching (Fig. 4), F-measures computed from P/R (Tables 1, 2, 3) with a fixed contour threshold (ODS) or per-image thresholds (OIS), as well as average precision (AP) from the P/R curves.

Benchmark Datasets. The main dataset we use is the BSDS500 benchmark [2], an extension of the original BSDS300 benchmark and commonly used for contour evaluation. It includes 500 natural images of roughly 321x481 resolution, including 200 for training, 100 for validation, and 200 for testing. We conduct both color and grayscale experiments (where we convert the BSDS500 images to grayscale and retain the groundtruth). In addition, we also use the MSRC2 and PASCAL2008 segmentation datasets [26,6], as done in the gPb work [2]. The MSRC2 dataset has 591 images of resolution 200x300; we randomly choose half for training and half for testing. The PASCAL2008 dataset includes 1023 images in its training and validation sets, roughly of resolution 350x500. We randomly choose half for training and half for testing. For RGB-D contour detection, we use the NYU Depth dataset (v2) [27], which includes 1449 pairs of color and depth frames of resolution 480x640, with groundtruth semantic regions. We choose 60% of the images for training and 40% for testing, as in its scene labeling setup. The Kinect images are of lower quality than BSDS, and we resize the frames to 240x320 in our experiments.

¹ In this work we focus on contour detection and do not address how to derive segmentations from contours.

Training Sparse Code Gradients. Given sparse codes from K-SVD and Orthogonal Matching Pursuit, we train the Sparse Code Gradients classifiers, one linear SVM per orientation, from sampled locations. For positive data, we sample groundtruth contour locations and estimate the orientations at these locations using the groundtruth. For negative data, locations and orientations are random. We subtract the mean from the patches in each data channel. For BSDS500, we typically have 1.5 to 2 million data points. We use 4 spatial scales, at half-disc sizes 2, 4, 7, 25. For a dictionary size of 75 and 4 scales, the feature length for one data channel is 1200. For full RGB-D data, the dimension is 4800. For BSDS500, we train only using the 200 training images. We modify liblinear [7] to take dense matrices (features are dense after pooling) and single-precision floats.
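The paper trains with a modified liblinear; purely as a stand-in illustration, scikit-learn's LinearSVC (which also wraps liblinear) sketches the per-orientation training step. The array names are hypothetical: one feature matrix of double-power-transformed, pooled features per orientation, with +1/-1 contour labels.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_orientation_classifiers(features_by_orient, labels_by_orient):
    """One linear SVM per orientation, on dense single-precision features."""
    return [LinearSVC(C=1.0).fit(F.astype(np.float32), y)
            for F, y in zip(features_by_orient, labels_by_orient)]
```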
Looking under the Hood. We empirically analyze a number of settings in our Sparse Code Gradients. In particular, we want to understand how the choices in the local sparse coding affect contour classification. Fig. 3 shows the effects of multi-scale pooling, dictionary size, and sparsity level (K). The numbers reported are intermediate results, namely the mean of the average precision of four oriented gradient classifiers (0, 45, 90, 135 degrees) on sampled locations (grayscale unless otherwise noted, on validation). As a reference, the average precision of gPb on this task is 0.878.

[Figure 3: Analysis of our sparse code gradients, using average precision of classification on sampled boundaries. (a) The effect of single-scale vs multi-scale pooling (accumulated from the smallest). (b) Accuracy increasing with dictionary size, for four orientation channels. (c) The effect of the sparsity level K, which exhibits different behavior for grayscale and chromaticity.]

For multi-scale pooling, the single best scale for the half-disc filter is about 4x8, consistent with the settings in gPb. For accumulated scales (using all the scales from the smallest up to the current level), the accuracy continues to increase and does not seem to be saturated, suggesting the use of larger scales. The dictionary size has a minor impact, and there is a small (yet observable) benefit to using dictionaries larger than 75, particularly for diagonal orientations (45- and 135-deg). The sparsity level K is a more intriguing issue. In Fig. 3(c), we see that for grayscale only, K = 1 (normalized nearest neighbor) does quite well; on the other hand, color needs a larger K, possibly because ab is a nonlinear space. When combining grayscale and color, it seems that we want K to be at least 3. It also varies with orientation: horizontal and vertical edges require a smaller K than diagonal edges. (If using K = 1, our final F-measure on BSDS500 is 0.730.)

We also empirically evaluate the double power transform vs single power transform vs no transform. With no transform, the average precision is 0.865. With a single power transform, the best choice of the exponent is around 0.4, with average precision 0.884. A double power transform (with exponents 0.25 and 0.75, which can be computed through sqrt) improves the average precision to 0.900, which translates to a large improvement in contour detection accuracy.

Image Benchmarking Results. In Table 1 and Fig. 4 we show the precision-recall of our Sparse Code Gradients vs gPb on the BSDS500 benchmark. We conduct four sets of experiments, using color or grayscale images, with or without the globalization component (for which we use exactly the same setup as in gPb). Using Sparse Code Gradients leads to a significant improvement in accuracy in all four cases. The local version of our SCG operator, i.e. only using local contrast, is already better (F = 0.72) than gPb with globalization (F = 0.71). The full version, local SCG plus spectral gradient (computed from local SCG), reaches an F-measure of 0.739, a large step forward from gPb, as seen in the precision-recall curves in Fig. 4. On BSDS300, our F-measure is 0.715.

Table 1: F-measure evaluation on the BSDS500 benchmark [2], comparing to gPb on grayscale and color images, both for local contour detection as well as for global detection (i.e. combined with the spectral gradient analysis in [2]).

  BSDS500           ODS   OIS   AP
  local:
    gPb (gray)      .67   .69   .68
    SCG (gray)      .69   .71   .71
    gPb (color)     .70   .72   .71
    SCG (color)     .72   .74   .75
  global:
    gPb (gray)      .69   .71   .67
    SCG (gray)      .71   .73   .74
    gPb (color)     .71   .74   .72
    SCG (color)     .74   .76   .77

[Figure 4: Precision-recall curves of SCG vs gPb on BSDS500, for grayscale and color images. We make a substantial step beyond the current state of the art toward reaching human-level accuracy (green dot).]

Table 2: F-measure evaluation comparing our SCG approach to gPb on two additional image datasets with contour groundtruth: MSRC2 [26] and PASCAL2008 [6].

  MSRC2             ODS   OIS   AP
    gPb             .37   .39   .22
    SCG             .43   .43   .33

  PASCAL2008        ODS   OIS   AP
    gPb             .34   .38   .20
    SCG             .37   .41   .27

Table 3: F-measure evaluation on RGB-D contour detection using the NYU dataset (v2) [27]. We compare to gPb using the color image only, depth only, as well as color+depth.

  RGB-D (NYU v2)    ODS   OIS   AP
    gPb (color)     .51   .52   .37
    SCG (color)     .55   .57   .46
    gPb (depth)     .44   .46   .28
    SCG (depth)     .53   .54   .45
    gPb (RGB-D)     .53   .54   .40
    SCG (RGB-D)     .62   .63   .54

[Figure 5: Examples from the BSDS500 dataset [2]. (Top) Image; (Middle) gPb output; (Bottom) SCG output (this work). Our SCG operator learns to preserve fine details (e.g. windmills, faces, fish fins) while at the same time achieving higher precision on large-scale contours (e.g. back of zebras). (Contours are shown in double width for the sake of visualization.)]
We observe that SCG seems to pick up fine-scale details much better than gPb, hence the much higher recall rate, while maintaining higher precision over the entire range. This can be seen in the examples shown in Fig. 5. While our scale range is similar to that of gPb, the multi-scale pooling scheme allows the flexibility of learning the balance of scales separately for each codeword, which may help in detecting the details. The supplemental material contains more comparison examples.

In Table 2 we show the benchmarking results for two additional datasets, MSRC2 and PASCAL2008. Again we observe large improvements in accuracy, in spite of the somewhat different nature of the scenes in these datasets. The improvement on MSRC2 is much larger, partly because the images are smaller, hence the contours are smaller in scale and may be over-smoothed in gPb.

As for computational cost, using integral images, local SCG takes ~100 seconds to compute on a single-thread Intel Core i5-2500 CPU on a BSDS image. It is slower than, but comparable to, the highly optimized multi-thread C++ implementation of gPb (~60 seconds).

[Figure 6: Examples of RGB-D contour detection on the NYU dataset (v2) [27]. The five panels are: input image, input depth, image-only contours, depth-only contours, and color+depth contours. Color is good at picking up details such as photos on the wall, and depth is useful where color is uniform (e.g. corner of a room, row 1) or illumination is poor (e.g. chair, row 2).]

RGB-D Contour Detection. We use the second version of the NYU Depth Dataset [27], which has higher quality groundtruth than the first version. A median filter is applied to remove double contours (boundaries from two adjacent regions) within 3 pixels. For the RGB-D baseline, we use a simple adaptation of gPb: the depth values are in meters and are used directly as a grayscale image in the gPb gradient computation. We use a linear combination to put (soft) color and depth gradients together in gPb before non-max suppression, with the weight set from validation.

Table 3 lists the precision-recall evaluations of SCG vs gPb for RGB-D contour detection. All the SCG settings (such as scales and dictionary sizes) are kept the same as for BSDS. SCG again outperforms gPb in all the cases. In particular, we are much better for depth-only contours, for which gPb is not designed. Our approach learns the low-level representations of depth data fully automatically and does not require any manual tweaking. We also achieve a much larger boost by combining color and depth, demonstrating that the color and depth channels contain complementary information and are both critical for RGB-D contour detection. Qualitatively, it is easy to see that RGB-D combines the strengths of color and depth and is a promising direction for contour and segmentation tasks and indoor scene analysis in general [22]. Fig. 6 shows a few examples of RGB-D contours from our SCG operator. There are plenty of cases where color alone or depth alone would fail to extract contours for meaningful parts of the scenes, and color+depth succeeds.

5 Discussions

In this work we successfully showed how to learn and code local representations to extract contours in natural images. Our approach combined the proven concept of oriented gradients with powerful representations that are automatically learned through sparse coding. Sparse Code Gradients (SCG) performed significantly better than the hand-designed features that had been in use for a decade, and pushed contour detection much closer to human-level accuracy, as illustrated on the BSDS500 benchmark.
Compared to hand-designed features (e.g. Global Pb [2]), we maintain the high dimensional representation from pooling oriented neighborhoods and do not collapse it prematurely (such as by computing a chi-square distance at each scale). This passes a richer set of information into learning contour classification, where a double power transform effectively codes the features for linear classification. Compared to previous learning approaches (e.g. discriminative dictionaries in [16]), our use of multi-scale pooling and oriented gradients leads to much higher classification accuracy.

Our work opens up future possibilities for learning contour detection and segmentation. As we illustrated, there is a lot of information locally that is waiting to be extracted, and a learning approach such as sparse coding provides a principled way to do so, where rich representations can be automatically constructed and adapted. This is particularly important for novel sensor data such as RGB-D, for which we have less understanding but increasingly more need.
A Survey of Clustering Data Mining Techniques

Pavel Berkhin
Yahoo!, Inc.
pberkhin@

Summary. Clustering is the division of data into groups of similar objects. It disregards some details in exchange for data simplification. Informally, clustering can be viewed as data modeling concisely summarizing the data, and, therefore, it relates to many disciplines from statistics to numerical analysis. Clustering plays an important role in a broad range of applications, from information retrieval to CRM. Such applications usually deal with large datasets and many attributes. Exploration of such data is a subject of data mining. This survey concentrates on clustering algorithms from a data mining perspective.

1 Introduction

The goal of this survey is to provide a comprehensive review of different clustering techniques in data mining. Clustering is a division of data into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Representing data with fewer clusters necessarily loses certain fine details (akin to lossy data compression), but achieves simplification. It represents many data objects by few clusters, and hence, it models data by its clusters. Data modeling puts clustering in a historical perspective rooted in mathematics, statistics, and numerical analysis. From a machine learning perspective clusters correspond to hidden patterns, the search for clusters is unsupervised learning, and the resulting system represents a data concept. Therefore, clustering is unsupervised learning of a hidden data concept. Data mining applications add three complications to this general picture: (a) large databases, (b) many attributes, (c) attributes of different types. This imposes severe computational requirements on data analysis. Data mining applications include scientific data exploration, information retrieval, text mining, spatial databases, Web analysis, CRM, marketing, medical diagnostics, computational biology, and many others. They present real challenges to classic clustering algorithms.
These challenges led to the emergence of powerful, broadly applicable data mining clustering methods developed on the foundation of classic techniques. They are the subject of this survey.

1.1 Notations

To fix the context and clarify terminology, consider a dataset X consisting of data points (i.e., objects, instances, cases, patterns, tuples, transactions) $x_i = (x_{i1}, \cdots, x_{id})$, $i = 1:N$, in attribute space A, where each component $x_{il} \in A_l$, $l = 1:d$, is a numerical or nominal categorical attribute (i.e., feature, variable, dimension, component, field). For a discussion of attribute data types see [106]. Such point-by-attribute data format conceptually corresponds to an N × d matrix and is used by the majority of algorithms reviewed below. However, data of other formats, such as variable length sequences and heterogeneous data, are not uncommon.

The simplest subset in an attribute space is a direct Cartesian product of sub-ranges $C = \prod C_l \subset A$, $C_l \subset A_l$, called a segment (i.e., cube, cell, region). A unit is an elementary segment whose sub-ranges consist of a single category value, or of a small numerical bin. Describing the numbers of data points per every unit represents an extreme case of clustering, a histogram. This is a very expensive representation, and not a very revealing one. User driven segmentation is another commonly used practice in data exploration that utilizes expert knowledge regarding the importance of certain sub-domains. Unlike segmentation, clustering is assumed to be automatic, and so it is a machine learning technique.

The ultimate goal of clustering is to assign points to a finite system of k subsets (clusters). Usually (but not always) subsets do not intersect, and their union is equal to the full dataset with the possible exception of outliers:

$$X = C_1 \cup \cdots \cup C_k \cup C_{outliers}, \qquad C_i \cap C_j = \emptyset,\ i \neq j.$$

1.2 Clustering Bibliography at a Glance

General references regarding clustering include [110], [205], [116], [131], [63], [72], [165], [119], [75], [141], [107], [91]. A very good introduction to contemporary data mining clustering techniques can be found in the textbook [106].

There is a close relationship between clustering and many other fields. Clustering has always been used in statistics [10] and science [158]. The classic introduction into the pattern recognition framework is given in [64]. Typical applications include speech and character recognition. Machine learning clustering algorithms were applied to image segmentation and computer vision [117]. For statistical approaches to pattern recognition see [56] and [85]. Clustering can be viewed as a density estimation problem. This is the subject of traditional multivariate statistical estimation [197]. Clustering is also widely used for data compression in image processing, which is also known as vector quantization [89]. Data fitting in numerical analysis provides still another venue in data modeling [53]. This survey's emphasis is on clustering in data mining. Such clustering is characterized by large datasets with many attributes of different types.
Though we do not even try to review particular applications, many important ideas are related to the specific fields. Clustering in data mining was brought to life by intense developments in information retrieval and text mining [52], [206], [58], spatial database applications, for example, GIS or astronomical data, [223], [189], [68], sequence and heterogeneous data analysis [43], Web applications [48], [111], [81], DNA analysis in computational biology [23], and many others. They resulted in a large amount of application-specific developments, but also in some general techniques. These techniques and the classic clustering algorithms that relate to them are surveyed below.

1.3 Plan of Further Presentation

Classification of clustering algorithms is neither straightforward, nor canonical. In reality, different classes of algorithms overlap. Traditionally clustering techniques are broadly divided into hierarchical and partitioning. Hierarchical clustering is further subdivided into agglomerative and divisive. The basics of hierarchical clustering include the Lance-Williams formula, the idea of conceptual clustering, the now classic algorithms SLINK and COBWEB, as well as the newer algorithms CURE and CHAMELEON. We survey these algorithms in the section Hierarchical Clustering.

While hierarchical algorithms gradually (dis)assemble points into clusters (as crystals grow), partitioning algorithms learn clusters directly. In doing so they try to discover clusters either by iteratively relocating points between subsets, or by identifying areas heavily populated with data.

Algorithms of the first kind are called Partitioning Relocation Clustering. They are further classified into probabilistic clustering (EM framework, algorithms SNOB, AUTOCLASS, MCLUST), k-medoids methods (algorithms PAM, CLARA, CLARANS, and its extension), and k-means methods (different schemes, initialization, optimization, harmonic means, extensions). Such methods concentrate on how well points fit into their clusters and tend to build clusters of proper convex shapes.

Partitioning algorithms of the second type are surveyed in the section Density-Based Partitioning. They attempt to discover dense connected components of data, which are flexible in terms of their shape. Density-based connectivity is used in the algorithms DBSCAN, OPTICS, and DBCLASD, while the algorithm DENCLUE exploits space density functions. These algorithms are less sensitive to outliers and can discover clusters of irregular shape. They usually work with low-dimensional numerical data, known as spatial data. Spatial objects could include not only points, but also geometrically extended objects (algorithm GDBSCAN).
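As a small illustration of density-based partitioning (ours, not from the survey), scikit-learn's DBSCAN recovers two crescent-shaped clusters that no convex-shape method would separate, and marks outliers with the label -1; the parameter values are arbitrary.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two crescent-shaped clusters plus two far-away points: density-based
# connectivity recovers the irregular shapes; label -1 marks outliers.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
X = np.vstack([X, [[3.0, 3.0], [-3.0, -3.0]]])
labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
print(sorted(set(labels)))  # e.g. [-1, 0, 1]
```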
Some algorithms work with data indirectly by constructing summaries of data over the attribute space subsets. They perform space segmentation and then aggregate appropriate segments. We discuss them in the section Grid-Based Methods. They frequently use hierarchical agglomeration as one phase of processing. The algorithms BANG, STING, WaveCluster, and FC are discussed in this section. Grid-based methods are fast and handle outliers well. Grid-based methodology is also used as an intermediate step in many other algorithms (for example, CLIQUE, MAFIA).

Categorical data is intimately connected with transactional databases. The concept of similarity alone is not sufficient for clustering such data. The idea of categorical data co-occurrence comes to the rescue. The algorithms ROCK, SNN, and CACTUS are surveyed in the section Co-Occurrence of Categorical Data. The situation gets even more aggravated with the growth of the number of items involved. To help with this problem the effort is shifted from data clustering to pre-clustering of items or categorical attribute values. Developments based on hyper-graph partitioning and the algorithm STIRR exemplify this approach.

Many other clustering techniques have been developed, primarily in machine learning, that either have theoretical significance, are used traditionally outside the data mining community, or do not fit in the previously outlined categories. The boundary is blurred. In the section Other Developments we discuss the emerging direction of constraint-based clustering, the important research field of graph partitioning, and the relationship of clustering to supervised learning, gradient descent, artificial neural networks, and evolutionary methods.

Data mining primarily works with large databases. Clustering large datasets presents scalability problems reviewed in the section Scalability and VLDB Extensions. Here we talk about algorithms like DIGNET, about BIRCH and other data squashing techniques, and about Hoeffding or Chernoff bounds.

Another trait of real-life data is high dimensionality. Corresponding developments are surveyed in the section Clustering High Dimensional Data.
The trouble comes from a decrease in metric separation when the dimension grows. One approach to dimensionality reduction uses attribute transformations (DFT, PCA, wavelets). Another way to address the problem is through subspace clustering (algorithms CLIQUE, MAFIA, ENCLUS, OPTIGRID, PROCLUS, ORCLUS). Still another approach clusters attributes in groups and uses their derived proxies to cluster objects. This double clustering is known as co-clustering.

Issues common to different clustering methods are overviewed in the section General Algorithmic Issues. We talk about assessment of results, determination of the appropriate number of clusters to build, data preprocessing, proximity measures, and handling of outliers.

For the reader's convenience we provide a classification of clustering algorithms closely followed by this survey:

• Hierarchical Methods
  - Agglomerative Algorithms
  - Divisive Algorithms
• Partitioning Relocation Methods
  - Probabilistic Clustering
  - K-medoids Methods
  - K-means Methods
• Density-Based Partitioning Methods
  - Density-Based Connectivity Clustering
  - Density Functions Clustering
• Grid-Based Methods
• Methods Based on Co-Occurrence of Categorical Data
• Other Clustering Techniques
  - Constraint-Based Clustering
  - Graph Partitioning
  - Clustering Algorithms and Supervised Learning
  - Clustering Algorithms in Machine Learning
• Scalable Clustering Algorithms
• Algorithms For High Dimensional Data
  - Subspace Clustering
  - Co-Clustering Techniques

1.4 Important Issues

The properties of clustering algorithms we are primarily concerned with in data mining include:

• Type of attributes the algorithm can handle
• Scalability to large datasets
• Ability to work with high dimensional data
• Ability to find clusters of irregular shape
• Handling outliers
• Time complexity (we frequently simply use the term complexity)
• Data order dependency
• Labeling or assignment (hard or strict vs. soft or fuzzy)
• Reliance on a priori knowledge and user defined parameters
• Interpretability of results

Realistically, with every algorithm we discuss only some of these properties.
The list is in no way exhaustive. For example, as appropriate, we also discuss an algorithm's ability to work in a pre-defined memory buffer, to restart, and to provide an intermediate solution.

2 Hierarchical Clustering

Hierarchical clustering builds a cluster hierarchy or a tree of clusters, also known as a dendrogram. Every cluster node contains child clusters; sibling clusters partition the points covered by their common parent. Such an approach allows exploring data on different levels of granularity. Hierarchical clustering methods are categorized into agglomerative (bottom-up) and divisive (top-down) [116], [131]. An agglomerative clustering starts with one-point (singleton) clusters and recursively merges two or more of the most similar clusters. A divisive clustering starts with a single cluster containing all data points and recursively splits the most appropriate cluster. The process continues until a stopping criterion (frequently, the requested number k of clusters) is achieved.

Advantages of hierarchical clustering include:
• Flexibility regarding the level of granularity
• Ease of handling any form of similarity or distance
• Applicability to any attribute types

Disadvantages of hierarchical clustering are related to:
• Vagueness of termination criteria
• The fact that most hierarchical algorithms do not revisit (intermediate) clusters once constructed

The classic approaches to hierarchical clustering are presented in the subsection Linkage Metrics. Hierarchical clustering based on linkage metrics results in clusters of proper (convex) shapes. Active contemporary efforts to build cluster systems that incorporate our intuitive concept of clusters as connected components of arbitrary shape, including the algorithms CURE and CHAMELEON, are surveyed in the subsection Hierarchical Clusters of Arbitrary Shapes. Divisive techniques based on binary taxonomies are presented in the subsection Binary Divisive Partitioning. The subsection Other Developments contains information related to incremental learning, model-based clustering, and cluster refinement.

In hierarchical clustering our regular point-by-attribute data representation frequently is of secondary importance. Instead, hierarchical clustering frequently deals with the N × N matrix of distances (dissimilarities) or similarities between training points, sometimes called a connectivity matrix. So-called linkage metrics are constructed from elements of this matrix. The requirement of keeping a connectivity matrix in memory is unrealistic. To relax this limitation different techniques are used to sparsify (introduce zeros into) the connectivity matrix. This can be done by omitting entries smaller than a certain threshold, by using only a certain subset of data representatives, or by keeping with each point only a certain number of its nearest neighbors (for nearest neighbor chains see [177]). Notice that the way we process the original (dis)similarity matrix and construct a linkage metric reflects our a priori ideas about the data model.

With the (sparsified) connectivity matrix we can associate the weighted connectivity graph G(X, E) whose vertices X are data points, and edges E and their weights are defined by the connectivity matrix. This establishes a connection between hierarchical clustering and graph partitioning. One of the most striking developments in hierarchical clustering is the algorithm BIRCH. It is discussed in the section Scalable VLDB Extensions.
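A brief usage illustration (ours, not from the survey) of agglomerative hierarchical clustering with SciPy: build a dendrogram under a classic linkage metric and cut it to a requested number of clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)),   # singleton points that grow into...
               rng.normal(3, 0.3, (20, 2))])  # ...two well-separated clusters

# Agglomeration under a classic linkage metric:
# 'single' ~ SLINK, 'complete' ~ CLINK, 'average' ~ group average.
Z = linkage(X, method='average')                 # (N-1) x 4 dendrogram encoding
labels = fcluster(Z, t=2, criterion='maxclust')  # cut into k = 2 clusters
```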
Hierarchical clustering initializes a cluster system as a set of singleton clusters (agglomerative case) or a single cluster of all points (divisive case) and proceeds iteratively merging or splitting the most appropriate cluster(s) until the stopping criterion is achieved. The appropriateness of a cluster(s) for merging or splitting depends on the (dis)similarity of cluster(s) elements. This reflects a general presumption that clusters consist of similar points. An important example of dissimilarity between two points is the distance between them.

To merge or split subsets of points rather than individual points, the distance between individual points has to be generalized to the distance between subsets. Such a derived proximity measure is called a linkage metric. The type of linkage metric significantly affects hierarchical algorithms, because it reflects a particular concept of closeness and connectivity. Major inter-cluster linkage metrics [171], [177] include single link, average link, and complete link. The underlying dissimilarity measure (usually, distance) is computed for every pair of nodes with one node in the first set and another node in the second set. A specific operation such as minimum (single link), average (average link), or maximum (complete link) is applied to the pair-wise dissimilarity measures:

$$d(C_1, C_2) = \text{Op}\{d(x, y),\ x \in C_1,\ y \in C_2\}$$

Early examples include the algorithm SLINK [199], which implements single link (Op = min), Voorhees' method [215], which implements average link (Op = Avr), and the algorithm CLINK [55], which implements complete link (Op = max). It is related to the problem of finding the Euclidean minimal spanning tree [224] and has $O(N^2)$ complexity. The methods using inter-cluster distances defined in terms of pairs of nodes (one in each respective cluster) are called graph methods. They do not use any cluster representation other than a set of points. This name naturally relates to the connectivity graph G(X, E) introduced above, because every data partition corresponds to a graph partition.

Such methods can be augmented by so-called geometric methods, in which a cluster is represented by its central point. Under the assumption of numerical attributes, the center point is defined as a centroid or an average of two cluster centroids subject to agglomeration. This results in centroid, median, and minimum variance linkage metrics. All of the above linkage metrics can be derived from the Lance-Williams updating formula [145]:

$$d(C_i \cup C_j, C_k) = a(i)\,d(C_i, C_k) + a(j)\,d(C_j, C_k) + b\,d(C_i, C_j) + c\,|d(C_i, C_k) - d(C_j, C_k)|.$$

Here a, b, c are coefficients corresponding to a particular linkage. This formula expresses a linkage metric between a union of the two clusters and the third cluster in terms of the underlying nodes. The Lance-Williams formula is crucial to making the (dis)similarity computations feasible. Surveys of linkage metrics can be found in [170], [54]. When distance is used as a base measure, linkage metrics capture inter-cluster proximity. However, a similarity-based view that results in intra-cluster connectivity considerations is also used, for example, in the original average link agglomeration (Group-Average Method) [116].

Under reasonable assumptions, such as the reducibility condition (graph methods satisfy this condition), linkage metric methods suffer from $O(N^2)$ time complexity [177]. Despite the unfavorable time complexity, these algorithms are widely used. As an example, the algorithm AGNES (AGglomerative NESting) [131] is used in S-Plus. When the connectivity N × N matrix is sparsified, graph methods directly dealing with the connectivity graph G can be used. In particular, the hierarchical divisive MST (Minimum Spanning Tree) algorithm is based on graph partitioning [116].
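The Lance-Williams formula above is easy to exercise directly; a minimal sketch (ours), with the standard single-link and complete-link coefficient choices shown as examples:

```python
def lance_williams(d_ik, d_jk, d_ij, a_i, a_j, b, c):
    """Distance from the merged cluster C_i u C_j to C_k, per Lance-Williams."""
    return a_i * d_ik + a_j * d_jk + b * d_ij + c * abs(d_ik - d_jk)

# Single link (a_i = a_j = 0.5, b = 0, c = -0.5) reproduces min(d_ik, d_jk);
# flipping c to +0.5 gives complete link, i.e. max(d_ik, d_jk).
assert lance_williams(2.0, 5.0, 1.0, 0.5, 0.5, 0.0, -0.5) == 2.0
assert lance_williams(2.0, 5.0, 1.0, 0.5, 0.5, 0.0, +0.5) == 5.0
```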
2.1 Hierarchical Clusters of Arbitrary Shapes

For spatial data, linkage metrics based on Euclidean distance naturally generate clusters of convex shapes. Meanwhile, visual inspection of spatial images frequently discovers clusters with curvy appearance.

Guha et al. [99] introduced the hierarchical agglomerative clustering algorithm CURE (Clustering Using REpresentatives). This algorithm has a number of novel features of general importance. It takes special steps to handle outliers and to provide labeling in the assignment stage. It also uses two techniques to achieve scalability: data sampling (section 8), and data partitioning. CURE creates p partitions, so that fine granularity clusters are constructed in partitions first. A major feature of CURE is that it represents a cluster by a fixed number, c, of points scattered around it. The distance between two clusters used in the agglomerative process is the minimum of distances between two scattered representatives. Therefore, CURE takes a middle approach between the graph (all-points) methods and the geometric (one centroid) methods. Single and average link closeness are replaced by representatives' aggregate closeness. Selecting representatives scattered around a cluster makes it possible to cover non-spherical shapes. As before, agglomeration continues until the requested number k of clusters is achieved. CURE employs one additional trick: the originally selected scattered points are shrunk to the geometric centroid of the cluster by a user-specified factor α. Shrinkage suppresses the effect of outliers; outliers happen to be located further from the cluster centroid than the other scattered representatives. CURE is capable of finding clusters of different shapes and sizes, and it is insensitive to outliers. Because CURE uses sampling, estimation of its complexity is not straightforward. For low-dimensional data the authors provide a complexity estimate of $O(N_{sample}^2)$ defined in terms of the sample size. More exact bounds depend on input parameters: shrink factor α, number of representative points c, number of partitions p, and the sample size. Figure 1(a) illustrates agglomeration in CURE. Three clusters, each with three representatives, are shown before and after the merge and shrinkage. The two closest representatives are connected.

While the algorithm CURE works with numerical attributes (particularly low dimensional spatial data), the algorithm ROCK developed by the same researchers [100] targets hierarchical agglomerative clustering for categorical attributes. It is reviewed in the section Co-Occurrence of Categorical Data.

The hierarchical agglomerative algorithm CHAMELEON [127] uses the connectivity graph G corresponding to the K-nearest neighbor model sparsification of the connectivity matrix: the edges of the K most similar points to any given point are preserved, the rest are pruned. CHAMELEON has two stages. In the first stage small tight clusters are built to ignite the second stage. This involves graph partitioning [129]. In the second stage an agglomerative process is performed. It utilizes measures of relative inter-connectivity RI(C_i, C_j) and relative closeness RC(C_i, C_j); both are locally normalized by the internal interconnectivity and closeness of clusters C_i and C_j. In this sense the modeling is dynamic: it depends on data locally. Normalization involves certain non-obvious graph operations [129]. CHAMELEON relies heavily on graph partitioning implemented in the library HMETIS (see section 6).
The agglomerative process depends on user-provided thresholds. A decision to merge is made based on the combination

$$RI(C_i, C_j) \cdot RC(C_i, C_j)^{\alpha}$$

of local measures. The algorithm does not depend on assumptions about the data model. It has been proven to find clusters of different shapes, densities, and sizes in 2D (two-dimensional) space. It has a complexity of $O(Nm + N\log(N) + m^2\log(m))$, where m is the number of sub-clusters built during the first initialization phase. Figure 1(b) (analogous to the one in [127]) clarifies the difference with CURE. It presents a choice of four clusters (a)-(d) for a merge. While CURE would merge clusters (a) and (b), CHAMELEON makes the intuitively better choice of merging (c) and (d).

[Fig. 1. Agglomeration in Clusters of Arbitrary Shapes: (a) Algorithm CURE, (b) Algorithm CHAMELEON]

2.2 Binary Divisive Partitioning

In linguistics, information retrieval, and document clustering applications binary taxonomies are very useful. Linear algebra methods based on singular value decomposition (SVD) are used for this purpose in collaborative filtering and information retrieval [26]. Application of SVD to hierarchical divisive clustering of document collections resulted in the PDDP (Principal Direction Divisive Partitioning) algorithm [31]. In our notations, object x is a document, the l-th attribute corresponds to a word (index term), and a matrix X entry x_{il} is a measure (e.g. TF-IDF) of l-term frequency in a document x. PDDP constructs the SVD decomposition of the matrix

$$(X - e\bar{x}), \qquad \bar{x} = \frac{1}{N}\sum_{i=1:N} x_i, \qquad e = (1, \ldots, 1)^T.$$

This algorithm bisects data in Euclidean space by a hyperplane that passes through the data centroid orthogonal to the eigenvector with the largest singular value. A k-way split is also possible if the k largest singular values are considered. Bisecting is a good way to categorize documents and it yields a binary tree. When k-means (2-means) is used for bisecting, the dividing hyperplane is orthogonal to the line connecting the two centroids. The comparative study of SVD vs. k-means approaches [191] can be used for further references. Hierarchical divisive bisecting k-means was proven [206] to be preferable to PDDP for document clustering.

While PDDP or 2-means are concerned with how to split a cluster, the problem of which cluster to split is also important. Simple strategies are: (1) split each node at a given level, (2) split the cluster with the highest cardinality, and (3) split the cluster with the largest intra-cluster variance. All three strategies have problems. For a more detailed analysis of this subject and better strategies, see [192].

2.3 Other Developments

One of the early agglomerative clustering algorithms, Ward's method [222], is based not on a linkage metric, but on an objective function used in k-means. The merger decision is viewed in terms of its effect on the objective function.

The popular hierarchical clustering algorithm for categorical data COBWEB [77] has two very important qualities. First, it utilizes incremental learning. Instead of following divisive or agglomerative approaches, it dynamically builds a dendrogram by processing one data point at a time. Second, COBWEB is an example of conceptual or model-based learning. This means that each cluster is considered as a model that can be described intrinsically, rather than as a collection of points assigned to it. COBWEB's dendrogram is called a classification tree. Each tree node (cluster) C is associated with the conditional probabilities for categorical attribute-value pairs,

$$Pr(x_l = \nu_{lp} \mid C), \quad l = 1:d, \quad p = 1:|A_l|.$$

This easily can be recognized as a C-specific Naïve Bayes classifier. During the classification tree construction, every new point is descended along the tree and the tree is potentially updated (by an insert/split/merge/create operation). Decisions are based on the category utility [49]:

$$CU\{C_1, \ldots, C_k\} = \frac{1}{k}\sum_{j=1:k} CU(C_j),$$

$$CU(C_j) = \sum_{l,p}\left( Pr(x_l = \nu_{lp} \mid C_j)^2 - Pr(x_l = \nu_{lp})^2 \right).$$

Category utility is similar to the GINI index. It rewards clusters C_j for increases in predictability of the categorical attribute values ν_{lp}.
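A minimal sketch (ours, not from the survey) of the category utility just defined, for a flat partition of categorical rows:

```python
import numpy as np

def category_utility(clusters, data):
    """CU{C_1..C_k}: clusters is a list of index arrays, data an (N, d) array."""
    N, d = data.shape
    base = sum(np.mean(data[:, l] == v) ** 2
               for l in range(d) for v in set(data[:, l]))
    cu = 0.0
    for idx in clusters:
        sub = data[idx]
        within = sum(np.mean(sub[:, l] == v) ** 2
                     for l in range(d) for v in set(data[:, l]))
        cu += within - base          # CU(C_j)
    return cu / len(clusters)

data = np.array([['red', 'round'], ['red', 'round'], ['blue', 'square']])
print(category_utility([np.array([0, 1]), np.array([2])], data))  # ~0.889
```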
Being incremental, COBWEB is fast with a complexity of O(tN), though it depends non-linearly on tree characteristics packed into a constant t. There is a similar incremental hierarchical algorithm for all numerical attributes called CLASSIT [88]. CLASSIT associates normal distributions with cluster nodes. Both algorithms can result in highly unbalanced trees.

Chiu et al. [47] proposed another conceptual or model-based approach to hierarchical clustering. This development contains several different useful features, such as the extension of scalability preprocessing to categorical attributes, outlier handling, and a two-step strategy for monitoring the number of clusters including BIC (defined below). A model associated with a cluster covers both numerical and categorical attributes and constitutes a blend of Gaussian and multinomial models. Denote the corresponding multivariate parameters by θ. With every cluster C we associate the logarithm of its (classification) likelihood

$$l_C = \sum_{x_i \in C} \log(p(x_i \mid \theta)).$$

The algorithm uses maximum likelihood estimates for the parameter θ. The distance between two clusters is defined (instead of a linkage metric) as the decrease in log-likelihood

$$d(C_1, C_2) = l_{C_1} + l_{C_2} - l_{C_1 \cup C_2}$$

caused by merging the two clusters under consideration. The agglomerative process continues until the stopping criterion is satisfied. As such, determination of the best k is automatic. This algorithm has a commercial implementation (in SPSS Clementine). The complexity of the algorithm is linear in N for the summarization phase.

Traditional hierarchical clustering does not change a point's membership in once assigned clusters due to its greedy approach: after a merge or a split is selected it is not refined. Though COBWEB does reconsider its decisions, its
Lassen iQ GPS Module
Low-power, high-quality GPS solution for your mobile products

Key Features and Benefits
• Ultra-low power: 86 mW
• Trimble quality at low cost
• Aided GPS through TSIP for faster acquisition
• Dual sensitivity modes with automatic switching
• 12-channel simultaneous operation
• Supports NMEA 0183, TSIP, TAIP and DGPS

Trimble's Lassen® iQ module is one smart buy. It adds powerful, 12-channel GPS functionality to your mobile product in a postage-stamp-sized footprint with ultra-low power consumption and extreme reliability, all at a very economical price. Designed for portable handheld, battery-powered applications such as cell phones, pagers, PDAs, digital cameras, and many others, the module is also ideal for standard GPS applications such as tracking.

The 12-channel Lassen iQ module is fully compatible with Trimble's popular Lassen SQ module. Using Trimble's breakthrough, patented FirstGPS® architecture, the module delivers complete position, velocity and time (PVT) solutions for use in the host application.

Powerful Performance
The Lassen iQ module features two GPS signal sensitivity modes: Standard and Enhanced. With Enhanced mode enabled, the module automatically switches to higher sensitivity when satellite signals are weak. The module also supports TSIP download of critical startup information for fast acquisition. This aided GPS (A-GPS) startup provides hot start performance for each power-up.

The Lassen iQ module is the only stamp-sized GPS product that supports the four most popular protocols: DGPS (RTCM), TSIP (Trimble Standard Interface Protocol), TAIP (Trimble ASCII Interface Protocol) and NMEA 0183. The Lassen iQ module combines Trimble performance and quality with low cost. With an MTBF (mean time between failures) figure of 60 years, it is one of the most reliable GPS receivers on the market.

Hardware
A metal shield encloses the module for protection and ease of handling. The package has a small form factor (approximately 26 mm x 26 mm, including the shield). It typically requires less than 90 mW of power at 3.3 VDC. The highly integrated module is a miniature board containing the Trimble GPS hardware core based on the Colossus® RF ASIC and IO-TS digital signal processor (DSP), a 32-bit RISC CPU and flash memory.

Antennas
The Lassen iQ module is compatible with active, 3.3-VDC antennas. Three such antennas are available from Trimble and are recommended for use according to your application; see the reverse side for antenna details. The module provides both antenna open and short detection plus antenna short protection.

Starter Kit
The Lassen iQ Starter Kit provides everything you need to get started integrating state-of-the-art GPS capability into your application.

[Photo: Lassen iQ GPS receiver with metal shield]

Vibration
0.008 g²/Hz, 5 Hz to 20 Hz
0.05 g²/Hz, 20 Hz to 100 Hz
−3 dB/octave, 100 Hz to 900 Hz

Operating Humidity
5% to 95% R.H.
non-condensing, at +60° C

Enclosure
Metal enclosure with solder mounting tabs

Dimensions
26 mm W x 26 mm L x 6 mm H (1.02" W x 1.02" L x 0.24" H)

Weight
6.5 grams (0.2 ounce) including shield

Module
Lassen iQ module, in metal enclosure with solder mounting tabs

Starter Kit
Includes Lassen iQ module mounted on interface motherboard in a durable metal enclosure, AC/DC power converter, compact magnetic-mount GPS antenna, ultra-compact embedded antenna, serial interface cable, cigarette lighter adapter, TSIP, NMEA, and TAIP protocols, software toolkit and manual on CD-ROM

Antenna Transition Cable, MCX
RF cable for connecting antennas with MCX connector to on-module H.FL-RF connector. Cable length: 10 cm

Antenna Transition Cable, SMA
RF cable for connecting antennas with SMA connector to on-module H.FL-RF connector. Cable length: 12.9 cm

Ultra-Compact Embedded Antenna
3.3V active miniature unpackaged antenna. Cable length: 8 cm. Dimensions: 22 mm W x 21 mm L x 8 mm H (0.866" x 0.827" x 0.315"). Connector: HFL; mates directly to on-module RF connector

Compact Unpackaged Antenna
3V active micropatch unpackaged antenna. Cable length: 11 cm. Dimensions: 34.6 mm W x 29 mm L x 9 mm H (1.362" x 1.141" x 0.354"). Connector: MCX; mates through the optional RF transition cable to on-module RF connector

Compact Magnetic-Mount Antenna, MCX or SMA
3V active micropatch antenna with magnetic mount. Cable length: 5 m. Dimensions: 42 mm W x 50.5 mm L x 13.8 mm H (1.65" x 1.99" x 0.55"). Connectors: MCX or SMA; mates through the optional RF transition cable to the module RF connector

Specifications subject to change without notice.
© Copyright 2004, Trimble Navigation Limited. All rights reserved. The Globe and Triangle, Trimble, Colossus, FirstGPS, and Lassen are trademarks of Trimble Navigation Limited registered in the United States Patent and Trademark Office. All other trademarks are the property of their respective owners. TID 13442 (9/04)

• 12-channel simultaneous operation
• Ultra-low power consumption: less than 90 mW (27 mA) @ 3.3 V
• Dual sensitivity modes with automatic switching
• Aided GPS through TSIP
• Antenna open and short circuit detection and protection
• Compact size: 26 mm W x 26 mm L x 6 mm H
• Supports NMEA 0183, TSIP, TAIP, DGPS protocols
• Trimble quality at low cost

General
L1 (1575.42 MHz) frequency, C/A code, 12-channel, continuous tracking receiver

Update Rate
TSIP @ 1 Hz; NMEA @ 1 Hz; TAIP @ 1 Hz

Accuracy
Horizontal: <5 meters (50%), <8 meters (90%)
Altitude: <10 meters (50%), <16 meters (90%)
Velocity: 0.06 m/sec
PPS (static): ±50 nanoseconds

Acquisition (Autonomous Operation in Standard Sensitivity Mode)
Reacquisition: <2 sec (90%)
Hot Start: <10 sec (50%), <13 sec (90%)
Warm Start: <38 sec (50%), <42 sec (90%)
Cold Start: <50 sec (50%), <84 sec (90%)
Cold start requires no initialization. Warm start implies last position, time and almanac are saved by backup power.
Hot start implies ephemeris is also saved.

Operational (COCOM) Limits
Altitude: 18,000 m
Velocity: 515 m/s
Either limit may be exceeded, but not both

Connectors
I/O: 8-pin (2x4) 2 mm male header, micro terminal strip ASP 69533-01
RF: Low-profile coaxial connector H.FL-R-SMT (10), 50 Ohm

Serial Port
2 serial ports (transmit/receive)

PPS
3.3 V CMOS-compatible TTL-level pulse, once per second

Protocols
TSIP, TAIP, NMEA 0183 v3.0, RTCM SC-104

NMEA Messages
GGA, VTG, GLL, ZDA, GSA, GSV and RMC
Messages selectable by TSIP command; selection stored in flash memory

Prime Power
+3.0 VDC to 3.6 VDC (3.3 V typ.)

Power Consumption
Less than 90 mW (27 mA) @ 3.3 V

Backup Power
+2.5 VDC to +3.6 VDC (3.0 V typ.)

Ripple Noise
Max 60 mV, peak to peak, from 1 Hz to 1 MHz

Antenna Fault Protection
Open and short circuit detection and protection

Operating Temperature
−40° C to +85° C

Storage Temperature
−55° C to +105° C

Trimble Navigation Limited is not responsible for the operation or failure of operation of GPS satellites or the availability of GPS satellite signals.

Trimble Navigation Limited, Corporate Headquarters: 645 North Mary Avenue, Sunnyvale, CA
Trimble Navigation Europe Ltd, UK — Phone: 44 1256-760-150
Trimble Export Ltd, Korea — Phone: 82-2-5555-361 ***********************
Trimble Navigation Ltd, China — Phone: 86-21-6391-7814 /iQ
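As an illustration of consuming the module's NMEA 0183 output (our sketch, not from the datasheet), the following parses a GGA sentence of the kind listed above into decimal-degree position and fix information; the example sentence uses illustrative values only, not data captured from a receiver.

```python
def parse_gga(sentence: str) -> dict:
    """Parse a NMEA 0183 GGA sentence (one of the messages the module emits)."""
    fields = sentence.split('*')[0].split(',')
    lat = float(fields[2][:2]) + float(fields[2][2:]) / 60.0   # ddmm.mmmm
    lon = float(fields[4][:3]) + float(fields[4][3:]) / 60.0   # dddmm.mmmm
    if fields[3] == 'S':
        lat = -lat
    if fields[5] == 'W':
        lon = -lon
    return {'time_utc': fields[1], 'lat': lat, 'lon': lon,
            'fix_quality': int(fields[6]), 'num_sats': int(fields[7]),
            'altitude_m': float(fields[9])}

# Illustrative sentence only:
print(parse_gga('$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'))
```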
CMX-RTX Freescale Kinetis Challenge Version for the K60N Processor

Getting Started Guide

TRADEMARKS:
K60N is a trademark of Freescale Semiconductor, Inc.
CMX™ and CMX-RTX™ are trademarks of CMX Systems, Inc.

The versions of the tools used for CMX-RTX were:
IAR Embedded Workbench for ARM v6.20
Freescale TWR-K60N512 board

Contents: Installation · CMX-RTX Freescale Kinetis Challenge Version limitations · Getting Started · The example application · Interrupt Service Routines

Installation

The CMX-RTX evaluation software is distributed in the form of a setup.exe file. If you received your software via email in a .zip file, the zip file is password-protected only to make it more likely to make it through corporate email firewalls. The zip file password is "cmx".

Run setup.exe. The default installation directory, c:\cmx, may be changed to another location.

Please check /Freescale_Kinetis_Challenge for updates to the CMX-RTX evaluation software.

Caution: We recommend installing the software in a root-level directory. The reason for this is that you may experience problems linking if the software is installed in too "deep" a directory. Some of the tools have a finite limit on the length of a directory/path specification, and if this limit is exceeded, then you will experience problems. This kind of issue has become much more common in recent years, with the advent of long file names.

What is installed:

Folders:
• Manual – We highly recommend reading the manual, ver53dmo.pdf.
• CMXMOD – contains an evaluation version of the CMX-RTX library.
• Config – contains linker scripts for the sample project.
• Src – contains hardware specific files for the K60N512.

Files:
• Cmxdemo.c – an example program using the CMX-RTX evaluation version.
• Cmxsampa.c – SysTick handler for the sample project.
• Cxfuncs.h – header file for the CMX-RTX evaluation version.
• install.log – produced during the install; if you have an installation problem while running setup.exe, please email this file to CMX technical support.
• license.txt – the software license.
• unwise.exe – use this should you wish to uninstall the software.

Please email CMX at the following email address to report bugs or problems with the CMX-RTX Evaluation Version: ***********

CMX-RTX Freescale Kinetis Challenge Version limitations

The CMX-RTX Evaluation Version has some limitations that the full CMX-RTX does not have.
• There is a 30 minute time limit before the application will lock up. The time limit is based on a 10 millisecond system timer interrupt interval. After the 30 minutes has expired, the board may be reset for another 30 minutes of running time.
• The Freescale Kinetis Challenge version of the RTOS will count the number of times tasks are started or resumed. After 500,000 starts and resumes the application will lock up.
• The CMX-RTX Freescale Kinetis Challenge version is not intended to be used in conjunction with the CMX-MicroNet Freescale Kinetis Challenge version.
• No source code for the library or scheduler.
• Low power function and time slicing are disabled in the scheduler.
• No CMXBug, CMXTracker or CMX-UART support.
• Only function K_Task_Create_Stack may be used to create tasks;
Function K_Task_Create must not be used.
• The only CMX functions that may be called from interrupts are K_OS_Tick_Update, K_Intrp_Event_Signal and K_Intrp_Semaphore_Post.
• The RTOS configuration is fixed with the following values:
  Max tasks: 5
  Max resources: 4
  Max cyclic timers: 2
  Max messages: 32
  Max queues: 1
  Max mailboxes: 4
  Max semaphores: 4
  Interrupt stack size: 384
  RTC scale: 1
  Interrupt pipe size: 20
  CMX_RAM_INIT: 1

Getting Started
• The projects and example files are set up for the TWR-K60N512 board with the K60N processor in little-endian mode. If you are using a different board, modifications may need to be made to the projects and startup code.
• Unless a task never ends, function K_Task_End must be called at the end of the task. Failing to do so will cause serious, undesirable behavior.
• Do not change any of the header files in the main directory. The cmxlib evaluation library has been built using those files, so changing any of them would cause a conflict between the library code and the application code.

The example application
The example application shows how to set up CMX-RTX, call various CMX-RTX functions from tasks, and call certain CMX-RTX functions from an interrupt.
Function K_OS_Init must be called before calling any other CMX function. This initializes the CMX variables and prepares the OS for use.
Tasks may be created and started from other tasks, but at least one task must be created and triggered before the OS is started. Function K_Task_Create_Stack must be used to create tasks. On processors such as the Kinetis, which have a stack that grows down, the task pointer parameter to K_Task_Create_Stack must point to the top of the stack buffer used. Task stacks must be 8-byte aligned. Setting task stacks improperly, or using too small a task stack, will cause serious and potentially hard-to-find problems. Function K_Task_Start is used to start, or trigger, a task. See the CMX-RTX manual for details on these functions.
Before starting the OS, semaphores, queues, etc. may be created and cyclic timers started. This may also be done from tasks if desired. An interrupt that occurs on a regular interval and calls function K_OS_Tick_Update must be set up before calling K_OS_Start. The K60N port uses the SysTick interrupt for the timer tick interrupt. The sample ISR code also calls function K_Intrp_Semaphore_Post; that call may be removed in your own programs.
Function K_OS_Start starts the OS and does not return. The task with the highest priority that is able to start will be the first task that is run. In the example application that task is task1.

Interrupt Service Routines
To implement your own ISRs, first add the handler function name to the appropriate spot in the vector table. See isr.h and cmxsampa.c for an example ISR handler. Again, the only CMX functions that may be called from interrupts are K_OS_Tick_Update, K_Intrp_Event_Signal and K_Intrp_Semaphore_Post.
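To make the start-up flow described above concrete, here is a minimal application sketch. The parameter lists of the K_* calls are assumptions made for illustration only; check cxfuncs.h and the manual (ver53dmo.pdf) for the real prototypes, and compare with Cmxdemo.c.

/* Minimal CMX-RTX start-up sketch. The K_* argument lists shown here
   are assumptions; consult cxfuncs.h and the manual for the real ones. */
#include "cxfuncs.h"

/* Task stacks must be 8-byte aligned; the size here is illustrative. */
static unsigned long long task1_stack[64];
static unsigned char task1_slot;   /* hypothetical task-slot handle */

static void task1(void)
{
    /* ... application work ... */
    K_Task_End();  /* required unless the task never returns */
}

int main(void)
{
    K_OS_Init();   /* must be the first CMX call */

    /* The Kinetis stack grows down, so pass the TOP of the stack buffer.
       Argument order and types are assumptions for this sketch. */
    K_Task_Create_Stack(1 /* priority */, &task1_slot, task1,
                        (void *)&task1_stack[64]);
    K_Task_Start(task1_slot);  /* at least one task must be triggered */

    /* The SysTick ISR (see Cmxsampa.c) must call K_OS_Tick_Update. */
    K_OS_Start();  /* starts the scheduler; does not return */
    return 0;
}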
Network and Edge Reference System Architecture with FlexRAN™ Software – Setup on a Single Server Quick Start Guide
FlexRAN™ software on a single server setup based on a 4th Gen Intel® Xeon® Scalable processor platform

Introduction
The Reference System Architectures (RAs) are forward-looking Kubernetes-cluster cloud native reference platforms aiming to ease the development and deployment of network and edge solutions. The RAs are automatically deployed using Ansible playbooks that are designed to optimally support diverse use cases across network locations.
This document is a quick start guide for setting up and deploying FlexRAN™ software [1] as either a container in a POD [2] or on bare metal, to be used as part of a 5G end-to-end setup or in a stand-alone manner in Timer Mode and xRAN Mode, using the Container Bare Metal Reference Architecture (BMRA) on a single 4th Gen Intel® Xeon® Scalable processor-based platform.
The BMRA can be implemented using a variety of Configuration Profiles. Each Configuration Profile prescribes a set of hardware/software components and configuration specifications designed for specific use cases. This guide describes the implementation of BMRA with the Access Configuration Profile, designed specifically for vRAN and FlexRAN™ software setup. For more details on this setup and other Configuration Profiles, refer to the User Guides listed in the Reference Documentation section.

Hardware and Software BOM
The following hardware and software components are required for setting up FlexRAN™ software in Timer Mode on a single server:
Ansible host: Laptop or server running a Unix-based distribution
Target server:
• 4th Gen Intel® Xeon® Scalable processor with Intel® vRAN Boost – Quanta S6Q SDP, Archer City and Fox Creek Pass platform with inbuilt FEC accelerator, OR
• 4th Gen Intel® Xeon® Scalable processor server – Quanta S6Q SDP (1 socket SPR-MCC (6421N) platform)
• 3rd Gen Intel® Xeon® Scalable processor server – Coyote Pass SDP (1 socket, 32 core ICX-SP (6338N) platform)
FEC accelerator: Intel® vRAN Accelerator ACC100 Plugin Card on the target BBU server. Note: not required for the 4th Gen Intel® Xeon® Scalable processor with Intel® vRAN Boost
Ethernet adapter: Intel® Ethernet Network Adapter E810-CQDA2 or Intel® Ethernet Controller XL710 on the target server
OS: Ubuntu 22.04 LTS or RHEL 8.6 operating system with real-time kernel on the target server
BMRA software: https:///intel/container-experience-kits/
Note: The FlexRAN™ software deployment in Timer mode needs only one server platform. For testing the FlexRAN™ software in xRAN mode, two server platforms are required, where the second server emulates the Remote Radio Unit (oRU) as shown in Figure 1.
[1] Intel, the Intel logo, and FlexRAN™ are trademarks of Intel Corporation or its subsidiaries.
[2] FlexRAN™ software in a POD is only supported on the 3rd Gen Intel® Xeon® Scalable processor server of CPU SKU 1 socket 32 core 6338N in this release.

Figure 1: Example of xRAN test setup using FlexRAN™ software
For details of the software BOM for the FlexRAN™ software, refer to the BMRA user guide listed in the Reference Documentation section.

Getting Started
Download the following files from the Intel® Developer Zone portal:
FlexRAN-22.11-L1.tar.gz_part0: https:///v1/dl/getContent/763142
FlexRAN-22.11-L1.tar.gz_part1: https:///v1/dl/getContent/763143
dpdk_patch-22.11.patch.zip: https:///v1/dl/getContent/763144
Note: The files above are only needed if you deploy FlexRAN™ software on the host and not as a container.
To obtain the files, make sure you have an account on the Intel Developer Zone portal. The files can be downloaded to your laptop and later transferred to the Linux server as mentioned in the steps below.

Step 1 – Set Up the System
Refer to Network and Edge Bare Metal Reference System Architecture User Guide, Section 6.1.
The steps below assume that both the Ansible host and the target server are running Ubuntu as the operating system. For RHEL, use 'yum' or 'dnf' as the package manager instead of 'apt'.

Ansible Host
1. Install necessary packages (some might already be installed):
# sudo apt update
# sudo apt install -y python3 python3-pip openssh-client git build-essential
# pip3 install --upgrade pip
2. Generate an SSH keypair if needed (check /root/.ssh/):
# ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
3. Copy the public key to the target server:
# ssh-copy-id root@<target IP>
4. Verify password-less connectivity to the target server:
# ssh root@<target IP>

Target Server(s)
The following steps are required for all the target nodes: the FlexRAN™ software node and the oRU node.
1. Install Ubuntu 22.04 or RHEL 8.6 with the Real-Time (RT) kernel. You can follow the steps here as a reference for Ubuntu.
2. Verify that the kernel is tagged as a real-time kernel:
# uname -ri
5.15.0-1015-realtime x86_64
3. Install necessary packages (some might already be installed):
# sudo apt install -y python3 openssh-server lshw
4. As part of the configuration in Step 3, information about PCI devices for SR-IOV and the FEC accelerator must be specified.
5. Find the relevant network PCI IDs (bus:device.function) using 'lspci' and note down the IDs for later, when configuring host_vars on the Ansible host:
# lspci | grep Eth
18:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 01)
18:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 01)
6. Find the FEC accelerator card's PCI IDs (domain:bus:device.function) using 'lspci', confirm that the device ID is '0d5c', and note it down for later, when configuring host_vars on the Ansible host:
# lspci -D | grep -i acc
0000:31:00.0 Processing accelerators: Intel Corporation Device 0d5c
7. (Optional) In case the system has a static IP and doesn't lose its IP after reboot, set:
# enable_dhclient_systemd_service: false
Note: Steps 8 and 9 below are for the bare metal (BM) deployment of FlexRAN™ software on the target server and are not needed for POD deployment.
8. Copy the FlexRAN™ software packages and merge them into one final package:
# mkdir -p /opt/cek/intel-flexran/
# cat FlexRAN-22.11-L1.tar.gz_part0 FlexRAN-22.11-L1.tar.gz_part1 > /opt/cek/intel-flexran/FlexRAN-22.11.tar.gz
9. Extract the FlexRAN-22.11 software, follow the ReadMe.txt and install the FlexRAN™ software:
# cd /opt/cek/intel-flexran/
# tar -xvf FlexRAN-22.11.tar.gz
# cat ReadMe.txt
# ./extract.sh
Note: During the installation, all EULAs must be reviewed and manually accepted on the terminal screen.

Step 2 – Download and Install
Refer to Network and Edge Bare Metal Reference System Architecture User Guide: Section 2.5.

Ansible Host
1. Download the source code from the GitHub repository for the RA server:
# git clone https:///intel/container-experience-kits/
# cd container-experience-kits
# git checkout v23.02
# git submodule update --init
2. Install the requirements needed by the deployment scripts:
# pip3 install -r requirements.txt
3. Unzip and copy the DPDK patch:
# mkdir -p /opt/patches/flexran/dpdk-21.11/
# cp dpdk_patch-22.11.patch /opt/patches/flexran/dpdk-21.11/
Note: The above step is not needed for FlexRAN™ software in a POD deployment.

Step 3 – Configure
Refer to Network and Edge Bare Metal Reference System Architecture User Guide: Section 13.3.
The Access Edge configuration profile is used for FlexRAN™ software deployment.

Configuring BMRA for FlexRAN™ Software
Ansible Host
1. Generate the configuration files:
# export PROFILE=access
# make k8s-profile PROFILE=${PROFILE} ARCH=spr
2. Update the inventory.ini file to match the target server's hostname. The values for <bbu hostname> and <target IP> must be updated to match the target system. For xRAN test mode, the oRU node is also required.
# cd container-experience-kits
# vim inventory.ini
[all]
<bbu hostname> ansible_host=<bbu IP> ip=<bbu IP> ansible_user=root
<oru hostname> ansible_host=<oru IP> ip=<oru IP> ansible_user=root
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
[vm_host]
[kube_control_plane]
<bbu hostname>
[etcd]
<bbu hostname>
[kube_node]
<bbu hostname>
[oru]
<oru hostname>
[k8s_cluster:children]
kube_control_plane
kube_node
[all:vars]
ansible_python_interpreter=/usr/bin/python3
Note: The oRU node is needed only for xRAN test mode in BM deployment and can be skipped/commented out for Timer mode testing.
3. Update the host_vars filename(s) with the target machine's hostname(s):
# cp host_vars/node1.yml host_vars/<bbu hostname>.yml
# cp host_vars/node1.yml host_vars/<oru hostname>.yml   # in case of xRAN test mode in BM
To utilize features depending on SR-IOV and the FEC accelerator, host_vars must be updated with information about the PCI devices on the target server. The example below can be used as a reference for the configuration but should be updated to match the correct PCI IDs of the target server(s).
4. Update host_vars/<bbu_hostname>.yml with PCI device information specific to the target server(s). You need 2 PFs and a minimum of 4 VFs per PF.
## host_vars/<bbu hostname>.yml ##
dataplane_interfaces:
  - bus_info: "18:00.0"
    pf_driver: "iavf"
    default_vf_driver: "vfio-pci"
    sriov_numvfs: 4
  - bus_info: "18:00.1"
    pf_driver: "iavf"
    default_vf_driver: "vfio-pci"
    sriov_numvfs: 4
Note: Be sure to remove the square brackets [ ] that follow the 'dataplane_interfaces' configuration option by default.
5. Make the changes below to enable the DPDK patch and add the FEC accelerator card in host_vars/<bbu_hostname>.yml:
## host_vars/<bbu hostname>.yml ##
# Wireless FEC H/W Accelerator Device (e.g. ACC100) PCI ID
fec_acc: "dddd:bb:ss.f"
dpdk_local_patches_dir: "/opt/patches/flexran"
dpdk_local_patches_strip: 1
6. Make sure that QAT is turned off on the target(s) in host_vars/<bbu_hostname>.yml:
## host_vars/<bbu hostname>.yml ##
update_qat_drivers: false
openssl_install: false
7. Make sure the parameters below are set correctly in group_vars/all.yml:
## group_vars/all.yml ##
profile_name: access
configured_arch: spr
preflight_enabled: true
intel_sriov_fec_operator_enabled: true
8. Add the target hostname as a power_node in group_vars/all.yml for intel_power_manager:
## group_vars/all.yml ##
power_nodes: ["<bbu_hostname>"]
9. Set the FlexRAN™ test mode in group_vars/all.yml as per your testing need:
## group_vars/all.yml ##
intel_flexran_enabled: true    # if true, deploy FlexRAN
intel_flexran_mode: "timer"    # supported values are "timer" and "xran"
10. Set the FlexRAN™ deployment mode as HOST or POD in group_vars/all.yml based on the deployment model:
## group_vars/all.yml ##
intel_flexran_type: "host"     # supported values are "host" and "pod"
11. Set the network interfaces below in group_vars/all.yml for xRAN testing mode (ignore them for Timer mode tests). These need to be set only for xRAN test mode; refer to Figure 1 for more information.
intel_flexran_bbu_front_haul: "0000:43:00.0"
intel_flexran_bbu_ptp_sync: "0000:43:00.1"
intel_flexran_oru_front_haul: "0000:4b:00.0"
intel_flexran_oru_ptp_sync: "0000:4b:00.1"
12. If the server is behind a proxy, update group_vars/all.yml by updating and uncommenting the lines for http_proxy, https_proxy, and additional_no_proxy:
## Proxy configuration ##
http_proxy: ":port"
https_proxy: ":port"
additional_no_proxy: ",mirror_ip"
13. (Optional) It is recommended that you check the dependencies of components enabled in group_vars and host_vars with the packaged dependency checker:
# ansible-playbook -i inventory.ini playbooks/preflight.yml
14. Apply the patch for the Kubespray submodule:
# ansible-playbook -i inventory.ini playbooks/k8s/patch_kubespray.yml

Step 4 – Deploy
Refer to Network and Edge Bare Metal Reference System Architecture User Guide: Section 2.5.5.
Ansible Host
Now the RA can be deployed using the following command:
# ansible-playbook -i inventory.ini playbooks/${PROFILE}.yml

Step 5 – Validate
Refer to Network and Edge Bare Metal Reference System Architecture User Guide: Section 5.
Ansible Host
1. To interact with the Kubernetes CLI (kubectl), start by connecting to the target node in the cluster, which can be done using the following command:
# ssh root@<target ip>
2. Once connected, the status of the Kubernetes cluster can be checked:
# kubectl get nodes -o wide
# kubectl get pods --all-namespaces
Deployment of FlexRAN™ software to be used in an end-to-end network is concluded here.
Stand-alone Timer mode and xRAN testing are described below.

Target Server
5.1 FlexRAN™ software on bare metal validation steps
Testing FlexRAN™ software in Timer Mode on the target:
You will need two terminal windows on the target for running the FlexRAN™ software L1 and L2 applications.
1. Run the FlexRAN™ software L1 app:
# cd /opt/cek/intel-flexran/
# source set_env_var.sh -d
# cd bin/nr5g/gnb/l1
# ./l1.sh -e
2. Open another terminal on the target to run the Test MAC app:
# cd /opt/cek/intel-flexran/
# source set_env_var.sh -d
# cd bin/nr5g/gnb/testmac
# ./l2.sh --testfile=spr-sp-eec/sprsp_eec_mu0_10mhz_4x4_hton.cfg

Testing FlexRAN™ software in xRAN mode:
You will need three terminal windows on the target for running the FlexRAN™ software in xRAN mode.
1. Run the FlexRAN™ software L1 app:
# cd /opt/cek/intel-flexran/
# source set_env_var.sh -d
# cd bin/nr5g/gnb/l1/orancfg/sub3_mu0_10mhz_4x4/gnb
# ./l1.sh -oru
2. Open another terminal on the target to run the Test MAC app:
# cd /opt/cek/intel-flexran/
# source set_env_var.sh -d
# cd bin/nr5g/gnb/testmac
# ./l2.sh --testfile=../l1/orancfg/sub3_mu0_10mhz_4x4/gnb/testmac_clxsp_mu0_10mhz_hton_oru.cfg
3. You can then start the oRU server with the commands below:
# cd /opt/cek/intel-flexran/bin/nr5g/gnb/l1/orancfg/sub3_mu0_10mhz_4x4/oru
# ./run_o_ru.sh

5.2 FlexRAN™ in POD validation steps (only supported on the 3rd Gen Intel® Xeon® Scalable processor server)
You can find the FlexRAN™ POD name using the command below:
# kubectl get pods -A | grep flexran
You can check the status of the FlexRAN™ container applications running in the POD using the command below:
# kubectl describe pod <flexran_pod-name>

Testing FlexRAN™ software in Timer Mode in POD:
Once the containers are created in the POD, the Timer mode test will already be running.
1. The status of the L1 app can be checked using the command below:
# kubectl logs -f <flexran-pod-name> -c <flexran-l1-app>
For example: kubectl logs -f flexran-dockerimage-release -c flexran-l1app
2. The status of the L2 Test MAC app can be checked using the command below:
# kubectl logs -f <flexran-pod-name> -c <flexran-testmac-app>
For example: kubectl logs -f flexran-dockerimage-release -c flexran-testmac

Testing FlexRAN™ software in xRAN mode in POD:
You will need three terminal windows on the target for running the FlexRAN™ software in xRAN mode.
1. (Terminal 1) Run the FlexRAN™ software L1 app:
# kubectl exec -it <flexran-pod-name> -- bash
# cd flexran/bin/nr5g/gnb/l1/orancfg/sub3_mu0_10mhz_4x4/gnb/
# ./l1.sh -oru
2. (Terminal 2) Open another terminal on the target to run the Test MAC app:
# kubectl exec -it <flexran-pod-name> -- bash
# cd flexran/bin/nr5g/gnb/testmac
# ./l2.sh --testfile=testmac_clxsp_mu0_10mhz_hton_oru.cfg
3. (Terminal 3) Open another terminal and then start the oRU server:
# kubectl exec -it <flexran-pod-name> -- bash
# cd flexran/bin/nr5g/gnb/l1/orancfg/sub3_mu0_10mhz_4x4/oru/
# ./run_o_ru.sh

Reference Documentation
The Network and Edge Bare Metal Reference System Architecture User Guide provides information and a full set of installation instructions for a BMRA.
The Network and Edge Reference System Architectures Portfolio User Manual provides additional information for the Reference Architectures, including a complete list of reference documents.
The Intel FlexRAN™ Docker Hub provides additional information on running the FlexRAN™ software in a POD.
Other collateral, including technical guides and solution briefs that explain in detail the technologies enabled in the
Reference Architectures, is available at the following location: Network & Edge Platform Experience Kits.

Document Revision History
Revision 001, July 2022: Initial release.
Revision 002, October 2022: Updated Intel® FlexRAN™ software version to 22.07.0 with xRAN test mode and RHEL 8.6 RT kernel support.
Revision 003, December 2022: Support for 4th Gen Intel® Xeon® Scalable processor with Intel® vRAN Boost CPU; Intel® FlexRAN™ software version updated to 22.07.3.
Revision 004, March 2023: Updated Intel® FlexRAN™ software version to 22.11 and added support for running FlexRAN™ software in a POD on the 3rd Gen Intel® Xeon® Scalable processor server.

No product or component can be absolutely secure.
Intel technologies may require enabled hardware, software, or service activation.
Intel does not control or audit third-party data. You should consult other sources to evaluate accuracy.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.
0323/DN/WIT/PDF 737687-004US
Usage of Barert
Barert, as a natural language processing tool, has a wide range of application areas. In this article, we will walk step by step through how to use Barert and how it is applied in different scenarios.
Step 1: Get to know Barert
Barert is a natural language processing tool based on machine learning. It uses deep learning models and pretrained neural networks to improve performance on text-related tasks. Barert can be used in multiple areas, such as text classification, named entity recognition, sentiment analysis, and question answering systems.
Step 2: Installation and setup
To use Barert, you first need to install the corresponding libraries. Barert can be used from Python, and the related libraries can be installed with the pip command. After the installation is complete, you also need to download the pretrained model files; these can be obtained from the official website or from GitHub. Once downloaded, the model files need to be placed in the designated path.
Step 3: Importing the library and the model
In Python, the Barert library can be imported with an import statement. Then the pretrained model needs to be loaded. You can use Barert's load_model function and pass the path of the model file to it as a parameter. Once the model has been loaded, Barert can be used to process text data.
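As a concrete illustration of this step, here is a minimal Python sketch. The `barert` package name, the model path, and the `encode` method are hypothetical, mirroring only the load_model workflow described above.

# Minimal sketch; the `barert` package and its API are hypothetical,
# mirroring the load_model function described in this article.
import barert

# Load the pretrained model from the path where the downloaded files were placed.
model = barert.load_model("/path/to/pretrained/model")

# Hypothetical call that turns a sentence into a feature vector.
vector = model.encode("This is an example sentence.")
print(len(vector))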
Step 4: Text preprocessing
Before processing text with Barert, the raw text usually needs to be preprocessed. This includes removing special characters, punctuation and stop words, performing tokenization, and so on. These operations can be implemented with Python's string-handling functions or with other text processing libraries.
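A minimal sketch of such preprocessing using only the Python standard library (the stop-word list here is a toy example; use a full list in practice):

import re

STOPWORDS = {"the", "a", "an", "of", "and", "is"}  # toy stop-word list

def preprocess(text):
    """Lowercase, strip punctuation/special characters, tokenize, drop stop words."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", " ", text)  # replace punctuation with spaces
    tokens = text.split()
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("Hello, world! This is an example."))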
Step 5: Applying Barert
Barert supports a variety of natural language processing tasks; several of them are introduced below.
1. Text classification
Text classification is the task of assigning text data to different categories or labels. Barert can be used for text classification by training a classifier model and using the pretrained Barert model as a feature extractor. During training, the input corpus is preprocessed and the processed text data is fed into the Barert model. Different classifier algorithms can be used to train the model, such as naive Bayes or support vector machines.
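A sketch of this pipeline, reusing the hypothetical `barert` model from the earlier sketch as a feature extractor and scikit-learn (an assumed choice, not named in this article) for the classical classifier:

# Sketch only: `barert` and its encode() method are hypothetical;
# scikit-learn is an assumed choice for the classical classifier.
import numpy as np
from sklearn.svm import SVC
import barert

model = barert.load_model("/path/to/pretrained/model")

texts = ["the product works great", "terrible customer service"]  # toy training data
labels = [1, 0]                                                   # 1 = positive, 0 = negative

# Use the pretrained Barert model purely as a feature extractor.
features = np.array([model.encode(t) for t in texts])

clf = SVC()  # a support vector machine, as mentioned above
clf.fit(features, labels)
print(clf.predict(np.array([model.encode("works really great")])))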
2. Named entity recognition
Named entity recognition identifies named entities in text, such as person names, place names and organization names. Barert can be used for named entity recognition by training a named entity recognition model and using the pretrained Barert model as a feature extractor.
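A corresponding sketch for token-level features; the `encode_tokens` method is hypothetical, and a real pipeline would feed these vectors to a token-level classifier (for example a softmax layer or a CRF):

# Sketch only: `barert` and encode_tokens() are hypothetical.
import barert

model = barert.load_model("/path/to/pretrained/model")

tokens = ["Apple", "opened", "a", "store", "in", "Shanghai"]
token_vectors = model.encode_tokens(tokens)  # one feature vector per token

# A trained token classifier would map each vector to a tag such as ORG or LOC.
for token, vec in zip(tokens, token_vectors):
    print(token, len(vec))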
SILICA | The Engineers of Distribution.
SPEAr® Embedded Processors
ARM®9 and Cortex™-A9 based MPU family: Powerful Processing and Flexibility to serve a broad range of applications.

Contents
SPEAr® – Embedded Microprocessor: SPEAr® Devices, based on ARM® Core Architecture, offer Substantial Processing Power and Wide Peripheral Support
SPEAr1310 – Embedded Microprocessor: Dual ARM® Cortex™-A9 Cores Enable High-Performance, High-connectivity and Industrial Applications
SPEAr1340 – Embedded Microprocessor: Dual ARM® Cortex™-A9 Core eMPU for User Interfaces/Multimedia in Computing and Industrial Applications
SPEAr300: SPEAr® Embedded MPU with ARM926EJ-S™ Core – A Smart Choice for VoIP, HMI and Security Applications
SPEAr310: SPEAr® Embedded MPU with ARM926EJ-S™ Core – A Smart Choice for Telecom and Connectivity Applications
SPEAr320: SPEAr® Embedded MPU with ARM926EJ-S™ Core – A Smart Choice for Factory Automation and HMI oriented Applications
SPEAr600: SPEAr® Embedded MPU with Dual ARM926EJ-S™ Core

SPEAr® – Embedded Microprocessor
SPEAr® Devices, based on ARM Core Architecture, offer Substantial Processing Power and Wide Peripheral Support

Embedded applications today demand increasingly higher levels of performance and power efficiency for computing, communication, control, security and multimedia. ST's SPEAr® embedded MPUs meet these challenges head-on with state-of-the-art architecture, silicon technology and intellectual property, targeting networked devices used for communication, display and control of a broad range of applications.
The SPEAr family of embedded microprocessors are based on ARM cores: a single ARM926EJ-S core for the SPEAr300 series, dual ARM926EJ-S cores for the SPEAr600, and dual Cortex™-A9 cores for the SPEAr1300 series.

Key Features
• The family presents a scalable processing power range, depending on the type and number of cores used
• Within a series, each device targets a specific application segment, and offers peripherals and controllers in line with this specialization
• All SPEAr embedded microprocessors embed the external memory management function via a dynamic memory controller

Key Benefits
• ST's low-power technology makes SPEAr microprocessors extremely power-efficient, permitting portable applications to run longer without recharging, saving operating costs for end customers and allowing your applications to meet the most stringent regulatory standards
• Standard core architecture is supported by a wide range of 3rd party tool providers, for easy development

SPEAr Family of Embedded Microprocessors

SPEAr1310 – Embedded Microprocessor
ST's SPEAr1310 is a part of the growing SPEAr family of embedded MPUs for networking. It offers an unprecedented combination of processing performance and extreme power reduction control for next-generation communication applications.
The SPEAr1310 is based on ARM's new multicore technology (Cortex-A9 SMP/AMP), and is manufactured using ST's 55 nm HCMOS low-power silicon process.

Dual ARM® Cortex™-A9 Cores Enable High-Performance, Extended Connectivity for Industrial and Embedded Computer oriented Applications

Key Features
• CPU subsystem
  • Dual ARM Cortex-A9 cores, 600 MHz
  • Supports both symmetric (SMP) and asymmetric (AMP) multiprocessing
  • 32 + 32-Kbyte L1 instruction/data cache per core with parity check
  • Shared 512-Kbyte L2 cache (ECC protected)
  • Accelerator coherence port
• Bus: 64-bit multilayer network-on-chip
• Memories
  • 32-Kbyte boot ROM
  • 32-Kbyte internal SRAM
  • Multiport controller (MPMC) for external DDR2-800/DDR3-1066 with ECC
  • Controller (FSMC) for external Flash and SRAM
  • Controller (SMI) for external serial NOR Flash
• Controls external peripherals
  • TFT LCD display up to 1920 x 1080 (60 Hz)
  • Touchscreen I/F
  • 9 x 9 keyboard
  • Memory card interface
• Connectivity
  • Gigabit (with IEEE1588) and Fast Ethernet ports
  • 3x PCIe 2.0/SATA
  • 3x USB 2.0 (Host/OTG)
  • 2x CAN 2.0 a/b interfaces
  • 2x HDLC RS485
  • I²S, UART, I²C and SPI
  • Expansion interface (EXPI)
• Security: C3 cryptographic accelerator
• Power saving
  • Power islands for leakage reduction
  • IP clock gating for dynamic power reduction
  • Dynamic frequency scaling
• Package: PBGA628 (23 x 23 mm², 0.8 mm pitch)

Overview
The SPEAr1310 combines two ARM Cortex-A9 cores with a DDR3 (third-generation, double-data-rate) memory interface. Together with ST's low-power 55 nm HCMOS process technology, the SPEAr1310 delivers high computing power and customisability for a variety of embedded applications, with a high degree of cost competitiveness. The dual processors support both fully symmetric and asymmetric operations, at speeds of 600 MHz (industrial worst-case conditions) for an equivalent of 3000 DMIPS.
In addition to unrivalled low power and multiprocessing capabilities, this new eMPU offers the innovative network-on-chip (NoC) technology. NoC is a flexible communications architecture that enables multiple and different traffic profiles while maximising data throughput in the most performance- and power-efficient way.
Equipped with an integrated DDR2/DDR3 memory controller and a full set of connectivity peripherals, including USB, SATA, PCIe (with integrated PHY) and Giga Ethernet MAC, the SPEAr1310 targets high-performance, embedded-control applications across the communication, computer peripherals and industrial automation markets. Cache memory coherency with hardware accelerators and I/O blocks increases throughput and simplifies software development. The accelerator coherency port (ACP), together with the device's NoC routing capabilities, addresses the latest application requirements for hardware acceleration and I/O performance. ECC (Error Correction Coding) protection against soft and hard errors on both DRAM and L2 cache memories significantly improves the mean-time-between-failures for enhanced reliability.

SPEAr1310 Block Diagram (memory interfaces, connectivity interfaces, application-specific IPs)

SPEAr1340 – Embedded Microprocessor
ST's SPEAr1340 is a part of the growing SPEAr family of embedded MPUs. Combining dual Cortex™-A9 cores with the ARM® Mali-200 GPU, it targets applications ranging from high-resolution video conferencing and security cameras to web-connected devices.
The SPEAr1340 is based on ARM's new multi-core technology (Cortex™-A9 SMP/AMP), and manufactured with ST's 55 nm HCMOS low-power silicon process.

Dual ARM® Cortex™-A9 Core eMPU for User Interfaces/Multimedia in Computing and Industrial Applications

Key Features
• CPU subsystem
  • Dual ARM® Cortex™-A9 cores, 600 MHz
  • Supports both symmetric (SMP) and asymmetric (AMP) multiprocessing
  • 32 + 32-Kbyte L1 cache per core
  • Shared 512-Kbyte L2 cache
  • Accelerator coherence port
• Bus: 64-bit multilayer NoC
• Memories
  • 32-Kbyte boot ROM
  • 32-Kbyte internal SRAM
  • Multiport controller (MPMC) for external DDR2-800/DDR3-1066
  • Controller (FSMC) for external Flash and SRAM
  • Controller (SMI) for external serial NOR Flash
• Connectivity
  • Giga/Fast Ethernet
  • 1x PCIe 2.0/SATA
  • 3x USB 2.0 (Host/OTG)
  • I²S, UART, and I²C
• Controls external peripherals
  • TFT LCD display up to 1920 x 1080 (60 Hz)
  • Touchscreen I/F
  • 9 x 9 keyboard
  • Memory card interface
• Multimedia
  • Mali-200 2D/3D GPU, up to 1080p, OpenGL ES 2.0, OpenVG 2.0
  • Multi-standard HD video decoder and encoder, up to 1080p
  • Digital video input port with alternate configuration for 4 camera interfaces
  • 7.1 multichannel surround audio
• Security: C3 cryptographic accelerator
• Power saving
  • Power islands for leakage reduction
  • IP clock gating for dynamic power reduction
  • Dynamic frequency scaling
• Package: PBGA628 (23 x 23 mm², 0.8 mm pitch)

Overview
ST's SPEAr1340 integrates a powerful ARM Mali-200 graphics processing unit for advanced 2D and 3D acceleration for user interfaces, navigation, browsing and gaming. The new device also embeds a hardware video encoder and a decoder supporting major compression standards (including H.264 and AVS), with video resolution up to 1080p and 30 frames per second. These capabilities enable multiple concurrent video flows in applications like surveillance and video-conferencing.
Manufactured in ST's low-power 55 nm HCMOS (high-speed CMOS) process technology, this new microprocessor benefits from the state-of-the-art SPEAr1300 architecture, which combines the unrivalled low-power and multi-processing capabilities of two ARM Cortex™-A9 cores with innovative Network-on-Chip (NoC) technology.
Hardware implementations of graphic and video capabilities in the SPEAr1340 result in state-of-the-art multimedia performance at ultra-low power consumption. Meanwhile, the two Cortex™-A9 cores are available to perform concurrent tasks as required. With its multiple interfaces, including I²S and S/PDIF, the SPEAr1340 also provides excellent audio capabilities, handling up to 7.1 surround-sound configurations in both input and output paths.
In security, the SPEAr1340 integrates a multi-standard cryptographic engine and One-Time Programmable (OTP) registers for unique identification and external flash memory anti-tamper protection.

Design support
Information on development tools and evaluation boards, as well as downloads of the latest STLinux OS, firmware, and technical documentation, can be found on: /spear

SPEAr1340 Block Diagram

SPEAr® Embedded MPU with ARM926EJ-S™ Core – A Smart Choice for VoIP, HMI and Security Applications

Highly integrated, the SPEAr300 is a 32-bit ARM926EJ-S-based eMPU for cost-sensitive applications requiring significant processing and connectivity capabilities at lower power consumption.
The SPEAr300 delivers everything you want for an IP phone, human-machine interface, and security applications.
However, this versatile device is also perfectly suited to many other embedded applications. Learn more about this and other SPEAr® products, development kits, reference designs and our regional design-in support centers by visiting /spear.

Key Features
• ARM926EJ-S core runs up to 333 MHz
• High-performance 8-channel DMA
• Dynamic power-saving features
• Memory:
  • 32-Kbyte ROM and up to 57-Kbyte internal SRAM
  • LPDDR-333/DDR2-666 interface
  • Serial SPI Flash interface
  • Flexible static memory controller (FSMC), up to 16-bit data bus width, supporting external SRAM, NAND/NOR Flash memories, peripherals and FPGAs
  • SDIO/MMC card interface
• Security:
  • Cryptographic accelerator (DES/3DES/AES/SHA1)
• Connectivity:
  • USB 2.0 (2 hosts, 1 device)
  • Fast Ethernet (MII port)
  • SPI, I²C, I²S, UART and fast IrDA interfaces
  • Up to 8 I²C/SPI chip selects
  • TDM bus (512 timeslots)
• Peripherals supported:
  • Camera interface (ITU-601/656 and CSI2 support)
  • LCD controller (resolutions up to 1024 x 768 and up to 24 bpp)
  • Touchscreen support
  • 9 x 9 keyboard controller
  • Glueless management of up to 8 SLICs/codecs
• Miscellaneous functions:
  • Integrated real-time clock, watchdog, and system controller
  • 8-channel 10-bit ADC, 1 MSPS
  • 1-bit DAC
  • JPEG codec accelerator
  • 6 general-purpose 16-bit timers with capture mode and programmable prescaler
  • Up to 44 GPIOs with interrupt capability
• Package: LFBGA289 (15 x 15 mm², pitch 0.8 mm)

SPEAr®: Flexible, powerful eMPUs with high connectivity
Embedded applications today demand increasingly higher levels of performance and power efficiency for computing, communication, control, security and multimedia. ST's SPEAr® family of embedded MPUs meet these challenges head-on with state-of-the-art architecture, silicon technology and intellectual property, targeting networked devices used for communication, display and control.
The new SPEAr300 delivers robust processing with a 333 MHz ARM926EJ-S core that supports complex operating systems like Linux, sophisticated user interfaces and microbrowsers. The CPU also offers 16 Kbytes of data cache, 16 Kbytes of instruction cache, JTAG and ETM (embedded trace macrocell) for debug operations.
A set of tailored peripheral interfaces, hardware accelerators and controllers make the SPEAr300 a smart choice for HMI, security and IP phone applications.

SPEAr300 Block Diagram
Learn moreabout this and other SPEAr®products, development kits,reference designs and our regional design-in supportcenters by visiting /spear.Key Features• ARM926EJ-S core runs up to 333 MHz• High-performance 8-channel DMA• Dynamic power-saving features• Memory:• 32-Kbyte ROM and up to 8-Kbyte internal SRAM• LPDDR-333/DDR2-666 interface• Serial SPI Flash interface• Flexible static memory controller (FSMC), up to 32-bitdata bus width, supporting external SRAM, NAND/NORFlash memories, peripherals and FPGAs• Connectivity:• USB 2.0 (2 hosts, 1 device)• 1 fast Ethernet MII port• 4 fast Ethernet SMII ports• SPI, I2C and fast IrDA interfaces• 6 UART interfaces• TDM bus (128 timeslots with 64 HDLC channels)• 2 HDLC ports with RS485 support• Security:• Cryptographic accelerator (DES/3DES/AES/SHA1)• Miscellaneous functions:• Integrated real-time clock, watchdog,and system controllerSPEAr® Embedded MPU with ARM926EJ-STM Core –A Smart Choice for Telecom and Connectivity Applicationssilicon technology and intellectual property, targetingnetworked devices used for communication, display andcontrol.The new SPEAr310 delivers robust processing with a333 MHz ARM926EJ-S core that supports complex operat-ing systems like Linux, sophisticated user interfaces andmicrobrowsers. The CPU also offers 16 Kbytes of data cache,16 Kbytes of instruction cache, JTAG and ETM (EmbeddedTrace Macrocell) for debug operations.A set of tailored peripheral interfaces, hardware acceleratorsand controllers make the SPEAr310 a smart choice fortelecom and connectivity applications.• 8-channel 10-bit ADC, 1 MSPS• JPEG codec accelerator• 6 general-purpose 16-bit timers with capture modeand programmable prescaler• Up to 102 GPIOs with interrupt capability• Package: LFBGA289 (15 x 15 mm2, pitch 0.8 mm)SPEAr®: Flexible, powerful eMPUs with high connectivityEmbedded applications today demand increasingly higherlevels of performance and power efficiency for computing,communication, control, security and multimedia.ST’s SPEAr®family of embedded MPUs meet thesechallenges head-on with state-of-the-art architecture,SPEAr310 Block DiagramSPEAr® Embedded MPU with ARM926EJ-STM Core – A Smart Choice for Factory Automation and HMI oriented ApplicationsHighly integrated, the SPEAr320 is a 32-bit ARM926EJ-S -based eMPU for cost-sensitive applications requiring significant processing and connectivity capabilities at lower power consumption.The SPEAr320 delivers everything you want for factory automation and consumer applications. However, this versatile device is also perfectly suited to many other embedded applications. 
Learn more about this and other SPEAr® products, development kits, reference designs and our regional design-in support centers by visiting /spear.

Key Features
• ARM926EJ-S core runs up to 333 MHz
• High-performance 8-channel DMA
• Dynamic power-saving features
• Memory:
  • 32-Kbyte ROM and up to 8-Kbyte internal SRAM
  • LPDDR-333/DDR2-666 interface
  • SDIO/MMC card interface
  • Serial SPI Flash interface
  • Flexible static memory controller (FSMC), up to 16-bit data bus width, supporting external SRAM, NAND/NOR Flash memories, peripherals and FPGAs
• Security:
  • Cryptographic accelerator (DES/3DES/AES/SHA1)
• Connectivity:
  • USB 2.0 (2 hosts, 1 device)
  • 2 fast Ethernet ports (MII/SMII ports)
  • 2 CAN interfaces
  • I²S and fast IrDA interfaces
  • 3 SPI ports
  • 2 I²C interfaces
  • 3 UART interfaces
  • 1 standard parallel device port
• Peripherals supported:
  • LCD controller (resolutions up to 1024 x 768 and up to 24 bpp)
  • Touchscreen support
• Miscellaneous functions:
  • Integrated real-time clock, watchdog and system controller
  • 8-channel 10-bit ADC, 1 MSPS
  • 4 PWM timers
  • JPEG codec accelerator
  • 6 general-purpose 16-bit timers with capture mode and programmable prescaler
  • Up to 102 GPIOs with interrupt capability
• Package: LFBGA289 (15 x 15 mm², pitch 0.8 mm)

SPEAr®: Flexible, powerful eMPUs with high connectivity
Embedded applications today demand increasingly higher levels of performance and power efficiency for computing, communication, control, security and multimedia. ST's SPEAr® family of embedded MPUs meet these challenges head-on with state-of-the-art architecture, silicon technology and intellectual property, targeting networked devices used for communication, display and control.
The new SPEAr320 delivers robust processing with a 333 MHz ARM926EJ-S core that supports complex operating systems like Linux, sophisticated user interfaces and microbrowsers. The CPU also offers 16 Kbytes of data cache, 16 Kbytes of instruction cache, JTAG and ETM (embedded trace macrocell) for debug operations. A set of tailored peripheral interfaces, hardware accelerators and controllers make the SPEAr320 a smart choice for factory automation and HMI oriented applications.

SPEAr320 Block Diagram

SPEAr® Embedded MPU with Dual ARM926EJ-S™ Core

The SPEAr600 is a highly integrated eMPU with flexible memory support, powerful connectivity features and a programmable LCD interface.
High-performance dual 32-bit ARM926EJ-S CPU cores make this part the right choice for cost-sensitive applications that require extra computational power.
The SPEAr600 is a versatile device which supports a wide range of embedded applications. Learn more about this and other SPEAr® products, development kits, reference designs and our regional design-in support centers by visiting /spear.

Key Features
• Dual ARM926EJ-S cores run up to 333 MHz
• High-performance 8-channel DMA
• Dynamic power-saving features
• Up to 733 DMIPS
• Memory:
  • 32-Kbyte ROM and up to 8-Kbyte internal SRAM
  • External DRAM interface: 8/16-bit DDR1-400/DDR2-666
  • Flexible static memory controller (FSMC) supporting parallel NAND Flash memory interface
  • Serial NOR Flash memory interface
• Connectivity:
  • USB 2.0 (2 hosts, 1 device)
  • 1 Giga Ethernet (GMII port)
  • I²C and fast IrDA interfaces
  • 3 SPI ports
  • 3 I²S interfaces (1 stereo input, 2 stereo outputs)
  • 2 UART interfaces
• Peripherals supported:
  • LCD controller (resolutions up to 1024 x 768 and up to 24 bpp)
  • Touchscreen support
• Miscellaneous functions:
  • Integrated real-time clock, watchdog, and system controller
  • 8-channel 10-bit ADC, 1 MSPS
  • JPEG codec accelerator
  • 10 general-purpose 16-bit timers with capture mode and programmable prescalers
  • 10 GPIO bidirectional signals with interrupt capabilities
  • External 32-bit local bus
• Package: PBGA420 (23 x 23 mm², pitch 1 mm)

SPEAr®: Flexible, powerful eMPUs with high connectivity
Embedded applications today demand increasingly higher levels of performance and power efficiency for computing, communication, control, security and multimedia. ST's SPEAr® family of embedded MPUs meet these challenges head-on with state-of-the-art architecture, silicon technology and intellectual property, targeting networked devices used for communication, display and control.
The new SPEAr600 offers dual 333 MHz ARM926EJ-S cores that can support robust general-purpose processing and dedicated real-time processing together. The device supports complex operating systems like Linux, sophisticated user interfaces and microbrowsers. Both processors offer 16 Kbytes of data cache, 16 Kbytes of instruction cache, JTAG and ETM (embedded trace macrocell) for debug operations.
Furthermore, the SPEAr600 offers unique flexibility by externalising its local bus so that external peripherals can be added.

SPEAr600 Block Diagram

Think Microcontroller. Think Silica.

SILICA | The Engineers of Distribution.
SILICA Offices

AUSTRIA
Avnet EMG Elektronische Bauelemente GmbH
Schönbrunner Str. 297 - 307 • A-1120 Wien
Phone: +43 1 86642-300 • Fax: +43 1 86642-350
wien@

BELGIUM
Avnet Europe Comm. VA
Eagle Building • Kouterveldstraat 20B • B-1831 Diegem
Phone: +32 2 709 90 00 • Fax: +32 2 709 98 10
diegem@

CZECH REPUBLIC (SLOVAKIA)
Avnet
Argentinská 38/286 • CZ-170 00 Praha 7
Phone: +420 2 34091031 • Fax: +420 2 34091030
praha@

DENMARK
Avnet Nortec A/S
Ellekær 9 • DK-2730 Herlev
Phone: +45 43 22 80 10 • Fax: +45 43 22 80 11
herlev@

FINLAND (ESTONIA)
Avnet Nortec Oy
Pihatörmä 1B • FIN-02240 Espoo
Phone: +358 20 749 9200 • Fax: +358 20 749 9280
helsinki@

FRANCE (TUNISIA)
Avnet EMG France SA
Immeuble Carnot Plaza • 14 Avenue Carnot • F-91349 Massy Cedex
Phone: +33 1 64 47 29 29 • Fax: +33 1 64 47 00 84
paris@

Avnet EMG France SA
Parc Club du Moulin à Vent • Bât 40
33, rue du Dr. G.
Lévy • F-69693 Vénissieux Cedex
Phone: +33 4 78 77 13 60 • Fax: +33 4 78 77 13 99
lyon@

Avnet EMG France SA
Les Peupliers II • 35, avenue des Peupliers • F-35510 Cesson Sévigné
Phone: +33 2 99 83 84 85 • Fax: +33 2 99 83 80 83
rennes@

Avnet EMG France SA
Parc de la Plaine 35 • avenue Marcel Dassault – BP 5867 • F-31506 Toulouse Cedex 5
Phone: +33 5 62 47 47 60 • Fax: +33 5 62 47 47 61
toulouse@

GERMANY
Avnet EMG GmbH
Gruber Str. 60 C • D-85586 Poing
Phone: +49 8121 777 02 • Fax: +49 8121 777 531
muenchen@

Avnet EMG GmbH
Rudower Chaussee 12 a • D-12489 Berlin
Phone: +49 30 214882-0 • Fax: +49 30 214882-33
berlin@

Avnet EMG GmbH
Berliner Platz 9 • D-44623 Herne
Phone: +49 2323 96466-0 • Fax: +49 2323 96466-60
herne@

Avnet EMG GmbH
Wolfenbütteler Str. 22 • D-38102 Braunschweig
Phone: +49 531 22073-0 • Fax: +49 531 2207335
braunschweig@

Avnet EMG GmbH
Gutenbergstraße 15 • D-70771 Leinfelden-Echterdingen
Phone: +49 711 78260-01 • Fax: +49 711 78260-200
stuttgart@

Avnet EMG GmbH
Carl-Zeiss-Str. 14 - 18 • D-65520 Bad Camberg
Phone: +49 6434 9046 30 • Fax: +49 6434 90 46 33
badcamberg@

HUNGARY
Avnet
Budafoki út 91-93 • IP WEST/Building B • H-1117 Budapest
Phone: +36 1 43 67215 • Fax: +36 1 43 67213
budapest@

ITALY
Avnet EMG Italy S.r.l.
Via Manzoni 44 • I-20095 Cusano Milanino MI
Phone: +39 02 660 921 • Fax: +39 02 66092 333
milano@

Avnet EMG Italy S.r.l.
Viale dell'Industria, 23 • I-35129 Padova (PD)
Phone: +39 049 8073689 • Fax: +39 049 773464
padova@

Avnet EMG Italy S.r.l.
Via Panciatichi, 40 • I-50127 Firenze (FI)
Phone: +39 055 4360392 • Fax: +39 055 431035
firenze@

Avnet EMG Italy S.r.l.
Via Scaglia Est, 144 • I-41100 Modena (MO)
Phone: +39 059 351300 • Fax: +39 059 344993
modena@

Avnet EMG Italy S.r.l.
Via Zoe Fontana, 220 • I-00131 Roma Tecnocittà
Phone: +39 06 4131151 • Fax: +39 06 4131161
roma@

Avnet EMG Italy S.r.l.
Corso Susa, 242 • I-10098 Rivoli (TO)
Phone: +39 011 204437 • Fax: +39 011 2428699
torino@

NETHERLANDS
Avnet B.V.
Takkebijsters 2 • NL-4817 BL Breda
Phone: +31 (0)76 57 22 700 • Fax: +31 (0)76 57 22 707
breda@

NORWAY
Avnet Nortec AS
Hagaløkkveien 7 • Postboks 63 • N-1371 Asker
Phone: +47 6677 3600 • Fax: +47 6677 3677
asker@

POLAND (LATVIA/LITHUANIA)
Avnet EM Sp. z o.o.
Street Marynarska 11 • PL-02-674 Warszawa (Building Antares, 5th Floor)
Phone: +48 22 25 65 760 • Fax: +48 22 25 65 766
warszawa@

PORTUGAL
Avnet Iberia S.A.
Tower Plaza • Rot. Eng. Edgar Cardoso, 23 • Piso 14 • Sala E • P-4400-676 Vila Nova de Gaia
Phone: +35 1 223 779 502 • Fax: +35 1 223 779 503
porto@

RUSSIA (BELARUS, UKRAINE)
Avnet
Korovinskoye Chaussee 10 • Building 2 • Office 25 • RUS-127486 Moscow
Phone: +7 495 9371268 • Fax: +7 495 9372166
moscow@

Avnet
Polustrovsky Prospect, 43, of. 422 • RUS-195197 Saint Petersburg
Phone: +7 (812) 635 81 11 • Fax: +7 (812) 635 81 12
stpetersburg@

SLOVENIA (BULGARIA, CROATIA, BOSNIA, MACEDONIA, SERBIA/MONTENEGRO, ROMANIA)
Avnet
Dunajska c. 159 • SLO-1000 Ljubljana
Phone: +386 (0)1 560 9750 • Fax: +386 (0)1 560 9878
ljubljana@

SPAIN
Avnet Iberia S.A.
C/Chile, 10 • plta. 2ª, ofic 229 • Edificio Madrid 92 • E-28290 Las Matas (Madrid)
Phone: +34 91 372 71 00 • Fax: +34 91 636 97 88
madrid@

Avnet Iberia S.A.
C/Mallorca, 1 al 23 • 2ª plta. 1A • E-08014 Barcelona
Phone: +34 93 327 85 30 • Fax: +34 93 425 05 44
barcelona@

Avnet Iberia S.A.
Plaza Zabalgane, 12 • Bajo Izqda. • E-48960 Galdàcano (Vizcaya)
Phone: +34 944 57 27 77 • Fax: +34 944 56 88 55
bilbao@

SWEDEN
Avnet Nortec AB
Esplanaden 3D • BOX 1830 • S-17127 Solna
Phone: +46 8 587 461 00 • Fax: +46 8 587 461 01
stockholm@

SWITZERLAND
Avnet EMG AG
Gaswerkstr. 32 • CH-4900 Langenthal
Phone: +41 62 919 55 55 • Fax: +41 62 919 55 00
langenthal@

TURKEY (GREECE, EGYPT)
Avnet
Bayar Cad. Gülbahar Sok. Nr. 17/111-112 • TR-34742 Kozytagi/Istanbul
Phone: +90 216 361 89 58 • Fax: +90 216 361 89 27
istanbul@

UNITED KINGDOM (IRELAND)
Avnet EMG Ltd.
Avnet House • Rutherford Close • Meadway Stevenage, Herts • SG1 2EF
Phone: +44 (0)1438 788310 • Fax: +44 (0)1438 788262
stevenage@

Avnet EMG Ltd.
Unit A5, 5 Ashworth House • Deakins Business Park • The Hall Coppice • Egerton, Bolton • BL7 9RP
Phone: +44 (0)1204 590270 • Fax: +44 (0)1204 590299
bolton@avnet.eu

Avnet EMG Ltd.
Cherrycourt Way • Leighton Buzzard • Bedfordshire • LU7 4YY
Phone: +44 (0)1525 858204 • Fax: +44 (0)1525 858280
leightonbuzzard@avnet.eu

Avnet EMG Ltd.
Unit 5B • Waltham Park • White Waltham • Berkshire • SL6 3TN
Phone: +44 (0)1628 512912 • Fax: +44 (0)1628 512999
maidenhead@avnet.eu

Avnet EMG Ltd.
Chancery House • 1 Premier Way • Abbey Park, Romsey • SO51 9AQ Southampton
Phone: +44 (0)2380 263516 • Fax: +44 (0)2380 263514
eastleigh@avnet.eu

07/2011