Boolean Matching Using Generalized Reed-Muller Forms
A plain-language explanation of boolean expressions:

A boolean expression is a statement or condition that evaluates to either true or false. It is often used in programming and logic to make decisions or to control the flow of a program. In simple terms, a boolean expression is like a question that can be answered with yes or no.

For example, suppose I have a variable called "isRaining" that represents whether it is currently raining. I can use a boolean expression to check whether it is raining. If the expression evaluates to true, it is raining; if it evaluates to false, it is not.

Here's an example in code:

boolean isRaining = true;
if (isRaining) {
    System.out.println("I will bring an umbrella.");
} else {
    System.out.println("I don't need an umbrella.");
}

In this example, the boolean expression `isRaining` is evaluated. Since it is true, the code inside the if statement is executed and "I will bring an umbrella." is printed. If `isRaining` were false, the code inside the else statement would be executed instead.

Boolean expressions can also be combined using logical operators such as AND (&&), OR (||), and NOT (!). These operators allow us to create more complex conditions. For example:

boolean isRaining = true;
boolean isCold = false;
if (isRaining && !isCold) {
    // ...
}
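To make the combined-operator example concrete, here is a complete, runnable version. It is a minimal sketch; the class name and the printed messages are illustrative additions, not part of the original answer.

public class WeatherCheck {
    public static void main(String[] args) {
        boolean isRaining = true;
        boolean isCold = false;

        // && is true only if both operands are true; ! negates its operand.
        if (isRaining && !isCold) {
            System.out.println("Rainy but not cold: bring an umbrella, skip the coat.");
        } else if (isRaining || isCold) {
            System.out.println("Either rainy or cold: dress accordingly.");
        } else {
            System.out.println("Neither rainy nor cold.");
        }
    }
}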
Exception handling in type conversion, and resource internationalization

Struts2 type-conversion exception handling: on view pages, user input is unpredictable, and accidental mistakes or malicious input can cause program exceptions.
Therefore, the data entered by the user must be validated.
For example, the age field must be an integer; as soon as a user enters something like "ABC", data-type validation is needed.
Struts 2.0 provides a type-conversion exception handling mechanism built around an interceptor named conversionError, which is registered in the default interceptor stack.
If Struts 2.0 runs into a problem during type conversion, this interceptor intercepts it and wraps the exception information into a fieldError, which is then displayed on the view page.
The whole process requires no work on our part; Struts 2.0's type converters and the conversionError interceptor handle it automatically.
(1) Simple type-conversion exceptions mainly cover conversion errors among data types such as String, int, and Date.
See the example "convError", a user-registration feature. (1) The Action code:

package conv;

import com.opensymphony.xwork2.ActionSupport;

public class userAction extends ActionSupport {

    private String name;
    private int age;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public String addUser() {
        return SUCCESS;
    }
}

As the code above shows, this Action contains only two properties, name and age, and age is of type int.
(2) The struts.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE struts PUBLIC
    "-//Apache Software Foundation//DTD Struts Configuration 2.0//EN"
    "http://struts.apache.org/dtds/struts-2.0.dtd">
<struts>
    <constant name="struts.enable.DynamicMethodInvocation" value="false" />
    <constant name="struts.devMode" value="false" />
    <package name="Struts2.0_AddBook" extends="struts-default">
        <action name="addUser" class="conv.userAction" method="addUser">
            <result name="input">index.jsp</result>
        </action>
    </package>
</struts>

The package in this configuration file extends struts-default, and the struts-default.xml file defines Struts 2.0's built-in interceptors, including the conversionError interceptor mentioned above.
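For reference, a minimal index.jsp that submits to this action and displays the conversion error could look like the sketch below. The field labels and page layout are illustrative assumptions; the original example only specifies the Action and the configuration file. With the default xhtml theme, the <s:textfield> tags render any fieldError produced by the conversionError interceptor next to the offending field.

<%@ page contentType="text/html; charset=UTF-8" %>
<%@ taglib prefix="s" uri="/struts-tags" %>
<html>
<body>
    <s:form action="addUser" method="post">
        <s:textfield name="name" label="Name" />
        <s:textfield name="age" label="Age" />
        <s:submit value="Register" />
    </s:form>
</body>
</html>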
General contracts to obey when overriding equals: reflexivity, symmetry, transitivity, consistency, and non-nullity

Concepts covered in this article:
1. Why the general contracts must be obeyed when overriding equals.
2. Which general contracts must be obeyed when overriding equals.

Why the general contracts must be obeyed when overriding equals: the non-final methods of the Object class all have well-defined general contracts, and these methods are designed to be overridden.
If a class overrides them without obeying the contracts, other classes that depend on those contracts (for example HashMap and HashSet) cannot work correctly together with that class (Effective Java).

The equals method implements an equivalence relation; the general contracts to obey when overriding it are:
a. Reflexivity: for any non-null reference value x, x.equals(x) must return true.
b. Symmetry: for any non-null reference values x and y, x.equals(y) must return true if and only if y.equals(x) returns true.
c. Transitivity: for any non-null reference values x, y, and z, if x.equals(y) returns true and y.equals(z) returns true, then x.equals(z) must return true.
d. Consistency: for any non-null reference values x and y, multiple invocations of x.equals(y) must consistently return true or consistently return false, provided no information used in equals comparisons on the objects is modified.
e. Non-nullity: for any non-null reference value x, x.equals(null) must return false.

a. Reflexivity: this requirement is almost never violated.
If it were violated, then after adding an instance to a collection, calling the collection's contains(x) method would return false:
x.equals(x) would not be true, so contains(x) would return false.
b. Symmetry: for any non-null reference values x and y, if x.equals(y) returns true, then y.equals(x) must also return true.

public final class CaseInsensitiveString {

    private final String s;

    public CaseInsensitiveString(String s) {
        if (s == null)
            throw new NullPointerException();
        this.s = s;
    }

    // Broken - violates symmetry!
    @Override
    public boolean equals(Object o) {
        if (o instanceof CaseInsensitiveString)
            return s.equalsIgnoreCase(((CaseInsensitiveString) o).s);
        if (o instanceof String) // One-way interoperability!
            return s.equalsIgnoreCase((String) o);
        return false;
    }

    // This version is correct.
    // @Override public boolean equals(Object o) {
    //     return o instanceof CaseInsensitiveString &&
    //            ((CaseInsensitiveString) o).s.equalsIgnoreCase(s);
    // }

    public static void main(String[] args) {
        CaseInsensitiveString cis = new CaseInsensitiveString("Polish");
        String s = "polish";
        System.out.println(cis.equals(s) + " " + s.equals(cis));
    }
}

Substituting this example into the symmetry rule, with the CaseInsensitiveString as x: when y is another CaseInsensitiveString, x.equals(y) and y.equals(x) agree; but when y is a String, x.equals(y) returns true while y.equals(x) returns false, so the main method prints "true false" and symmetry is violated.
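The contract that is broken most easily in practice is transitivity, typically when a subclass adds a value component and tries to stay interoperable with its superclass. The sketch below illustrates the problem; it follows the well-known Point/ColorPoint example, but the code here is an illustrative paraphrase rather than text from this article.

class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }
}

class ColorPoint extends Point {
    private final String color;

    ColorPoint(int x, int y, String color) { super(x, y); this.color = color; }

    // Broken - violates transitivity!
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        // A plain Point is compared color-blind...
        if (!(o instanceof ColorPoint)) return o.equals(this);
        // ...but two ColorPoints must also agree on color.
        return super.equals(o) && ((ColorPoint) o).color.equals(color);
    }
}

public class TransitivityDemo {
    public static void main(String[] args) {
        ColorPoint p1 = new ColorPoint(1, 2, "red");
        Point p2 = new Point(1, 2);
        ColorPoint p3 = new ColorPoint(1, 2, "blue");
        // Prints "true true false": p1 equals p2 and p2 equals p3, yet p1 does not equal p3.
        System.out.println(p1.equals(p2) + " " + p2.equals(p3) + " " + p1.equals(p3));
    }
}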
Software engineering questions

1. What type of development model is the waterfall model in software engineering?
   A. Linear sequential model   B. Iterative model   C. Incremental model   D. Agile model
   Answer: A
   Explanation: The waterfall model is a traditional linear sequential development model. It divides the software development process into fixed phases, and each phase must be completed before the next one can begin.

2. Which design pattern is used to ensure that a class has only one instance?
   A. Singleton pattern   B. Factory pattern   C. Adapter pattern   D. Observer pattern
   Answer: A
   Explanation: The singleton pattern is a creational design pattern; it guarantees that a class has only one instance and provides a global point of access to it (see the sketch after this question set).

3. In UML, which diagram describes how the objects in a system interact?
   A. Class diagram   B. Sequence diagram   C. Component diagram   D. Deployment diagram
   Answer: B
   Explanation: A sequence diagram shows how the objects in a system interact and the order in which those interactions occur.

4. Which of the following is a function pointer in C?
   A. int *p   B. int (*p)(int)   C. int p(int)   D. int &p
   Answer: B
   Explanation: int (*p)(int) declares a function pointer: p points to a function that takes one int parameter and returns an int.

5. What is the purpose of the "requirements analysis" phase in software engineering?
   A. Determine the system's functionality   B. Design the system architecture   C. Write the code   D. Test the software
   Answer: A
   Explanation: The purpose of the requirements analysis phase is to determine what functionality the system should provide; it is the first phase of the software development life cycle.

6. Which design pattern is used to decouple the object-creation process?
   A. Abstract factory pattern   B. Decorator pattern   C. Composite pattern   D. Singleton pattern
   Answer: A
   Explanation: The abstract factory pattern is a creational design pattern used to decouple the object-creation process, especially when a system needs families of related objects.

7. Which C language feature allows multiple conditions to be tested in a single statement?
   A. for loop   B. if-else statement   C. switch statement   D. while loop
   Answer: C
   Explanation: The switch statement allows several cases to be tested within a single statement and is typically used for multi-way selection.
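As a quick illustration of the singleton pattern from question 2, a minimal thread-safe Java implementation is sketched below. The class name Configuration is an illustrative choice, not part of the original question set.

public final class Configuration {

    // The single instance is created when the class is initialized,
    // which the JVM guarantees happens exactly once.
    private static final Configuration INSTANCE = new Configuration();

    // A private constructor prevents instantiation from outside the class.
    private Configuration() {
    }

    // The global access point required by the pattern.
    public static Configuration getInstance() {
        return INSTANCE;
    }
}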
Multi-condition queries with lambda expressions in elasticsearch-java

1. Introduction

1.1 Overview

This article explains how to perform multi-condition queries with the elasticsearch-java library, focusing on how lambda expressions are used in those queries.
Elasticsearch is a powerful open-source search engine that provides fast, scalable, and distributed full-text search.
elasticsearch-java is the official Java client for Elasticsearch and makes it easy to interact with an Elasticsearch cluster.

1.2 Article structure

The article is divided into five parts, each approaching multi-condition queries and lambda expressions in elasticsearch-java from a different angle.
First, the Introduction gives an overview of the article and describes its structure.
Next, the second part briefly introduces Elasticsearch and the elasticsearch-java library and discusses the concept of multi-condition queries and their typical use cases in detail.
The third part focuses on the role of lambda expressions in Java and how to use them.
It briefly introduces lambda expressions and examines their advantages in multi-condition queries as well as concrete ways to apply them.
Through sample-code walkthroughs and a summary of practical experience, readers will gain a better understanding of how lambda expressions are applied in real multi-condition queries.
The fourth part describes the implementation steps of a multi-condition query in detail, with sample code.
It walks the reader through configuring the Elasticsearch client connection and the index information, and through building the search request body object.
Finally, it shows how to execute the multi-condition query and parse the result set.
The last part is the conclusion, which summarizes the advantages of using lambda expressions for multi-condition queries in elasticsearch-java and discusses some directions for future work.

1.3 Purpose

The purpose of this article is to help readers understand the basic concepts and usage of the elasticsearch-java library, with an emphasis on applying lambda expressions to multi-condition queries.
After studying this article, readers should be able to write more efficient, concise, and maintainable code, improve their Java development skills, and gain a deeper understanding of the relationship between multi-condition queries and lambda expressions, including the trade-offs involved.
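Since the body of the article is not reproduced here, the sketch below illustrates the kind of lambda-based multi-condition (bool) query the outline refers to. It assumes the co.elastic.clients 8.x API, an index named "products", and an illustrative Product document class; builder details (in particular the range query) vary between client versions, so treat this as the shape of the code rather than a drop-in implementation.

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.elasticsearch.core.SearchResponse;
import co.elastic.clients.elasticsearch.core.search.Hit;
import co.elastic.clients.json.JsonData;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class MultiConditionSearch {

    // Illustrative document class; the field names are assumptions.
    public static class Product {
        public String name;
        public double price;
    }

    public static void main(String[] args) throws Exception {
        // 1. Configure the client connection (host and port are assumptions).
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
        ElasticsearchClient client = new ElasticsearchClient(
                new RestClientTransport(restClient, new JacksonJsonpMapper()));

        // 2. Build and execute a bool query with two conditions, each written as a lambda:
        //    a full-text match on "name" plus a range filter on "price"
        //    (range builder shown in the pre-8.12 style).
        SearchResponse<Product> response = client.search(s -> s
                .index("products")
                .query(q -> q.bool(b -> b
                        .must(m -> m.match(t -> t.field("name").query("laptop")))
                        .filter(f -> f.range(r -> r.field("price")
                                .gte(JsonData.of(100))
                                .lte(JsonData.of(500)))))),
                Product.class);

        // 3. Parse the result set.
        for (Hit<Product> hit : response.hits().hits()) {
            System.out.println(hit.id() + " -> " + hit.source());
        }

        restClient.close();
    }
}

Each query clause is expressed inline as a lambda and the builder types are inferred by the compiler, which is the conciseness and readability benefit the outline emphasizes.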
Package‘gelnet’October13,2022Version1.2.1Date2015-10-16License GPL(>=3)Title Generalized Elastic NetsDescription Implements several extensions of the elastic net regularizationscheme.These extensions include individual feature penalties for the L1term,feature-feature penalties for the L2term,as well as translation coefficientsfor the latter.Author Artem SokolovMaintainer Artem Sokolov<***********************>Depends R(>=3.1.0)Suggests knitr,rmarkdownVignetteBuilder knitrRoxygenNote5.0.1NeedsCompilation yesRepository CRANDate/Publication2016-04-0508:14:29R topics documented:adj2lapl (2)adj2nlapl (2)gelnet (3)gelnet.cv (5)gelnet.ker (6)gelnet.lin.obj (8)gelnet.logreg.obj (9)gelnet.oneclass.obj (10)L1.ceiling (11)Index1212adj2nlapl adj2lapl Generate a graph LaplacianDescriptionGenerates a graph Laplacian from the graph adjacency matrix.Usageadj2lapl(A)ArgumentsA n-by-n adjacency matrix for a graph with n nodesDetailsA graph Laplacian is defined as:l i,j=deg(v i),if i=j;l i,j=−1,if i=j and v i is adjacent tov j;and l i,j=0,otherwiseValueThe n-by-n Laplacian matrix of the graphSee Alsoadj2nlapladj2nlapl Generate a normalized graph LaplacianDescriptionGenerates a normalized graph Laplacian from the graph adjacency matrix.Usageadj2nlapl(A)ArgumentsA n-by-n adjacency matrix for a graph with n nodesDetailsA normalized graph Laplacian is defined as:l i,j=1,if i=j;l i,j=−1/deg(v i)deg(v j),ifi=j and v i is adjacent to v j;and l i,j=0,otherwiseValueThe n-by-n Laplacian matrix of the graphSee Alsoadj2nlaplgelnet GELnet for linear regression,binary classification and one-class prob-lems.DescriptionInfers the problem type and learns the appropriate GELnet model via coordinate descent.Usagegelnet(X,y,l1,l2,nFeats=NULL,a=rep(1,n),d=rep(1,p),P=diag(p),m=rep(0,p),max.iter=100,eps=1e-05,w.init=rep(0,p),b.init=NULL,fix.bias=FALSE,silent=FALSE,balanced=FALSE,nonneg=FALSE)ArgumentsX n-by-p matrix of n samples in p dimensionsy n-by-1vector of response values.Must be numeric vector for regression,factor with2levels for binary classification,or NULL for a one-class task.l1coefficient for the L1-norm penaltyl2coefficient for the L2-norm penaltynFeats alternative parameterization that returns the desired number of non-zero weights.Takes precedence over l1if not NULL(default:NULL)a n-by-1vector of sample weights(regression only)d p-by-1vector of feature weightsP p-by-p feature association penalty matrixm p-by-1vector of translation coefficientsmax.iter maximum number of iterationseps convergence precisionw.init initial parameter estimate for the weightsb.init initial parameter estimate for the bias termfix.bias set to TRUE to prevent the bias term from being updated(regression only)(de-fault:FALSE)silent set to TRUE to suppress run-time output to stdout(default:FALSE)balanced boolean specifying whether the balanced model is being trained(binary classi-fication only)(default:FALSE)nonneg set to TRUE to enforce non-negativity constraints on the weights(default:FALSE )DetailsThe method determines the problem type from the labels argument y.If y is a numeric vector,thena regression model is trained by optimizing the following objective function:1 2nia i(y i−(w T x i+b))2+R(w)If y is a factor with two levels,then the function returns a binary classification model,obtained by optimizing the following objective function:−1niy i s i−log(1+exp(s i))+R(w)wheres i=w T x i+bFinally,if no labels are provided(y==NULL),then a one-class model is constructed using the following objective function:−1nis i−log(1+exp(s i))+R(w)wheres i=w 
T x i In all cases,the regularizer is defined byR(w)=λ1j d j|w j|+λ22(w−m)T P(w−m)The training itself is performed through cyclical coordinate descent,and the optimization is termi-nated after the desired tolerance is achieved or after a maximum number of iterations.ValueA list with two elements:w p-by-1vector of p model weightsb scalar,bias term for the linear model(omitted for one-class models)See Alsogelnet.lin.obj,gelnet.logreg.obj,gelnet.oneclass.objgelnet.cv5 gelnet.cv k-fold cross-validation for parameter tuning.DescriptionPerforms k-fold cross-validation to select the best pair of the L1-and L2-norm penalty values. Usagegelnet.cv(X,y,nL1,nL2,nFolds=5,a=rep(1,n),d=rep(1,p),P=diag(p),m=rep(0,p),max.iter=100,eps=1e-05,w.init=rep(0,p),b.init=0,fix.bias=FALSE,silent=FALSE,balanced=FALSE)ArgumentsX n-by-p matrix of n samples in p dimensionsy n-by-1vector of response values.Must be numeric vector for regression,factor with2levels for binary classification,or NULL for a one-class task.nL1number of values to consider for the L1-norm penaltynL2number of values to consider for the L2-norm penaltynFolds number of cross-validation folds(default:5)a n-by-1vector of sample weights(regression only)d p-by-1vector of feature weightsP p-by-p feature association penalty matrixm p-by-1vector of translation coefficientsmax.iter maximum number of iterationseps convergence precisionw.init initial parameter estimate for the weightsb.init initial parameter estimate for the bias termfix.bias set to TRUE to prevent the bias term from being updated(regression only)(de-fault:FALSE)silent set to TRUE to suppress run-time output to stdout(default:FALSE)balanced boolean specifying whether the balanced model is being trained(binary classi-fication only)(default:FALSE)DetailsCross-validation is performed on a grid of parameter values.The user specifies the number of values to consider for both the L1-and the L2-norm penalties.The L1grid values are equally spaced on[0, L1s],where L1s is the smallest meaningful value of the L1-norm penalty(i.e.,where all the model weights are just barely zero).The L2grid values are on a logarithmic scale centered on1.ValueA list with the following elements:l1the best value of the L1-norm penaltyl2the best value of the L2-norm penaltyw p-by-1vector of p model weights associated with the best(l1,l2)pair.b scalar,bias term for the linear model associated with the best(l1,l2)pair.(omitted for one-classmodels)perf performance value associated with the best model.(Likelihood of data for one-class,AUC for binary classification,and-RMSE for regression)See Alsogelnetgelnet.ker Kernel models for linear regression,binary classification and one-class problems.DescriptionInfers the problem type and learns the appropriate kernel model.Usagegelnet.ker(K,y,lambda,a,max.iter=100,eps=1e-05,v.init=rep(0,nrow(K)),b.init=0,fix.bias=FALSE,silent=FALSE,balanced=FALSE)ArgumentsK n-by-n matrix of pairwise kernel values over a set of n samplesy n-by-1vector of response values.Must be numeric vector for regression,factor with2levels for binary classification,or NULL for a one-class task.lambda scalar,regularization parametera n-by-1vector of sample weights(regression only)max.iter maximum number of iterations(binary classification and one-class problems only)eps convergence precision(binary classification and one-class problems only) v.init initial parameter estimate for the kernel weights(binary classification and one-class problems only)b.init initial parameter estimate for the bias term(binary classification 
only)fix.bias set to TRUE to prevent the bias term from being updated(regression only)(de-fault:FALSE)silent set to TRUE to suppress run-time output to stdout (default:FALSE)balancedboolean specifying whether the balanced model is being trained (binary classi-fication only)(default:FALSE)DetailsThe entries in the kernel matrix K can be interpreted as dot products in some feature space φ.The corresponding weight vector can be retrieved via w = i v i φ(x i ).However,new samples can be classified without explicit access to the underlying feature space:w T φ(x )+b = iv i φT (x i )φ(x )+b =iv i K (x i ,x )+bThe method determines the problem type from the labels argument y.If y is a numeric vector,thena ridge regression model is trained by optimizing the following objective function:12nia i (z i −(w T x i +b ))2+w T w If y is a factor with two levels,then the function returns a binary classification model,obtained by optimizing the following objective function:−1niy i s i −log(1+exp(s i ))+w T w wheres i =w T x i +bFinally,if no labels are provided (y ==NULL),then a one-class model is constructed using the following objective function:−1n is i −log(1+exp(s i ))+w T w wheres i =w T x iIn all cases,w =iv i φ(x i )and the method solves for v i .ValueA list with two elements:v n-by-1vector of kernel weightsb scalar,bias term for the linear model (omitted for one-class models)See Alsogelnet8gelnet.lin.obj gelnet.lin.obj Linear regression objective function valueDescriptionEvaluates the linear regression objective function value for a given model.See details.Usagegelnet.lin.obj(w,b,X,z,l1,l2,a=rep(1,nrow(X)),d=rep(1,ncol(X)),P=diag(ncol(X)),m=rep(0,ncol(X)))Argumentsw p-by-1vector of model weightsb the model bias termX n-by-p matrix of n samples in p dimensionsz n-by-1response vectorl1L1-norm penalty scaling factorλ1l2L2-norm penalty scaling factorλ2a n-by-1vector of sample weightsd p-by-1vector of feature weightsP p-by-p feature-feature penalty matrixm p-by-1vector of translation coefficientsDetailsComputes the objective function value according to1 2nia i(z i−(w T x i+b))2+R(w)whereR(w)=λ1j d j|w j|+λ22(w−m)T P(w−m)ValueThe objective function value. See Alsogelnetgelnet.logreg.obj9 gelnet.logreg.obj Logistic regression objective function valueDescriptionEvaluates the logistic regression objective function value for a given model.See putes the objective function value according to−1niy i s i−log(1+exp(s i))+R(w)wheres i=w T x i+bR(w)=λ1j d j|w j|+λ22(w−m)T P(w−m)When balanced is TRUE,the loss average over the entire data is replaced with averaging over each class separately.The total loss is then computes as the mean over those per-class estimates. 
Usagegelnet.logreg.obj(w,b,X,y,l1,l2,d=rep(1,ncol(X)),P=diag(ncol(X)),m=rep(0,ncol(X)),balanced=FALSE)Argumentsw p-by-1vector of model weightsb the model bias termX n-by-p matrix of n samples in p dimensionsy n-by-1binary response vector sampled from0,1l1L1-norm penalty scaling factorλ1l2L2-norm penalty scaling factorλ2d p-by-1vector of feature weightsP p-by-p feature-feature penalty matrixm p-by-1vector of translation coefficientsbalanced boolean specifying whether the balanced model is being evaluatedValueThe objective function value.See Alsogelnet10gelnet.oneclass.obj gelnet.oneclass.obj One-class regression objective function valueDescriptionEvaluates the one-class objective function value for a given model See details.Usagegelnet.oneclass.obj(w,X,l1,l2,d=rep(1,ncol(X)),P=diag(ncol(X)),m=rep(0,ncol(X)))Argumentsw p-by-1vector of model weightsX n-by-p matrix of n samples in p dimensionsl1L1-norm penalty scaling factorλ1l2L2-norm penalty scaling factorλ2d p-by-1vector of feature weightsP p-by-p feature-feature penalty matrixm p-by-1vector of translation coefficientsDetailsComputes the objective function value according to−1nis i−log(1+exp(s i))+R(w)wheres i=w T x iR(w)=λ1j d j|w j|+λ22(w−m)T P(w−m)ValueThe objective function value.See AlsogelnetL1.ceiling11 L1.ceiling The largest meaningful value of the L1parameterDescriptionComputes the smallest value of the LASSO coefficient L1that leads to an all-zero weight vector fora given linear regression problem.UsageL1.ceiling(X,y,a=rep(1,nrow(X)),d=rep(1,ncol(X)),P=diag(ncol(X)),m=rep(0,ncol(X)),l2=1,balanced=FALSE)ArgumentsX n-by-p matrix of n samples in p dimensionsy n-by-1vector of response values.Must be numeric vector for regression,factor with2levels for binary classification,or NULL for a one-class task.a n-by-1vector of sample weights(regression only)d p-by-1vector of feature weightsP p-by-p feature association penalty matrixm p-by-1vector of translation coefficientsl2coefficient for the L2-norm penaltybalanced boolean specifying whether the balanced model is being trained(binary classi-fication only)(default:FALSE)DetailsThe cyclic coordinate descent updates the model weight w k using a soft threshold operator S(·,λ1d k) that clips the value of the weight to zero,whenever the absolute value of thefirst argument falls be-lowλ1d k.From here,it is straightforward to compute the smallest value ofλ1,such that all weights are clipped to zero.ValueThe largest meaningful value of the L1parameter(i.e.,the smallest value that yields a model with all zero weights)Indexadj2lapl,2adj2nlapl,2,2,3gelnet,3,6–10gelnet.cv,5gelnet.ker,6gelnet.lin.obj,4,8gelnet.logreg.obj,4,9gelnet.oneclass.obj,4,10L1.ceiling,1112。
I. Overview

In the Java programming language, "general" is a common term, usually used to describe general-purpose concepts or generalized features.
In Java, "general" can refer to several different things, including generics, general (ordinary) classes, and general objects.
This article examines what "general" means in Java and the related concepts from these different angles.

II. Generics

1. The concept of generics. Generics are an important feature of the Java language. They let us define a type whose type parameters are made concrete at the point of use, so that the same code can operate generically on data of different types.
Using generics makes code more flexible and reusable, and also improves type safety and readability.

2. Generics syntax. In Java, generics are written with angle brackets <>, which enclose the generic type parameters.
For example, `List<String>` denotes a list whose elements are of type String.
In method and class declarations, generic type parameters can be declared so that the method or class can accept arguments of different types.

3. Applications of generics. Generics are used extensively in Java's collection classes, class libraries, and frameworks; they allow data of different types to be managed and processed in a uniform way.
The introduction of generics greatly improved the flexibility and type safety of Java programming, making code more robust and reliable.
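To make the generics discussion concrete, here is a small self-contained sketch of a generic class and a generic method. The names Box and firstOrDefault are illustrative choices, not taken from the article.

import java.util.List;

// A generic class: T is fixed when a Box instance is created.
class Box<T> {
    private final T value;

    Box(T value) { this.value = value; }

    T get() { return value; }
}

public class GenericsDemo {

    // A generic method: E is inferred from the arguments at each call site.
    static <E> E firstOrDefault(List<E> list, E fallback) {
        return list.isEmpty() ? fallback : list.get(0);
    }

    public static void main(String[] args) {
        Box<String> greeting = new Box<>("hello");   // T = String
        Box<Integer> answer = new Box<>(42);         // T = Integer
        System.out.println(greeting.get() + " " + answer.get());

        // The compiler checks the element types, so no casts are needed.
        System.out.println(firstOrDefault(List.of("a", "b"), "none"));
        System.out.println(firstOrDefault(List.<Integer>of(), -1));
    }
}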
III. General classes

1. The concept of a general class. A general class is an ordinary, non-generic class used to describe a concrete entity, object, or data structure.
General classes are the most common kind of class in Java programming; they can contain fields, methods, constructors, and other members that express concrete implementation logic or data structures.

2. Characteristics of general classes. General classes can exhibit the usual object-oriented characteristics, including inheritance, encapsulation, and polymorphism.
By defining general classes, we can describe and operate on concrete objects or data structures, and in this way implement the program's functional logic.

3. Instances of general classes. In Java programming we routinely create general classes of many different kinds and use them to implement the concrete functionality of a program.
The design and implementation of general classes is an important part of Java development; good class design improves the maintainability and extensibility of the code.

IV. General objects

1. The concept of a general object. A general object is an ordinary object of no particular specialized kind; it is used to represent a real-world entity or piece of data.
Boolean Matching Using Generalized Reed-Muller FormsIn this paper we present a new method for Boolean matching of completely specified Boolean functions. The canonical Generalized Reed-Muller forms are used as a powerful analysis tool. In-put permutation, input and output negation for matching are handled simultaneously. To reduce the search space for input correspondence, we have developed a method that can detect symme-tries of any number of inputs simultaneously. Experiments on MCNC benchmark circuits are very encouraging.1 Introductionmented by any of the library cells, perhaps with inverters on some of the input or output lines. In logic verification, descriptions of the logic from different stages of the design process are com-pared to check if they represent the same function. Conclusive answer to either problem requires extensive computation. This is due to the fact that, in most cases, the input variable correspon-dence is not known in advance. In recent years, Boolean matching has been proposed, where net-works are converted to their Boolean function representations and matching is decided by the equivalence check on the appropriate functions. There are several approaches to expedite the decision of equivalence. Signatures of the functions and of the individual variables are widely uti-lized for this purpose [2], [3], [6], [7], [9], [10]. Symmetry property among variables is another important characteristic [8], [12].In this paper, we propose a uniform and efficient method for solving the problem of Boolean matching. Our method is based on the Generalized Reed-Muller(GRM) representations of Bool-ean functions. For fixed polarities of all variables, GRM forms are canonical representations of Boolean functions. Therefore, the number of cubes with different lengths for the function and for each variable can be used as meaningful signatures. A certain kind of Binary Decision Diagram [5], [11] to represent GRMs is used in our method. With this data structure, all our operations are efficiently carried out in the Berkeley SIS BDD[1] package without any extra implementation.In the remainder of this paper, we will discuss only the single-output functions. Multi-output functions are handled by treating each output function independently. For the purpose of technol-ogy mapping, the majority of library cells are single output functions.2 Problem Formulationwo n -input Bool-ean functions are equivalent if one can be transformed into the other by one or more of the follow-ing transformations:(P1) input permutations;(P2) input negation, also known as phase assignment of input variables; and(P3) output negation or phase assignment of the output.Functions are np-equivalent under P1 and P2.npn-equivalent functions allow all three transformations.The problem addressed in this paper is stated as follows. Given two completely specified Boolean functions f and g which have the same number of inputs, we ask if f and g are npn -equiv-alent, and if they are, we wish to know the transformation.3 Definitions and Terminology|f|denotes the number of the on-set minterms of f . The length of a cube p , denoted |p|, is the number of literals in the cube.S(p)denotes the set of variables in a cube p .A variable x iis . Otherwise, x i is unbalanced .A function f is neutral ,if |f| = 2n-1.f is odd , if |f| is an odd integer. Otherwise, it is even.A Boolean difference of f with respect to a variable x i , denoted is defined as . It can be computed from the formula . 
The defi-nition of Boolean difference with respect to an arbitrary cube p = t i t j ...t k , is recursively defined as:.The following properties of the Boolean difference operator follow directly from the defini-tion. (a) (b). These properties imply that, for two arbitrary cubes p 1 and p 2,12n ,,,()f x i f x i=f x i B f x 1…x i …x n ,,,,()f x 1…x i …x n ,,,,()⊕f x i B f x i f x i ⊕=f p B…f t i B ()t j B ()…()t k B =f x iB f x i B =f x i x j B f x j x i B =, if S(p 1) = S(p 2).Using the Shannon expansion a function can be expressed as , or equivalently as . By applying the identity , we can derive(c) or (d).Each equation has two terms, one contains the literal t i and the other does not. We will call these pole-branch and dc-branch , respectively. The process of XORing the two cofactors is referred to as folding.With each n -input function f we associate a binary n -dimensional polarity vector . An entry of the vector is 0(1) if the corresponding variable in GRM form is in the negative (positive) polarity .For each variable x i in f , the major pole (M-pole) is one if and is zero if .The minor pole (m-pole) is the one corresponding to the smaller of the two. W e will call the polar-ity vector M-pole (m-pole) with respect to f , denote M (m), when each variable is assigned the M-pole (m-pole). Note that, for balanced variables the M-pole/m-pole can not be decided. The M-pole/m-pole always exist for odd functions, since every variable is unbalanced.4 Generalized Reed-Muller Forms4.1 Preliminarieseither positive or negative polarity in all cubes.We say that matching condition or equivalence of GRM forms is fulfilled, if the variables in two GRMs can be matched such that all the cubes match.The key question in our method is how to select each variable’s polarity before the two func-tions are matched. There are 2n possible combinations of polarities for n variables. Any Boolean function can be represented in 2n GRM forms. When there is no confusion in the context, we will use f V to represent the GRM form of a function f under the polarity vector V . Note that the number of cubes for a function varies with different polarity vectors. The selection of polarities will deter-mine the GRM forms that will be used for matching. To generate compatible GRMs, a consistent rule should be applied in the assignment of polarities for all variables. For unbalanced variables,f p 1B f p 2B =fx i f x i x i f x i+=f x i f x i x i f x i ⊕=x i 1x i ⊕=f x i f x i B f x i ⊕=f x i f x i B f x i ⊕=f x i f x i >f x i f x i <we use M-pole (or m-pole) for every variable to generate the GRM forms for f and g.To determine the equivalence of two functions, both functions will be transformed to their canonical GRM forms. A set of signatures, discussed later, will then be used to indicate any dis-crepancies. If all signatures match, then we can compare the actual functions in the GRM forms. At this stage, the concern is on matching of variables and cubes. After the functions are matched, the phase assignment of input variables can be decided with the comparison of polarities between the corresponding variables of the two functions. Different polarity between the corresponding variables means an inverter is needed to bring them to a common phase.Note that, in our method, we do not have to consider the input negation as a separate task and perform additional computations. 
The input negation, similarly the output negation, are deter-mined as a side effect of the matching condition.4.2 Functional decision diagramFunctional Decision Diagram(FDD) [5]. It can be derived efficiently [5], [11] and the size is, in general, smaller than that of the conventional ROBDD. It is a binary acyclic graph in which nodes are labeled0 or1 and each non-terminal node is labeled with a variable. The two edges for each non-terminal node have attribute0 or1. Order in which the variables appear along each path is fixed and the graph has no isomorphic sub-graphs. The root of the graph represents the function. A polarity vector is maintained with the FDD. For each non-terminal node labeled x i, the edge corresponding to the polarity of x i is the pole-branch and indicates that the corresponding literal appears in the cube. The edge with an attribute opposite to the polarity of x i is the dc-branch and indicates a missing literal, i.e.x i does not exist in the cube. Each path which starts from the root and terminates at the terminal one node represents a set of cubes in the GRM form of f. Any missing node, corresponding to the variable x j, in the path represents two cubes in the GRM. One cube contains x j with the appropriate polar-ity and the other cube does not have x j. Therefore, a path with k non-terminal nodes stands for a set of2n-k cubes in the GRMRefitem 11.An important operation in our Boolean matching method is the equivalence checking of twoGRM forms. It is executed on FDD similarly to the equivalence checking of two functions in ROBDD forms. Assume the variable orderings are matched in the two FDDs. Starting from the root, the equivalence check is recursively called at each branch of a node and terminates at the leaf nodes of the two FDDs. At each node, we check first if the variables corresponding to the nodes in the two FDDs are the same. Then the polarity of the presently processed variable x i is retrieved from the polarity vectors for each FDD and the dc and pole branches are identified. Then we check the equivalence of the corresponding dc and pole branches with recursive calls. Note that all operations and FDD representations can reside in an ROBDD package.4.3 Prime cubes in the GRM formsprime [4] in . We observe that every variable in a prime cube can assume either polarity without violating the definition of prime cubes. In [4], Csanky et.al. have proved that all the prime cubes occur in every GRM form of f , i.e. all prime cubes are essential in the GRM forms. Polarities of the variables in prime cubes follow that of the polarity vector of the residing GRM form.The fact that all prime cubes are essential makes the set of prime cubes very unique in identi-fying a function.The detection of the prime cubes is very straight forward. Csanky et.al.[4] proved that p is a prime if, and only if p is the only cube that contains all of S(p). In other words, all the cubes in any GRM form with maximum cardinality are primes. These might not be all the primes. In the func-tion ,x 2x 3and x 3x 4 are both primes,x 1 is also a prime but not one of the largest cardinality. Assume that p is a prime cube, then any cube p’ that satisfies is not a prime. Ignore all the longest primes and all the cubes that are composed only of subsets of their literals. If there is any cube left, again, we will look for the longest cubes.This process will continue until all the cubes are accounted for.4.4 Theoremspoles exist. 
The proofs are omitted due to space limitations.p 1f x 1x 2x 3x 2x 3x 4⊕⊕⊕=S p ′()S p ()⊂Theorem 1Two functions f and g are np -equivalent if, and only if, their GRM forms under M-pole (m-pole) are equivalent.Theorem 2Let f be a Boolean function and f be its complement. Then for any polarity vector V , we have . Let M and m be the M-pole and m-pole of f , respectively. Then the m-pole vector for f is M and the M-pole vector for f is m , i.e.the roles of the major pole and minor pole vectors are reversed in the complement.5 The Signatures for Boolean Matching and Symmetry Detection5.1 Signatures from on-set weightROBDD. For each variable x i , the two weights and are called positive cofactor weight (pcw) and negative cofactor weight(ncw), respectively.On the functional level, there are two types of signatures from this source. The first one,func-tional weight(fw), is the value |f|. The second one,weight distribution vector(wd), is the set of val-ues indicating how many different pcw and ncw pairs there are in the function. For the complement function, we need to compute the functional weight for complement(fw c) and weight distribution vector for complement(wdc), in case if the output negation is possible.fwc can be computed from the original fw , since the on-set of f is the off-set of f and vise versa .On the variable level, the (ncw, pcw) pair is the signature for each variable.Theorem 3Let f and g be np -equivalent and assume x i and y j are the matching variables from f and g , respectively. Then f and g have the same pairs of numbers for pcw and ncw .5.2 Signatures from GRM formdescribed below require O(kn) time complexity, where k is the number of nodes in the FDD and n is the number of variables.5.2.1 Distributions of cubes with different lengthsof variable inclusion count, VIC = (a ij ),where a ij is the number of cubes of length i that contain variable x j . At the same time we computef V f V 1⊕=f x i f x iincrementally the number of cubes of each length for the entire function. At the functional level,this becomes an n element vector FC , where each entry i contains the number of cubes in the GRM of length i . Note that if the cube 1 is in the GRM form we need to store the information in a separate location.The second array on the functional level is FVC computed from the VIC by summing up entries of each column, so each entry of FVC is the number of cubes containing variable x i .5.2.2 Distributions of cubes with related variablesn by n symmetric matrix, the incidence matrix, INC = (a ij ), where a ij is the number of cubes containing both variables x i and x j . The diagonal entry a ii is 0 if single literal cube x i is not present. Else it is1. This is a signature set at the variable level.A functional level signature FINC is also generated from INC . We compute an n element array by summing up each row (or column), except the diagonal entry . Each entry in FINC repre-sents the total frequency of occurrences of each variable.5.2.3 Prime cubesj represents the cubes of various lengths that contain x j , for each variable x j . The number of prime cubes, that contains x i , is the last nonzero number in the column. This value is saved separately as an array PCV of all variables. On the functional level,PC is the total number of prime cubes. This is easily derivable from PCV .There are two more matrices on the variable level, they are the versions of VIC and INC cal-culated only on prime cubes. 
They are: (1) A n by n matrix,PCvic = (a ij ), where a ij is the number of prime cubes of length i that contain x j .(2) A n by n matrix PCinc = (a ij ), where a ij is the num-ber of prime cubes that contains both variables x i and x j .6 Symmetries and Linear Variablesi j .Equivalence between any two of the four cofactors, with the choice of negating one of them, form 12 different symmetry relations (6 from both positive, 6 from negating one of the two). Theoreti-f x i x j f x i x j f x i x j f x i x j ,,,cally, any one or more of the 12 cases can indicate certain relationship between the two variables.We can use some of them to partition the input variable set into different equivalence classes.Checking all the symmetries can be time consuming. We have discovered that four among them are very closely related and can be verified simultaneously in the GRM forms[11]. These four types of symmetries form transitive relations and are very useful for the purpose of matching.6.1 Positive symmetry6.1.1 The nonequivalence-symmetryNE -symmetry) in variables x i and x j ,denoted as x i NE x j or {x i , x j },if f remains invariant when the two variables are interchanged, or equivalently, if .Note that {x i , x j } is the same as {x ,x } in terms of the definition.To detect the NE -symmetry in the GRM form, note that if, and only if. When the polarities of x i and x j are the same,x i NE x j can be detected in the GRM [12].6.1.2 The equivalent-symmetryE -symmetry) with respect to x i and x j . It is denoted x i E x j , or {x i ,x }({x , x j }).To detect E -symmetry in the GRM form, note that if, and only if. When the polarities of x i and x j are different,x i E x j can be detected in the GRM [12].Theorem 4[8] (Transitivity) If x i E x j and x j E x k , then x i NE x k .6.1.3 Mixed symmetriesE -symmetry tells us that NE and E symmetries are related. Using GRM forms, we can detect both types of symmetry by applying the same procedure. The only dif-ference is in the polarity combinations of the two variables. Therefore, we can group variables with the two types of symmetries together, i.e. If {x i ,x } and {x j ,x }, then {x i ,x , x k }({x , x j ,x }) is a positive symmetric set of three variables.Theorem 5If x i NE x j and x i E x j , then x i and x j are both balanced variables.This theorem sets up a necessary condition for two variables to be both E and NE symmetric.f x i x j f x i x j =f x i x j f x i x j =f x i x j f x i x j ⊕f x i x j f x i x j ⊕=f x i x j f x i x jfx i x j f x i x j =f x i x j f x i x j ⊕f x i x j f x i x j ⊕=Theorem 6Suppose both x i and x j are unbalanced and both variables have M-pole (m-pole)in the polarity vector V . Then f V will show the symmetry, if, and only if x i NE x j or x i E x j .This theorem makes the detection of positive symmetries uniformly span across unbalanced variables. M-pole GRM is enough to conclude any positive symmetry among multiple variables.We do not have to check the NE -symmetry and E -symmetry separately for each pair of variables as the conventional method requires.Theorem 7Any positive symmetry occurs between x i and x j in f if, and only if, they are sym-metric in the complement f .6.1.4 Total symmetry totally symmetric if every pair of variables in the function is positive symmet-ric. This implies that every pair of variables will be symmetric in a GRM form. For functions with M-pole vector, the following theorem makes checking for total symmetry very simple.Theorem 8Suppose a totally symmetric function f has M-pole vector M . 
Then in f M , for each i from 1 to n ,f M either contains no cube of length i or it contains all cubes of length i .To check for total symmetry, we only need to verify, for each i from 1to n,if f M contains 0 orcubes of length i , where is the combination of n choose i .6.2 Negative symmetry 6.2.1 The skew-nonequivalence-symmetryNE -symmetric) with respect to x i and x j . It is denoted by x i !NE x j .To detect skew-NE -symmetry in the GRM form, note that if, and only if. When the polarities of x i and x j are the same,x i !NE x j can be detected in the GRM [12].The only difference in the GRM form between NE -symmetry and skew-NE -symmetry is the extra term 1in the above discussion. This term 1is a single literal cube x i or x j in the whole func-tion. The detection can be done similarly to the NE -symmetry.Theorem 9(Transitivity) Any two of the conditions x i !NE x j ,x j !NE x k , and x i NE x k impliesC i n C i n x i x jf x i x j fx i x j f x i x j =f x i x j f x i x j ⊕f x i x j f x i x j 1⊕⊕=the third.6.2.2 The skew-equivalence-symmetryE -symmetry) with respect to x i and x j . It is denoted by x i !E x j .To detect skew-E -symmetry in the GRM form, note that if, and only if. When the polarities of x i and x j are different,x i !E x j can be detected in the GRM.The only difference in the GRM form between E -symmetry and skew-E -symmetry is the extra term 1. This term 1is a single literal cube x i or x j in the whole function.Theorem 10(Transitivity) Any two of the conditions x i !E x j ,x j !E x k , and x i NE x k implies the third.6.2.3 Mixed symmetriesi j and x i !E x j both hold, then f is a neutral function.This theorem sets up a necessary condition for two variables to hold both !E and !NE symme-tries. Only the variables in a neutral function need to be checked for both symmetries.Theorem 12Any two of the conditions x i !E x j ,x j !NE x k , and x i E x k implies the third.The following theorem is needed for matching in the application of technology mapping where negation of the output is allowed and has to be managed.Theorem 13Any negative symmetry occurs between x i and x j in f if, and only if, it occurs in the complement f .6.3 Checking all symmetry types among variable pairssame, i.e. point to the same subFDD. One branch is the one with x i and without x j , i.e. simi-larly to taking cofactor on the ROBDD, where t i stands for the pole -branch of x i and dc stands for the dc -branch of x j . The other is , where dc stands for the dc -branch of x i and t j stands for the pole -branch of x j . Negative symmetry is checked after we add(XOR) a 1 to any one of the two branches discussed above.f x i x j f x i x j=fx i x j f x i x j =f x i x j f x i x j ⊕f x i x j f x i x j 1⊕⊕=f t i dc ,f dc t j,To check for symmetries in the GRM form, we first partition the variables by their signatures and then check symmetries only on certain candidate groups of variables.The number of GRMs needed for symmetry detection depends on the polarity vectors we choose. For checking NE and skew-NE symmetries, the polarities of the two variables need to be the same in the GRM, i.e. both 1 or both 0. For checking E and skew-E symmetries, the polarities of the two variables need to be different in the GRM, i.e.1 and 0, or 0 and 1. There are four possi-ble combinations of polarities between any two variables ,namely,00, 01, 10, 11.We need one combination from 00 and 11 and a second combination from 01 and 10. 
It can be shown that any n vectors, where the i th and (i+1)th vectors differ only in the i th entry, is sufficient. These n polar-ity vectors contain, between any two variables, three out of four of the desired combinations[12]. =1 and the function can be expressedas or , where the function g is independent of x i . Variables that satisfy this condition are called linear in f . In ,x 2 is a linear variable.Linear variables are very easy to detect in any GRM form, since the variable can only have one cube of length one in any GRM form. They also have strong properties. First of all, linear variables must be balanced. Secondly, linear variables are all NE -symmetric and E -symmetric to each other in the function. Hence, once the set of linear variables is detected, it does not have to be checked for any type of symmetry. The third property of the linear variables is that they are prime cubes by definition.A linear function is of the form , where c 0 =0 or 1. A linear function is always neutral and all dependent variables in a linear function are balanced . These two properties make linear functions excellent choice for breaking the balanced variables while searching for a unique GRM forms.7 Boolean Matching Procedurenpn -equivalent classes of functions are iden-tified with GRM forms. For Boolean matching, our goal is to (1) differentiate every variable in ax if x if x iB f x i g ⊕=f x i g ⊕=f x 2x 1x 3⊕=f c 0x 1…x n ⊕⊕⊕=function in a unique way and (2) apply (1) to determine whether two Boolean functions are npn-equivalent. If a unique GRM can be derived, then the signatures obtained from the GRM can be used for (1). V ariables with identical signatures will be further checked for any symmetry. If all pairs of variables with identical signatures are symmetric in any one of the four symmetry types, no further identification on the variables is needed. (2) can be accomplished on the GRMs of the two functions with the variables ordered in the same way.7.1 Deciding polarities for all variablesfollowing. The M-pole is the choice for each unbalanced variable. If all variables have M-poles, then the function can be folded and a unique GRM is obtained. If there exist balanced variables and unbalanced variables, then the unbalanced variables will be folded first. The function is now an XOR sum of cubes of mixed polarities. Counting the occurrences of x i and x in all cubes can still show the weight unbalance or balance of the remaining variables. Note that the polarity vec-tor obtained with this process is still consistent, as long as the rule is applied to every variable. The function can be folded with respect to the variables whose polarities were decided to obtain a unique GRM. This process is repeated if a function still has balanced variables. Now, either (1) polarities of all the variables have been decided or (2) a set of variables, with nondecreasing size, remain balanced through out this process.7.2 Adding linear functionanced as described above. To break the balance, a linear function that contains all and only the balanced variables is added to the function. The new function is used to determine the polarities of the balanced variables.7.3 Additional GRMsThe first case is for the output negation. If a unique GRM has been obtained, then a new GRM will be determined with the polarity of every variable reversed. This is based on Theorem 2. The signatures should be generated from both GRMs and they are compared during the Booleanmatching process to decide which output phase should be used. 
Note that all the symmetry condi-tions remains the same, because of Theorem 7 and Theorem 13.The second case which requires additional GRMs is for more symmetry checking. This is when there are variables that can not be differentiated or we need to determine all four symmetry types for all pairs of variables. The maximum number of GRMs is n . Note that each new GRM can be incrementally computed from the original GRM.The last case is when a unique GRM can not be obtained. The problem is with the balanced variables. If none of the aforementioned methods can break the balance, then some exhaustive search is needed. Instead of exhaustive permutation among subset of variables with identical sig-natures, we can derive a minimum set of GRMs that can still manage the matching of npn -equiv-alent functions. A set of GRMs can be derived as follows. Adding a single literal cube to the function f , i.e. is created for each balanced variables x i . After obtaining the GRM for g i , adding x i back to g i will derive f . In technology mapping, for hard-to-match functions, the set of GRMs and their signatures are computed beforehand. A function can match the library cell if it can match any one of the GRMs.In the worst case, all variables are balanced through out the process and we will need n GRMs. Another n GRMs are needed for output negation and 2n is the upper bound in the number of GRMs needed for Boolean matching of npn -equivalent functions. However, this is a pessimis-tic upper bound since balanced variables are likely to form some type of symmetries among them-selves.8 Experimental Resultscases. The test was run on a DEC5000. Each MCNC benchmark was treated as a set of single out-put functions and tested separately. Our intention was to differentiate all variables in the func-tions. For logic verification, this is certainly not necessary , as long as every variable can be differentiated in one of the output functions, it would have sufficient information to order the vari-ables for the entire circuit and the rest can be done on the reordered ROBDDs of the source andg i f x i ⊕=target circuits. In the cases we tested, most of the variables are differentiated in few of the output functions. As stated earlier, we do compute functional level signatures for output matching.The program terminates when all variables are differentiated in the cofactor weight or in a unique GRM with the detection of symmetries. Additional GRMs are generated for symmetry check if some variables are not yet differentiated.Table 1 lists the MCNC benchmark cases. Column#I and#O stand for the number of primary input and primary output, respectively.#h is the number of output functions that contain non-dif-ferentiable variables. The column time is the average time per output function for each bench-mark. Note that the vest majority of the output functions have unique GRM. The rest of the functions with all variables differentiated have up to four GRMs.For the benchmarks with hard output functions, we have also investigated all variables for the purpose of logic verification. Table 2, column#hi, shows the sizes of each subset of variables that are not differentiated in any output function. Multiple subsets of the same size are shown with the number of sets outside the parentheses.9 Conclusionfunctions are used as tools for matching under input permutation, input negation and output nega-tion. We also incorporate signatures and symmetries for Boolean matching. 
The signatures obtained from the GRM forms are also used in the symmetry detection. We have shown that all four symmetry types between every pair of variables can be found by checking at most n GRMs. Total symmetry of a function can be checked with a simple arithmetic computation. With our method, most of the npn-equivalent classes need only two GRMs for the purpose of Boolean matching.

References

[1] R.E. Bryant, "Graph-Based Algorithms for Boolean Function Manipulation", IEEE Trans. Computers, vol. C-35, pp. 677-691, Aug. 1986.

[2] J.B. Burch and D.E. Long, "Efficient Boolean Function Matching", Proc. Intl. Conference.