arXiv:math.PR/0210470 v2 14 May 2003

Linear Phase Transition in Random Linear Constraint Satisfaction Problems

David Gamarnik

October 31, 2006

Keywords: Random K-SAT, Satisfiability Threshold, Linear Programming, Random Graphs

Abstract

Our model is a generalized linear programming relaxation of a much studied random K-SAT problem. Specifically, a set of linear constraints C on K variables is fixed. From a pool of n variables, K variables are chosen uniformly at random and a constraint is chosen from C, also uniformly at random. This procedure is repeated m times independently. We are interested in whether the resulting linear programming problem is feasible. We prove that the feasibility property experiences a linear phase transition when n → ∞ and m = cn for a constant c. Namely, there exists a critical value c* such that, when c < c*, the problem is feasible or is asymptotically almost feasible, but when c > c*, the "distance" to feasibility is at least a positive constant independent of n. Our result is obtained using a combination of the powerful local weak convergence method developed in Aldous [Ald92], [Ald01], Aldous and Steele [AS03], Steele [Ste02], and martingale techniques. By exploiting linear programming duality, our theorem implies the following result in the context of sparse random graphs G(n,cn) on n nodes with cn edges, where the edges are equipped with randomly generated weights. Let M(n,c) denote the maximum weight matching in G(n,cn). We prove that when c is a constant and n → ∞, the limit lim_{n→∞} M(n,c)/n exists, with high probability. We further extend this result to maximum weight b-matchings, also in G(n,cn).

1 Introduction

The primary objective of the present paper is studying randomly generated linear programming

problems. We are interested in the scaling behavior of the corresponding objective value and in some phase transition properties, as the size of the problem diverges to infinity. Our random linear programming problems are generated in a specific way. In particular, our linear programs have a fixed number of variables per constraint, and the number of variables and constraints diverges to infinity in such a way that their ratio stays constant.

Our motivation for considering this specific class of random linear programs has several sources. The main motivation is the recent explosion of interest in random instances of boolean satisfiability (K-SAT) problems and the ensuing phase transition phenomenon. The main outstanding conjecture in this field states that the satisfiability property of the random K-SAT problem experiences a linear phase transition as a function of the ratio of the number of clauses to the number of variables. Our linear programming problem can be viewed as a generalized linear programming relaxation of the integer programming formulation of such a random K-SAT problem.

Tightly related to the K-SAT problem are the problems of maximum cardinality cuts, independent sets, matchings and other objects in sparse random graphs G(n,⌊cn⌋), which are graphs on n nodes and ⌊cn⌋ edges selected uniformly at random from all the possible edges, where c > 0 is some fixed constant. From here on we drop the annoying notation ⌊·⌋, assuming that cn is always an integer. It is easy to show that all of these objects scale linearly in n. It is conjectured that the size of each such object divided by n converges to a constant, independent of n. This convergence is established only for the case of maximum matchings, using direct methods [KS81], where the limit can be computed explicitly, but is open in the other cases.

The main result of this paper states that the objective value of the random linear programming problem we consider, when divided by the number of variables, converges with high probability (w.h.p.) to a certain limit. As a corollary we prove that a suitably defined distance to feasibility in the same random linear programming problem experiences a linear phase transition, just as conjectured for the random K-SAT problem. Furthermore, we show that, in a special case, the dual of this random linear programming problem is a linear programming relaxation of the maximum cardinality matching and, more generally, b-matching (defined later) problems in G(n,cn). We show that these relaxations are asymptotically tight as the number of nodes n diverges to infinity. As a corollary of our main result, we prove that the maximum cardinality b-matching, when divided by n, converges to a constant. These results hold even in the weighted version, where edges are equipped with randomly and independently generated non-negative weights.

Our proof technique is a combination of a very powerful local weak convergence method and martingale techniques. The local weak convergence method was developed in Aldous [Ald92], [Ald01], Aldous and Steele [AS03], Steele [Ste02]. The method was used by Aldous specifically for proving the ζ(2) conjecture for the random assignment problem. It was used in [Ald92] to prove that the expected minimum weight matching in a complete bipartite graph converges to a certain limit. Later Aldous proved [Ald01] that this limit is indeed ζ(2), as conjectured earlier by Mezard and Parisi [MP87]. Since then the local weak convergence method has been used for other problems (see [AS03] for a survey), and seems to be a very useful method for proving the existence of limits in problems like the ones we described; in some instances it also leads to actually computing the limits of interest. By analogy with the percolation literature, we call these problems existence and computation of scaling limits in large random combinatorial structures. Such questions, many of them open, abound in the percolation literature. For example, the existence of limits of crossing probabilities in critical percolation has been established in several percolation models, like triangular lattice percolation, using conformal invariance techniques [Sch01], [SW01], but is still open in the case of other lattices, like rectangular bond and site critical percolation; see Langlands et al. [LPSA94]. Whether local weak convergence is a useful technique for addressing these questions seems worth investigating.

To the best of our knowledge, our result is the first application of the local weak convergence method to establishing phase transitions in random combinatorial structures. In the following section we describe in detail the randomly generated combinatorial problems we mentioned above, describe the existing results in the literature and list some outstanding conjectures. In Section 3 we describe our model and state our main results. We also give a short summary of the proof steps. Sections 4, 6, 7 are devoted to the proof of our main result. Section 8 is devoted to the applications to maximum weight matchings and b-matchings in sparse random graphs. Section 9 is devoted to conclusions and some open problems.

2 Background: random K-SAT, sparse random graphs and scaling limits

2.1 Random K-SAT problem

A satisfiability (K-SAT) problem is a boolean constraint satisfaction problem with a special form. A collection of n variables x_1, x_2, ..., x_n with values in {0,1} is fixed. A boolean formula of the form

C_1 ∧ C_2 ∧ ··· ∧ C_m

is constructed, where each C_i is a disjunctive clause of the form x_{i_1} ∨ x̄_{i_2} ∨ x̄_{i_3} ∨ ··· ∨ x_{i_K}, where exactly K variables are taken from the pool x_1,...,x_n, some with negation, some without. The formula is defined to be satisfiable if an assignment of the variables x_i, i = 1,...,n to 0 or 1 can be constructed such that all the clauses take value 1. The K-SAT problem is one of the most famous combinatorial optimization problems; see [PS98]. It is well known that the satisfiability problem is solvable in polynomial time for K = 2, and is NP-complete for K ≥ 3.

Recently we have witnessed an explosion of interest in random instances of the K-SAT problem. This was motivated by computer science, artificial intelligence and statistical physics investigations, with phase transition phenomena becoming the focus of particular attention. A random instance of a K-SAT problem with m clauses and n variables is obtained by selecting each clause uniformly at random from the entire collection of 2^K (n choose K) possible clauses. The main outstanding conjecture in this field is the following.

Conjecture 1 For every K ≥ 2 there exists a constant c_K such that a random K-SAT instance with n variables and m = cn clauses is w.h.p. satisfiable when c < c_K and w.h.p. unsatisfiable when c > c_K.

Let N(n,m) denote the maximum number of simultaneously satisfiable clauses in a random instance with m clauses. It is proven in [CGHS03] that for K = 2 and every c > 1, there exists a constant α(c) > 0 such that N(n,cn) ≤ (c − α(c))n, w.h.p. The following conjecture from [CGHS03] then naturally extends Conjecture 1.

Conjecture 2 Assuming Conjecture 1 holds, the limits lim_{n→∞} N(n,cn)/n and lim_{n→∞} E[CUT(n,c)]/n exist, where CUT(n,c) denotes the maximum cardinality cut in G(n,cn).

The scaling limit of the form (1) is in fact proven for maximum cardinality matchings M(n,c) by Karp and Sipser [KS81]. The result was strengthened later by Aronson, Frieze and Pittel [APF98]. Karp and Sipser proved that, almost surely,

(2)  lim_{n→∞} M(n,c)/n = 1 − (γ_*(c) + γ^*(c) + γ_*(c)γ^*(c))/(2c),

where γ_*(c) is the smallest root of the equation x = c exp(−c exp(−x)) and γ^*(c) = c exp(−γ_*(c)). Their algorithmic method of proof is quite remarkable in its simplicity. We briefly describe the argument below and explain why, however, it does not apply to the case of weighted matchings. Suppose we are given a (non-random) graph G. Then the following algorithm finds a maximum matching (clearly there could be many maximum matchings): while the graph contains any leaves, pick any leaf of a tree and the corresponding edge. Mark the edge and delete the two nodes incident to the edge, together with all the other leaves that share the (deleted) parent with the selected leaf. Delete all the edges incident to these leaves and delete the edge between the parent and its own parent. Repeat while there are still leaves in the graph. When the graph has no more leaves, select a maximum size matching in the remaining graph. It is a fairly easy exercise to prove that this remaining matching plus the set of marked edges is a maximum matching. This fact is used to prove (2).
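To make the procedure concrete, here is a minimal sketch of the leaf-removal idea in Python (not from the paper; networkx, the parameter values, and the exact-matching fallback on the leafless core are our illustrative choices). Matching a leaf to its parent and then deleting both endpoints has the same effect on the matching size as the sibling-deletion phrasing above, since the siblings become isolated.

import random
import networkx as nx

def karp_sipser(G):
    """Greedy leaf removal: repeatedly match a degree-1 node to its neighbor."""
    G = G.copy()
    matching = set()
    while True:
        leaves = [v for v in G.nodes if G.degree(v) == 1]
        if not leaves:
            break
        v = random.choice(leaves)
        u = next(iter(G[v]))          # the unique neighbor (the "parent")
        matching.add((u, v))          # mark the pendant edge
        G.remove_nodes_from([u, v])   # delete both endpoints and all incident edges
    # No leaves remain: fall back to an exact maximum matching on the core.
    matching |= nx.max_weight_matching(G, maxcardinality=True)
    return matching

n, c = 1000, 0.7
G = nx.gnm_random_graph(n, int(c * n), seed=0)
print(len(karp_sipser(G)) / n)        # empirically close to the limit in (2)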

One notes, however, that when the edges of the graph are equipped with weights, the Karp-Sipser algorithm does not necessarily work anymore. Occasionally it might be better to include an edge between the parent of a leaf and the parent of the parent of the leaf and, as a result, not include the edge incident to the leaf. Therefore, the Karp-Sipser algorithm may produce a strictly suboptimal matching, and the result (2) does not hold for the weighted case. Moreover, it is not clear how to extend the Karp-Sipser heuristic to b-matchings. In this paper we prove convergence of the form (2) for maximum weight b-matchings. The proof uses the main result of the paper and linear programming duality, though we are not able to compute the limits. Naturally, our result applies to the non-weighted case of maximum cardinality b-matching. To the best of our knowledge this is a new result.

The case of maximum weight matching with random weights was treated by Aldous and Steele [AS03] for the case of a randomly generated tree on n nodes. That is, consider a tree selected uniformly at random from the set of all possible n^{n−2} labelled trees. A limit of the sort (2) is proven and computed using the local weak convergence method, when the edges of this tree are equipped with exponentially distributed random weights. The tree structure of the underlying graph helps the analysis very much. In our case, however, the random graph G(n,cn) contains a linear size non-tree "giant" component when c > 1/2 [JLR00], and the results of [AS03] are not applicable.

Yet another scaling limit question is the existence of limits for the probability of k-colorability of G(n,cn). A graph is defined to be k-colorable (k ≥ 2) if there exists a function mapping the vertices of G to colors 1,2,...,k such that no two nodes connected by an edge have the same color. The following conjecture, proposed by Erdos, can be found in Alon and Spencer [AS92].

Conjecture 3 For every positive integer k ≥ 2 there exists a critical value c*_k such that the graph G(n,cn) is w.h.p. k-colorable for c < c*_k and w.h.p. non-k-colorable for c > c*_k.

This conjecture is very similar in spirit to Conjecture 1. For a survey of existing results see a recent survey by Molloy [Mol01]. For related results see also [AM02a], [COMS03].

3 Model and the main results

There is a natural way to describe a K-SAT problem as an integer programming problem. The variables are x_i, i = 1,2,...,n, which take values in {0,1}. Each clause C_j is replaced by a linear constraint of the form

x_{i_1} + (1 − x_{i_2}) + x_{i_3} + ... ≥ 1,

where the term (1 − x) replaces x̄. For example, a clause C = x_3 ∨ x_7 ∨ x̄_2 ∨ x̄_4 in a 4-SAT problem is replaced by the constraint x_3 + x_7 + (1 − x_2) + (1 − x_4) ≥ 1. It is easy to check that an assignment of x_2, x_3, x_4, x_7 to 0 and 1 gives C value 1 if and only if the corresponding constraint is satisfied. Clearly, such constraints can be created for all the possible clauses. In the present paper we study the linear programming (LP) relaxation of this integer programming problem, where the restriction x_i ∈ {0,1} is replaced by the weaker restriction x_i ∈ [0,1]. Note that this relaxation by itself is not interesting, as the assignment x_i = 1/2 for all i = 1,2,...,n makes all of the linear constraints feasible. However, the problem becomes non-trivial when we generalize the types of constraints that can be generated on the variables x_i, as described in the following subsection.
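As a concrete aside, the clause-to-constraint conversion described above can be written as a small helper (our own illustrative code, not from the paper):

def clause_to_row(clause, n):
    """clause: list of nonzero ints, +i for x_i, -i for the negation of x_i.
    Returns (coeffs, rhs) so that the relaxed clause reads coeffs . x >= rhs."""
    coeffs = [0.0] * n
    rhs = 1.0
    for lit in clause:
        i = abs(lit) - 1
        if lit > 0:
            coeffs[i] += 1.0          # contributes x_i
        else:
            coeffs[i] -= 1.0          # contributes (1 - x_i)
            rhs -= 1.0                # move the constant 1 to the right side
    return coeffs, rhs

# Example: C = x3 v x7 v not-x2 v not-x4 with n = 7 gives
# x3 + x7 + (1 - x2) + (1 - x4) >= 1, i.e. -x2 + x3 - x4 + x7 >= -1.
print(clause_to_row([3, 7, -2, -4], 7))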

3.1 Random K-LSAT problem

Our setting is as follows. Consider a fixed collection of K variables y_1, y_2, ..., y_K which take values in some bounded interval B^1_x ≤ y_i ≤ B^2_x, and a fixed collection C of linear constraints on these variables:

Σ_{k=1}^K a_{rk} y_k ≤ b_r,  r = 1, 2, ..., |C|,

where the values a_{rk}, b_r are arbitrary fixed reals. The r-th constraint can also be written in vector form a_r y ≤ b_r, where a_r = (a_{r1},...,a_{rK}) and y = (y_1,...,y_K). We fix c > 0 and let m = cn, where n is a large integer. A random instance of a linear constraint satisfaction problem with n + m variables x_1,...,x_n, ψ_1,...,ψ_m and m constraints is constructed as follows. For each j = 1,2,...,m we perform the following operation independently. We first select K variables x_{i_1}, x_{i_2}, ..., x_{i_K} uniformly at random from x_i, i = 1,2,...,n. Whether the variables are selected with or without replacement turns out to be irrelevant to the results of this paper, as is the case for the random K-SAT problem. However, the order in which the variables are selected is relevant, since the constraints are not necessarily symmetric. Then we select 1 ≤ r ≤ |C|, also uniformly at random. We then generate a constraint

(3)  C_j:  Σ_{k=1}^K a_{rk} x_{i_k} ≤ b_r + ψ_j.

Here is an example of an instance with K = 3, n = 10, m = 4, |C| = 2. Say the first constraint C_1 is 2y_1 + 3y_2 − y_3 ≤ 5, and the second constraint C_2 is −y_1 + y_2 + 4y_3 ≤ 2. An example of an instance where the first three constraints are of type C_1 and the fourth is of type C_2 is

(2x_5 + 3x_4 − x_9 ≤ 5 + ψ_1) ∧ (2x_1 + 3x_3 − x_4 ≤ 5 + ψ_2) ∧
(2x_2 + 3x_1 − x_{10} ≤ 5 + ψ_3) ∧ (−x_5 + x_8 + 4x_7 ≤ 2 + ψ_4).

The central question is: what are the optimal values of x_i ∈ [B^1_x, B^2_x], ψ_j ≥ 0 which minimize the sum Σ_j ψ_j subject to the constraints C_j? That is, we consider the following linear programming problem:

(4)  Minimize Σ_{1≤j≤m} ψ_j, subject to: C_1, C_2, ..., C_m, x_i ∈ [B^1_x, B^2_x], ψ_j ≥ 0.

In words, we are seeking a solution x_i which is as close to satisfying the constraints Σ_{k=1}^K a_{rk} x_{i_k} ≤ b_r as possible. If the optimal value of this linear programming problem is zero, that is, ψ_j = 0 for all j, then all of these constraints can be satisfied. Naturally, the objective value of the linear program (4) is a random variable. We denote this random variable by LP(n,c). Note that the linear program (4) is always feasible, by making the ψ_j sufficiently large. In fact, in the optimal solution we clearly must have ψ_j = max(0, Σ_{k=1}^K a_{rk} x_{i_k} − b_r). We refer to the linear program (4) as a random linear constraint satisfaction (LSAT) problem, or random K-LSAT problem.
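As an illustration, the following sketch generates a random K-LSAT instance and solves (4) with an off-the-shelf LP solver (scipy; the pool C is the two-constraint example above, and all parameter values are arbitrary illustrative choices). Constraint (3) reads Σ_k a_{rk} x_{i_k} ≤ b_r + ψ_j, which we encode as Σ_k a_{rk} x_{i_k} − ψ_j ≤ b_r.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
K, n, c = 3, 200, 1.5
m = int(c * n)
B1, B2 = 0.0, 1.0                                 # bounds [B^1_x, B^2_x]
A_pool = np.array([[2.0, 3.0, -1.0],              # constraint pool C, |C| = 2
                   [-1.0, 1.0, 4.0]])
b_pool = np.array([5.0, 2.0])

# Variables are (x_1..x_n, psi_1..psi_m); build the m random constraints.
A_ub = np.zeros((m, n + m))
b_ub = np.zeros(m)
for j in range(m):
    idx = rng.choice(n, size=K, replace=False)    # K variables, chosen u.a.r.
    r = rng.integers(len(b_pool))                 # constraint type, chosen u.a.r.
    A_ub[j, idx] = A_pool[r]
    A_ub[j, n + j] = -1.0                         # the slack psi_j
    b_ub[j] = b_pool[r]

cost = np.concatenate([np.zeros(n), np.ones(m)])  # minimize sum of psi_j
bounds = [(B1, B2)] * n + [(0, None)] * m
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("LP(n,c)/n =", res.fun / n)                 # scaled distance to feasibility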

The following conditions on the set of constraints C will be used below.

• Condition A. For any constraint a_r y ≤ b_r, 1 ≤ r ≤ |C|, for any k ≤ K and for any value z ∈ [B^1_x, B^2_x], there exist values y_1,...,y_K ∈ [B^1_x, B^2_x] such that y_k = z and the constraint is satisfied.

• Condition B. There exist a positive integer l and a constant ν > 0 such that for any K-dimensional cube I of the form ∏_{1≤k≤K}[i_k/l, (i_k+1)/l], with B^1_x ≤ i_k/l ≤ (i_k+1)/l ≤ B^2_x, there exists a constraint a_r y ≤ b_r in C with Σ_k a_{rk} y_k ≥ b_r + ν for every y ∈ I.

Our main results concern the following generalization of the linear program (4). The constraints are generated exactly as above, except that each constraint C_j is additionally shifted by an independent random variable W_j ∈ [−B_w, B_w], and the objective incorporates fixed weights w_x, w_ψ ≥ 0:

(6)  Minimize w_x Σ_{1≤i≤n} x_i + w_ψ Σ_{1≤j≤m} ψ_j, subject to: Σ_{k=1}^K a_{rk} x_{i_k} ≤ b_r + W_j + ψ_j, j = 1,...,m, x_i ∈ [B^1_x, B^2_x], ψ_j ≥ 0.

We denote the (random) optimal value of (6) by GLP(n,c). Note that in every optimal solution

(7)  ψ_j = max(0, Σ_{k=1}^K a_{rk} x_{i_k} − b_r − W_j).

Theorem 1 For every c ≥ 0, the limit

(8)  lim_{n→∞} E[GLP(n,c)]/n = f(c)

exists. Moreover, for every ε > 0, P{|GLP(n,c)/n − f(c)| > ε} → 0 as n → ∞.

Our first application of Theorem 1 is the following result. It establishes a linear phase transition property for the random K-LSAT problem. Recall that LP(n,c) is the optimal value of the linear programming problem (4).

Theorem 2 There exists a constant c*_K (possibly 0 or +∞) such that, w.h.p. as n → ∞,

(9)  lim_{n→∞} LP(n,c)/n = 0, for all c < c*_K,

(10)  lim inf_{n→∞} LP(n,c)/n > 0, for all c > c*_K.

Moreover, if Condition A holds, then c*_K > 0, and if Condition B holds, then c*_K < +∞.

In what sense does the theorem above establish a linear phase transition? It is conceivable that for a collection of constraints C the following situation occurs: there exist two constants c_1 > c_2 and two sequences n^{(1)}_t, n^{(2)}_t, t = 0,1,2,..., such that for c = c_1 the corresponding optimal values (4) of the random K-LSAT problem satisfy w.h.p. lim_t LP(n^{(1)}_t, c)/n^{(1)}_t = 0, but for c = c_2, lim inf_t LP(n^{(2)}_t, c)/n^{(2)}_t ≥ δ(c) > 0. In other words, the critical density c oscillates between different values. This is precisely the behavior that Conjectures 1 and 2 rule out for the random K-SAT problem. Our theorem states that such a thing is impossible for the random K-LSAT problem. There exists a linear function c*_K n such that, w.h.p., below this function the instance is very close to being feasible, but above this function the scaled "distance" (1/n) min Σ_j ψ_j to feasibility is at least a positive constant.

The statement of the theorem above does not fully match its analogue, Conjecture 1, since by using the auxiliary variables ψ_j we converted the feasibility problem into an optimality problem. Now consider the collection of constraints C_j where the ψ_j are set to zero, and ask whether the collection of constraints in (4) has a feasible solution. We suspect that this problem does experience a linear phase transition, but we do not have a proof at the present time.

Conjecture 4 Let c*_K be the value introduced in Theorem 2. Then, w.h.p. as n → ∞, the random K-LSAT problem with cn constraints is satisfiable if c < c*_K and unsatisfiable if c > c*_K.

In this paper we use the local weak convergence method to prove Theorem 1. While our approach is very similar to the one used in [Ald92], there are several distinctive features of our problem. In particular, we do not use an infinite tree construction and instead consider a sequence of finite depth trees with a corresponding sequence of probability measures. We then use the Martingale Convergence Theorem for the "projection" step. This simplifies the proofs significantly.

3.2 Maximum weighted b-matching

We return to the setting of Subsection 2.2. We have a sparse random graph G(n,cn), where c is a positive constant. The edges of this graph are equipped with random weights W_{i,j} which are selected independently from a common distribution P{W_{i,j} ≤ t} = w_edge(t), 0 ≤ t ≤ B_w < ∞, where [0, B_w] is the support of this distribution. Again let M_w(n,c,b) denote the maximum weight b-matching in G(n,cn), where b ≥ 1 is an integer.

Theorem 3 For every c > 0 the limit

(11)  lim_{n→∞} M_w(n,c,b)/n

exists, w.h.p.
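For reference, the b-matching LP relaxation that underlies this result (via the duality discussed in the introduction) can be sketched as follows; the formulation and all parameter values are our own illustrative choices, and integrality is not enforced, so the value is an upper bound on M_w rather than M_w itself.

import numpy as np
import networkx as nx
from scipy.optimize import linprog

def b_matching_lp(G, b):
    """LP relaxation: max sum_e w_e x_e  s.t.  sum_{e incident to v} x_e <= b,
    0 <= x_e <= 1.  Returns the LP optimum."""
    edges = list(G.edges(data="weight"))
    n_e = len(edges)
    A = np.zeros((G.number_of_nodes(), n_e))      # node-edge incidence matrix
    node_idx = {v: i for i, v in enumerate(G.nodes)}
    for j, (u, v, w) in enumerate(edges):
        A[node_idx[u], j] = 1.0
        A[node_idx[v], j] = 1.0
    cost = -np.array([w for _, _, w in edges])    # maximize via minimizing the negative
    res = linprog(cost, A_ub=A, b_ub=np.full(G.number_of_nodes(), float(b)),
                  bounds=[(0, 1)] * n_e, method="highs")
    return -res.fun

rng = np.random.default_rng(3)
G = nx.gnm_random_graph(200, 140, seed=3)         # G(n, cn) with n = 200, c = 0.7
nx.set_edge_attributes(G, {e: rng.uniform(0, 1) for e in G.edges}, "weight")
print(b_matching_lp(G, b=2) / 200)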

Our goal is to show that in fact the convergence lim_n E[GLP(n,c)]/n holds. Let λ(c) denote the lim inf of E[GLP(n,c)]/n and fix a subsequence n_i along which E[GLP(n_i,c)]/n_i converges to λ(c). Let X_1,...,X_{n_i}, Ψ_1,...,Ψ_{cn_i} ∈ [B^1_x, B^2_x]^{n_i} × [0,∞)^{cn_i} denote a (random) optimal assignment which achieves the optimal value GLP(n_i). For each n_i we pick a variable x_1 from the pool x_1,...,x_{n_i} (the actual index is irrelevant) and consider its depth-d neighborhood, appropriately defined, where d is some fixed constant. We then consider the optimal solution (X(n_i,d), Ψ(n_i,d)) restricted to this d-neighborhood. We consider the probability distribution P(d,n_i) which describes the joint probability distribution of the values (X_i, Ψ_j, W_j) for X_i, Ψ_j in the d-neighborhood, as well as the graph-theoretic structure of this neighborhood.

We show that, for each fixed d, the sequence of probability measures P(d,n_i) is tight in its corresponding probability space. As a result, there exists a subsequence of n_i along which the probability distribution P(d,n_i) converges to a limiting probability distribution P(d), for every fixed d. Moreover, we show that the subsequence can be selected in such a way that the resulting probability distributions are consistent. Namely, for every d < d′, the projection of P(d′) onto the space corresponding to depth d coincides with P(d). Using the limiting measures P(d), we then show that the expectation

(13)  E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j]

is smaller than λ(c) + ε when n and d are sufficiently large, where the sum runs over the constraints containing x_1. We sum the expectation above over all x_i and observe that each constraint belongs to the sum of exactly K variables x_i. Then the sum of these expectations is E[w_x Σ_{1≤i≤n} X_i + w_ψ Σ_{1≤j≤cn} Ψ_j], which is exactly the objective function. We use this to conclude that the expected value of the objective function is at most (λ(c) + ε)n.

4 Poisson trees and some preliminary results

We begin by showing that, in order to prove Theorem 1, it suffices to prove the existence of the limit (8) for the expected value of the optimal cost GLP(n,c). Indeed, note that given an instance of the linear program (6), if we change one of the constraints C_j to any other constraint from the pool C, change the value of W_j to any other value in [−B_w, B_w], and leave all the other constraints intact, then the optimal value GLP(n,c) changes by at most a constant. Using a corollary of Azuma's inequality (see Corollary 2.27 of [JLR00] for the statement and a proof), we obtain that P{|GLP(n,c)/n − E[GLP(n,c)/n]| > ε} converges to zero exponentially fast for any ε > 0. Then the convergence of lim_{n→∞} E[GLP(n,c)]/n implies that GLP(n,c)/n converges w.h.p. Thus, from now on we concentrate on proving the existence of the limit

(14)  lim_{n→∞} E[GLP(n,c)]/n.
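For concreteness, the corollary used in the preceding argument can be stated in the following bounded-differences form (a standard consequence of Azuma's inequality; the constant D is our illustrative name for the per-constraint change bound, not notation from the paper): if resampling any one of the m = cn independent constraint/weight pairs changes Z = GLP(n,c) by at most D, then

P{|Z − E[Z]| > εn} ≤ 2 exp(−(εn)²/(2mD²)) = 2 exp(−ε²n/(2cD²)),

which indeed vanishes exponentially fast in n for every fixed ε > 0.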

A random instance of the linear program (6) naturally leads to a sparse weighted K-hypergraph structure on the node set x_1,...,x_n. Specifically, for each constraint C_j, 1 ≤ j ≤ cn, we create a K-edge (x_{i_1},...,x_{i_K}, r, w_j) if C_j contains exactly the variables x_{i_1},...,x_{i_K} in this order, the constraint is of type r, 1 ≤ r ≤ |C|, and the corresponding random variable W_j has value w_j. This set of edges completely specifies the random instance. We first study the distribution of the number of edges containing a given fixed node x ∈ {x_1,...,x_n}.

Lemma 4 Given a node x from the pool x_1,...,x_n, the number of edges (constraints C_j) containing x is distributed as Pois(cK), in the limit as n → ∞.

Proof: The probability that a given edge does not contain x is 1 − K/n if variables are taken without replacement, and ((n−1)/n)^K = 1 − K/n + o(1/n) if taken with replacement. The probability that exactly s edges contain x is then asymptotically (cn choose s)(K/n)^s (1 − K/n)^{cn−s}. When s is a constant and n → ∞, this converges to ((cK)^s/s!) e^{−cK}. □

Lemma 5 For any constants r, r′ ≥ 1, the expected number of cycles of length r (that is, sequences of distinct constraints C_1,...,C_r such that C_j and C_{j+1} share a variable, with C_{r+1} identified with C_1) is at most (K²c)^r, and the expected number of variables within distance at most r′ of some cycle of length r is at most r c^{r+r′} K^{2r+r′+1}, in the limit as n → ∞.

Proof: Fix any r variables, say x_1,...,x_r, and let us compute the expected number of cycles C_1,...,C_r such that C_j and C_{j+1} share the variable x_j, j = 1,...,r (where C_{r+1} is identified with C_1). Constraint C_j contains the variables x_{j−1}, x_j (with x_0 = x_r). There are at most n^{K−2}/(K−2)! choices for the other variables in C_j. For each such choice, the probability that a constraint consisting of exactly this selection of variables is present in the random instance is at most (cn)/(n^K/K!) = K!c/n^{K−1}. Finally, there are at most n^r ways to select (with order) r variables x_1,...,x_r. Combining, we obtain that, asymptotically, the expected number of length-r cycles is at most

n^r (n^{K−2}/(K−2)!)^r (K!c/n^{K−1})^r < (K²c)^r.

By Lemma 4, for any fixed variable x the expected number of variables at distance at most r′ from x is asymptotically at most (cK)^{r′}. Each size-r cycle contains at most rK variables. Therefore, the total expected number of variables at distance at most r′ from some size-r cycle is at most (K²c)^r (rK)(cK)^{r′} = r c^{r+r′} K^{2r+r′+1}. □

Applying Lemmas 4, 5 we obtain the following proposition, which is well known in the context of random graphs [JLR00].

Proposition 1 Given a variable x, the number of constraints in B(x,1,n) is distributed as Pois(cK), in the limit as n → ∞. Also, B(x,d,n) is distributed as a depth-d Poisson tree. That is, if ∂B(x,d,n) contains r constraints and r(K−1) variables, then the number of constraints in ∂B(x,d+1,n) is distributed as Pois(crK(K−1)). Moreover, these constraints do not share variables other than the variables in ∂B(x,d,n). In short, the constraints in B(x,d,n) are distributed as the first d steps of a Galton-Watson (branching) process with outdegree distribution Pois(cK), in the limit as n → ∞.
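A quick way to sanity-check Lemma 4 and Proposition 1 is by simulation; the sketch below (all parameters arbitrary illustrative choices) counts how many randomly generated constraints contain a fixed variable and compares the empirical mean with cK:

import numpy as np

rng = np.random.default_rng(1)
K, n, c, trials = 3, 1000, 1.5, 200
m = int(c * n)
counts = [
    # count the constraints whose K randomly chosen variables include x_0
    sum(0 in rng.choice(n, size=K, replace=False) for _ in range(m))
    for _ in range(trials)
]
print("empirical mean:", np.mean(counts), "vs cK =", c * K)   # both close to 4.5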

We finish this section by showing how Theorem 1 implies Theorem 2. We noted before that the linear program (4) is a special case of the linear program (6), via setting W_j = 0 with probability one and setting w_x = 0, w_ψ = 1. Assuming the limit f(c) defined in Theorem 1 exists, let us first show that it is a non-decreasing function of c. This is best seen by the following coupling argument. For any c_1 < c_2, a random instance with c_1 n constraints can be obtained from a random instance with c_2 n constraints by deleting (c_2 − c_1)n constraints at random; deleting constraints can only decrease the optimal value, and therefore f(c_1) ≤ f(c_2).

Let c*_K ≡ sup{c ≥ 0 : f(c) = 0}. Clearly the set in this definition includes c = 0, and therefore is non-empty. The definition of c*_K of course includes the possibilities c*_K = 0, c*_K = ∞, and clearly satisfies (9) and (10).

The proof of the second part of Theorem 2 follows from the following proposition.

Proposition 2 Under Condition A, for any c < 1/K², a random K-LSAT instance has optimal value E[LP(n,c)] = O(1). Under Condition B, there exists c̄_K > 0 such that for all c > c̄_K there exists δ(c) > 0 for which E[LP(n,c)] ≥ δ(c)n.

Proof: Suppose Condition A holds. The technique we use is essentially the "set one variable at a time" algorithm, used in several papers on the random K-SAT problem to establish lower bounds on critical values of c; see, for example, [AS00]. Set the variable x_1 to any value in [B^1_x, B^2_x] (the actual value is irrelevant). For every constraint containing x_1 (if any exist), set the other variables in this constraint so that these constraints are satisfied with the corresponding values of ψ equal to zero. This is possible by Condition A. For every variable which is set in this step, take all the other constraints containing this variable (if any exist) and set their variables so that, again, these constraints are satisfied with ψ = 0. Continue in this fashion. If at any stage some newly considered constraint C_j contains some variable which was set in prior stages, set ψ to the minimum value which guarantees satisfying this constraint. At worst we put ψ = max_r(|b_r| + Σ_k |a_{rk}| max(|B^1_x|, |B^2_x|)). Note, however, that this situation occurs only if C_j belongs to some cycle. Applying the first part of Lemma 5, for c < 1/K² the total expected number of constraints which belong to at least one cycle is at most Σ_{r=1}^∞ r(K²c)^r = O(1). Therefore, the total expected number of constraints for which we set ψ positive is also O(1), and E[LP(n,c)] = O(1).

Suppose now Condition B holds. The proof is very similar to proofs of upper bounds on critical thresholds for random K-SAT problems and uses the following moment argument. Consider an instance of random K-LSAT with n variables and m = cn constraints, where c is a very large constant, to be specified later. Consider any n-dimensional cube I of the form ∏_{1≤k≤n}[i_k/l, (i_k+1)/l], B^1_x ≤ i_k/l ≤ (i_k+1)/l ≤ B^2_x, and fix any x ∈ I. Let, w.l.g., x_{j_1},...,x_{j_K} be the variables which belong to the j-th constraint, 1 ≤ j ≤ m. By Condition B, with probability at least 1/|C|, C_j is such that the corresponding ψ_j must be at least ν. We obtain that the expected cost satisfies E[LP(n,c)] ≥ ν(1/|C|)m when x ∈ I. Moreover, since the events "C_j is such a constraint" are independent for all j, then, applying the Chernoff bound,

P{LP_I(n,c) ≤ (1/2)ν(1/|C|)m} ≤ e^{−m/(8|C|)}.

By taking m = cn with c sufficiently large, we obtain that the probability above is exponentially small in n. □

5 Limiting probability distributions

In this and the following two sections we prove the existence of the limit (14). Fix c > 0 and take n to be large. We assume that the labelling (order) of the variables x_1,...,x_n is selected independently of all the other randomness of the instance. In the graph-theoretic sense, we have a random labelled hypergraph with node labels independent of the edges of the graph. We noted above that

(15)  ψ_j ≤ max_{1≤r≤|C|} { (|B^1_x| + |B^2_x|) Σ_{k≤K} |a_{rk}| + |b_r| } ≡ B_ψ.

Let X(n), Ψ(n) denote an optimal (random) assignment of the random linear programming problem (6), where X(n) = (X_1,...,X_n), Ψ(n) = (Ψ_1,...,Ψ_m). That is, x_i = X_i, ψ_j = Ψ_j achieve the objective value GLP(n,c). If the set of optimal solutions contains more than one solution (in general it can be uncountable), select a solution X(n), Ψ(n) uniformly at random from this set. Define

(16)  λ(c) = lim inf_n E[GLP(n,c)]/n,

and fix a subsequence n_t along which lim_t E[GLP(n_t,c)]/n_t = λ(c).

Fix a variable x from the set of all n_t x-variables. Since the labelling is chosen independently of the instance, we can take x = x_1, w.l.g. Denote by X(d,n_t), Ψ(d,n_t), W(d,n_t) the collections of X, Ψ and W variables which belong to the neighborhood B(x_1,d,n_t). In particular, X_1 ∈ X(d,n_t), and the number of Ψ variables is the same as the number of W variables, which is the number of constraints C_j in B(x_1,d,n_t). Denote by P(d,n_t) the joint probability distribution of (B(x_1,d,n_t), X(d,n_t), Ψ(d,n_t), W(d,n_t)). We omit x = x_1 in the notation P(d,n_t) since, by symmetry, this joint distribution is independent of the choice of x. The support of this probability distribution is X(d) ≡ ∪(T, [B^1_x,B^2_x]^{|T|} × [0,B_ψ]^{E(T)} × [−B_w,B_w]^{E(T)}) ∪ E, where the first union runs over depth-d rooted trees T with root x_1, |T| is the number of nodes in T, E(T) is the number of constraints in T, and E is a singleton event which represents the event that B(x_1,d,n_t) is not a tree and contains a cycle. In particular, X(d) is a countable union of compact sets. We have from Proposition 1 that lim_{t→∞} P(d,n_t)(E) = 0. Observe that X(d) ⊂ X(d+1) for all d. As we showed in Proposition 1, the marginal distribution of T with respect to P(d,n_t) is a depth-d Poisson tree, in the limit as t → ∞.

Recall that a sequence of probability measures P_n defined on a joint space X is said to be weakly converging to a probability measure P if for any event A ⊂ X, lim_{n→∞} P_n(A) = P(A). We also need the following definition and theorem, both of which can be found in [Dur96].

Definition 1 A sequence of probability measures P_n on X is defined to be tight if for every ε > 0 there exists a compact set K ⊂ X such that lim sup_n P_n{X \ K} < ε.

Theorem 6 Given a tight sequence of probability measures P_n on X, there exists a probability measure P on X and a subsequence P_{n_t} that weakly converges to P.

The following proposition is a key result of this section.

Proposition 3 For each d = 1,2,... there exists a probability measure P(d) on X(d) such that P(d,n_t) weakly converges to P(d). The sequence of probability measures P(d), d = 1,2,..., is consistent in the sense that for every d < d′ the projection of P(d′) onto X(d) coincides with P(d). Moreover,

(18)  E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j] = λ(c),

where the expectation is with respect to P(d) and the sum runs over the constraints containing x_1.

To complete the proof of the proposition we need to establish (18). Note that in random instances of the linear program (6), with n_t x-variables, when we sum the expression w_x X_i + (w_ψ/K) Σ_j Ψ_j over all variables x_i (the inner sum running over the constraints containing x_i), each Ψ_j is counted exactly K times, so the total is exactly the objective value GLP(n_t,c); dividing by n_t and recalling the choice of the subsequence n_t, we obtain E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j] → λ(c), where the expectation is with respect to the measure P(d,n_t). Since all the random variables involved in w_x X_1 + (w_ψ/K) Σ_j Ψ_j are bounded, the weak convergence of P(d,n_t) to P(d) then yields E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j] = λ(c). □

6 Filtration and a martingale with respect to P(d)

Let us summarize the results of the previous section. We considered a sequence of spaces X(1) ⊂ X(2) ⊂ X(3) ⊂ ···, where X(d) = ∪(T, [B^1_x,B^2_x]^{|T|} × [0,B_ψ]^{|E(T)|} × [−B_w,B_w]^{|E(T)|}) and the union runs over all depth-d trees T. Recall that the event E was used before to generically represent the event that B(x,d,n) is not a tree. The probability of this event is zero with respect to the limiting probability distribution P(d) we constructed, so we drop this event from the space X(d). We have constructed a probability measure P(d) on each X(d) as a weak limit of the probability measures P(d,n_t), defined on d-neighborhoods of a variable x_1. Denote by (T(d), X(d), Ψ(d), W(d)) the random vector distributed according to P(d). Note that T(d) and W(d) are independent of each other, the first distributed as a depth-d Poisson tree, the second distributed as i.i.d. with the distribution function P{W ≤ t}, while X(d) and Ψ(d) depend on both T(d) and W(d).

Using Kolmogorov's extension theorem, the probability measures P(d) can be extended to a probability measure P on the space ∪(T, [B^1_x,B^2_x]^∞ × [0,B_ψ]^∞ × [−B_w,B_w]^∞), where the union runs over all finite and infinite trees T. This extension is used in Aldous [Ald92] to construct a spatially invariant measure on infinite trees. For our purposes this extension, while possible, is not necessary, and instead we use martingale convergence techniques.

Note that, since the measures P(d) are consistent, they correspond to a certain filtration on the increasing sequence of sets X(d), d = 1,2,.... Then we can look at E[X_1|(T(d), W(d))] as a stochastic process indexed by d = 1,2,.... Another way of looking at this stochastic process is as follows. A root x = x_1 is fixed. At time d = 1 we sample the constraints C_j containing x_1, with the corresponding values of W_j, using the P(1) distribution. Recall that the number of constraints is distributed as Pois(cK), the type of each constraint is chosen uniformly at random from the |C| types, and the W_j values are selected i.i.d. using the distribution function P{W ≤ t}. For this sample we have a conditional probability distribution of (X(1), Ψ(1)), that is, of the x and ψ variables in the 1-neighborhood of x_1. Then, for each variable in this depth-1 tree except for x_1, we again sample constraints and W_j values to obtain a depth-2 Poisson tree T(2) and W(2). We obtain a distribution of (X(2), Ψ(2)) conditioned on T(2), W(2), and so on. At every step we reveal a deeper layer of the tree and the corresponding values of the W_j's to obtain a new conditional probability distribution for (X(d), Ψ(d)).
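For intuition, the layer-by-layer revealing can be sketched as a Galton-Watson recursion (an illustrative sketch, not the paper's construction; per Proposition 1, each variable spawns Pois(cK) constraints, and each constraint carries one W value and K−1 fresh child variables):

import numpy as np

rng = np.random.default_rng(2)

def sample_poisson_tree(depth, c, K, w_sampler):
    """Sample the constraints within the given depth of a root variable.
    Returns a list of constraints; each holds a W value and K-1 child
    variable subtrees (each itself such a list)."""
    if depth == 0:
        return []
    return [
        {"W": w_sampler(),
         "children": [sample_poisson_tree(depth - 1, c, K, w_sampler)
                      for _ in range(K - 1)]}
        for _ in range(rng.poisson(c * K))
    ]

tree = sample_poisson_tree(3, 1.5, 3, lambda: rng.uniform(-1, 1))
print(len(tree), "constraints at the root")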

The technical lemma that follows is needed to prove Proposition 4 below. This lemma is sometimes referred to as the tower property of conditional expectations. Its proof can be found in many books on probability and measure theory; see, for example, Theorem 1.2, Chapter 4 in [Dur96].

Lemma 7 Let X and Y be (in general dependent) random variables, where X ∈ R and Y takes values in some general set Y. Let f: Y → Y be a measurable function. Then E[E[X|Y]|f(Y)] = E[X|f(Y)].

The conditional expectations in this lemma are understood as follows. E[X|Y] is the expectation of X conditioned on the σ-algebra σ_Y generated by the random variable Y. Similarly, E[X|f(Y)] is the expectation of X conditioned on the σ-algebra σ_{f(Y)} generated by the random variable f(Y). Of course, σ_{f(Y)} ⊂ σ_Y.

As above, let x_1 be the root of our random tree T(d) and let X_1 be the corresponding random variable from the vector (T(d), X(d), Ψ(d), W(d)). Conditioned on the event that x_1 belongs to at least one constraint C_j, select any such constraint and let, w.l.g., x_2,...,x_K be the other variables in this constraint. For every k = 2,...,K and every d ≥ 2, consider B(x_k,d) ≡ T(x_k,d), the d-neighborhood of the variable x_k. One way to introduce this neighborhood formally is to select any d′ > d+1, sample T(d′) from P(d′), and then consider the d-neighborhood around x_k as a subtree of the tree T(d′). But since, by consistency, the distribution of this neighborhood is the same for all d′ > d+1, we can simply speak about selecting T(x_k,d). Note that T(x_1,d) = T(d). Let W(x_k,d) denote the collection of W_j variables corresponding to the constraints in T(x_k,d). To simplify the notation, we will let TW(x_k,d) stand for the pair (T(x_k,d), W(x_k,d)).

Proposition 4 For every k = 1,2,...,K, the random sequence E[X_k|TW(x_k,d)], d = 1,2,..., is a martingale with values in [B^1_x, B^2_x]. Moreover, the sequence E[X_k|TW(x_{d mod K}, d)], d = 1,2,..., is also a martingale.

Remark: To better understand the meaning of the first part of the proposition, imagine that we first sample the depth-1 tree TW(1). In case this tree is not trivial (TW(1) ≠ {x_1}), we fix a constraint from this tree and a variable x_k from this constraint. Then we start revealing the trees TW(x_k,2), TW(x_k,3),.... This defines the stochastic process of interest.

The second part states that even if we reveal trees rooted in a round-robin fashion at the variables x_1, x_2,...,x_K, x_1, x_2,..., we still obtain a martingale. That is, the sequence E[X_k|TW(x_1,1)], E[X_k|TW(x_2,2)],..., E[X_k|TW(x_K,K)], E[X_k|TW(x_1,K+1)], E[X_k|TW(x_2,K+2)],... is a martingale.

Proof: Recall that the optimal values X(d) are a weak limit of the optimal values X(d,n_t), t = 1,2,.... Then every X_i ∈ [B^1_x, B^2_x] almost surely. To prove the martingale property we use Lemma 7, where X = X_k, Y = TW(x_k,d+1), and f is the projection function which maps the depth-(d+1) tree TW(x_k,d+1) onto the depth-d tree TW(x_k,d) by truncating the (d+1)-st layer. Applying the lemma, we obtain E[E[X_k|TW(x_k,d+1)]|TW(x_k,d)] = E[X_k|TW(x_k,d)], which means precisely that the sequence is a martingale. The proof of the second part is exactly the same; we just observe that TW(x_1,1) ⊂ TW(x_2,2) ⊂ ··· ⊂ TW(x_{d mod K}, d) ⊂ ···, and we let f be the projection of TW(x_{(d+1) mod K}, d+1) onto TW(x_{d mod K}, d). □

The following is a classical result from probability theory.

Theorem 8 [Martingale Convergence Theorem] Let X_n, n = 1,2,... be a martingale which takes values in some finite interval [−B, B]. Then X_n converges almost surely to some random variable X. As a consequence, |X_{n+1} − X_n| converges to zero almost surely.

A proof of this fact can be found in [Dur96]. Here we provide, for completeness, a very simple proof of a weak version of the MCT, stating that |X_{n+1} − X_n| converges to zero in probability. This is the only version we need for this paper. We have

(19)  0 ≤ E[(X_{n+1} − X_n)²] = E[X²_{n+1}] − 2E[X_{n+1}X_n] + E[X²_n].

But E[X_{n+1}X_n] = E[(E[X_{n+1}|X_n])X_n] = E[X²_n], where E[X_{n+1}|X_n] = X_n is used in the second equality. Combining with (19), we obtain 0 ≤ E[(X_{n+1} − X_n)²] ≤ E[X²_{n+1}] − E[X²_n], and therefore E[X²_n] is an increasing sequence. Since E[X²_n] ≤ B², this sequence converges and, from (19), E[(X_{n+1} − X_n)²] converges to zero. Applying Markov's bound, we obtain that |X_{n+1} − X_n| converges to zero in probability.

Let J(x_1) denote the set of constraints C_j in T(d) containing x_1 and, slightly abusing notation, let J(x_1) = |J(x_1)| also denote its cardinality. Select again any constraint from J(x_1). We denote this constraint by C(x_1), and let it be Σ_{1≤k≤K} a_{rk} y_k ≤ b_r + W_j + ψ_j for some 1 ≤ r ≤ |C|, in expanded form. Again let x_2, x_3,...,x_K denote the other variables in this constraint. Recall that the labelling of the variables in this constraint was done consistently with the labelling of the tree T and, as a result, it is independent of the measure P(d). Let σ: {1,2,...,K} → {1,2,...,K} specify the matching between the order in the constraint and in the tree. That is, our constraint C(x_1) is Σ_{1≤k≤K} a_{rk} x_{σ(k)} ≤ b_r + W_j + ψ_j. From (7) we obtain that, for the limiting probability measure P(d), we also have Ψ_j = max(0, Σ_k a_{rk} X_{σ(k)} − b_r − W_j) almost surely. Let X̄_k ≡ E[X_k|TW(x_k,d)] and let

(20)  Ψ̄_j ≡ max(0, Σ_{1≤k≤K} a_{rk} X̄_{σ(k)} − b_r − W_j).

That is, Ψ̄_j is the optimal value to be assigned to ψ_j when the variables x_k take the values X̄_k. Observe that, just like X̄_k, Ψ̄_j is a random variable. Its value is determined by the random trees T(x_k,d) and the values of W_j, and, importantly, it is different from Ψ_j. The following proposition is the main technical result of this section. Jumping ahead, the usefulness of this proposition is that if we assign the value X̄_k to each variable x_k and take ψ_j = Ψ̄_j for each constraint containing x_1, then the corresponding objective value of the linear program (6) "per variable" x_1 almost achieves the value λ(c), provided d is sufficiently large. We will use this in the following section for the projection step, where for a random instance of the linear program (6) we assign the value X̄_i to every variable x_i, 1 ≤ i ≤ n, and obtain an objective value close to λ(c)n, for arbitrarily large n.

Proposition5For every?>0there exists su?ciently large d=d(?)such that E[w xˉX1+wψ

as claimed.

For every constraint C_j ∈ J(x_1) introduce

(22)  Ψ̂_j ≡ max(0, Σ_k a_{rk} E[X_{σ(k)}|TW(x_1,d)] − b_r − W_j),

where Σ_k a_{rk} x_{σ(k)} ≤ b_r + W_j + ψ_j is the expanded form of the constraint C_j. That is, Ψ̂_j is defined just like Ψ̄_j, except that the conditioning is done on the tree TW(x_1,d) instead of TW(x_k,d). Applying (20), (21) and recalling that Ψ̂_j, Ψ̄_j ∈ [0, B_ψ], we obtain that for our constraint C_j

(23)  P{|Ψ̂_j − Ψ̄_j| > 3δK max_{r,k}|a_{rk}|} < Kδ.

Recall that, according to the measure P(d), J(x_1) is distributed as Pois(cK). Select a value M(δ) > 0 sufficiently large that P{J(x_1) > M(δ)} < δ. Conditioning on the event J(x_1) ≤ M(δ), we obtain

P{Σ_j |Ψ̂_j − Ψ̄_j| > 3δM(δ)K max_{r,k}|a_{rk}| | J(x_1) ≤ M(δ)}
≤ P{∃ C_j ∈ J(x_1): |Ψ̂_j − Ψ̄_j| > 3δK max_{r,k}|a_{rk}| | J(x_1) ≤ M(δ)}
(24)
≤ M(δ) P{|Ψ̂_j − Ψ̄_j| > 3δK max_{r,k}|a_{rk}| | J(x_1) ≤ M(δ)}
≤ M(δ) P{|Ψ̂_j − Ψ̄_j| > 3δK max_{r,k}|a_{rk}|} / (1 − δ),

where the summation is over the constraints in J(x_1), the last inequality holds provided δ < 1/K, and j in (24) corresponds to any fixed constraint in J(x_1). Combining with the event J(x_1) > M(δ), we obtain from the above that, without any conditioning,

P{Σ_j |Ψ̂_j − Ψ̄_j| > 3δM(δ)K max_{r,k}|a_{rk}|} < M(δ)Kδ/(1 − δ) + δ.

Since Ψ̂_j, Ψ̄_j ∈ [0, B_ψ], the bound above implies

(25)  |E[Σ_j Ψ̂_j] − E[Σ_j Ψ̄_j]| ≤ B_ψ M(δ)(K+2)δ + 3δM(δ)K max_{r,k}|a_{rk}|.

Our final step is to relate Ψ̂_j to the random variables Ψ_j, where (X(d), Ψ(d)) are drawn according to the probability distribution P(d). The distinction between Ψ̂_j and Ψ_j is somewhat subtle and we expand on it here. As we observed above, the values of Ψ(d) are determined almost surely by Ψ_j = max(0, Σ_k a_{rk} X_{σ(k)} − b_r − W_j) with respect to the measure P(d), when the corresponding constraint is Σ_k a_{rk} x_{σ(k)} ≤ b_r + W_j + ψ_j. These values of Ψ_j are different, however, from Ψ̂_j, which are obtained by first taking the expectations of X_k conditioned on the trees TW(x_1,d) and then setting Ψ̂_j according to (22). Naturally, Ψ_j and Ψ̂_j are related to each other. Note that for every constraint C_j, almost surely

(26)  Ψ_j ≥ Σ_k a_{rk} X_{σ(k)} − b_r − W_j,  Ψ_j ≥ 0.

By taking conditional expectations and using the linearity of the inequalities above, we obtain

(27)  E[Ψ_j|TW(x_1,d)] ≥ Σ_k a_{rk} E[X_{σ(k)}|TW(x_1,d)] − b_r − W_j,  E[Ψ_j|TW(x_1,d)] ≥ 0,

where we use the trivial equality E[W_j|TW(x_1,d)] = W_j. From the definition of Ψ̂_j in (22) we then obtain E[Ψ_j|TW(x_1,d)] ≥ Ψ̂_j almost surely, with respect to the random variables TW(x_1,d). As a result, Σ_j E[Ψ_j|TW(x_1,d)] ≥ Σ_j Ψ̂_j, where the summation is over the constraints in J(x_1). Therefore,

(28)  E[Σ_j Ψ_j] ≥ E[Σ_j Ψ̂_j].

Recall from the last part of Proposition 3 that, with respect to the measure P(d),

E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j] = λ(c).

Since E[X̄_1] = E[X_1] and, by (25) and (28), E[Σ_j Ψ̄_j] ≤ E[Σ_j Ψ_j] + B_ψ M(δ)(K+2)δ + 3δM(δ)K max_{r,k}|a_{rk}|, we obtain

E[w_x X̄_1 + (w_ψ/K) Σ_j Ψ̄_j] ≤ E[w_x X_1 + (w_ψ/K) Σ_j Ψ_j] + (w_ψ/K)(B_ψ M(δ)(K+2)δ + 3δM(δ)K max_{r,k}|a_{rk}|) ≤ λ(c) + ε,

provided δ = δ(ε) is chosen sufficiently small. □

Proposition6For every?>0,the solution constructed above has expected cost at most(λ(c)+?)n for all su?ciently large n.

Since ε was an arbitrary constant and since λ(c) was defined by (16), the proposition shows that the assignment above satisfies lim_{n→∞} E[GLP(n,c)]/n = λ(c). Therefore the proposition implies Theorem 1.

Proof: Select one of the n variables x_i uniformly at random; w.l.g. assume it is x_1. Fix ε_0 > 0 very small. Let E_n denote the event that the neighborhood of x_1 is not a tree. We claim that when n is sufficiently large, we have

(30)  E[w_x X_1(n) + (w_ψ/K) Σ_j Ψ_j(n) | E_n] ≤ w_x max{|B^1_x|, |B^2_x|} + w_ψ B_ψ E[J(x_1,n) | E_n]

(31)  = w_x max{|B^1_x|, |B^2_x|} + w_ψ B_ψ E[J(x_1,n) 1{E_n}] / P{E_n} ≤ w_x max{|B^1_x|, |B^2_x|} + w_ψ B_ψ cn,

where we used the fact that J(x_1,n) ≤ cn with probability one. But since P{E_n} = O(1/n), we obtain that E[(w_x X_1(n) + w_ψ ...
