
Offspring-annotated probabilistic context-free grammars


Jose L. Verdú-Mas, Jorge Calera-Rubio, and Rafael C. Carrasco

Departamento de Lenguajes y Sistemas Informáticos

Universidad de Alicante, E-03071 Alicante, Spain.

verdu,calera,carrasco@dlsi.ua.es

Abstract. This paper describes the application of a new model to learn probabilistic context-free grammars (PCFGs) from a treebank corpus. The model estimates the probabilities according to a generalized k-gram scheme for trees. It allows for faster parsing, decreases considerably the perplexity of the test samples, and tends to give more structured and refined parses. In addition, it also allows several smoothing techniques, such as backing-off or interpolation, that are used to avoid assigning zero probability to any sentence.

1 Introduction

Context-free grammars may be considered to be the customary way of representing syntactical structure in natural-language sentences. In many natural-language processing applications, obtaining the correct syntactical structure for a sentence is an important intermediate step before assigning an interpretation to it. But ambiguous parses are very common in real natural-language sentences (e.g., those longer than 15 words); the fact that many ambiguous parse examples in books sound a bit awkward is due to the fact that they involve short sentences [1, 2, p. 411]. Choosing the correct parse for a given sentence is a crucial task if one wants to interpret the meaning of the sentence, due to the principle of compositionality [3, p. 358], which states, informally, that the interpretation of a sentence is obtained by composing the meanings of its constituents according to the groupings defined by the parse tree.

A set of rather radical hypotheses as to how humans select the best parse tree [6] proposes that a great deal of syntactic disambiguation may actually occur without the use of any semantic information; that is, just by selecting a preferred parse tree. It may be argued that the preference of one parse tree with respect to another is largely due to the relative frequencies with which those choices have led to a successful interpretation. This sets the ground for a family of techniques which use a probabilistic scoring of parses to select the correct parse in each case.

Probabilistic scorings depend on parameters which are usually estimated from data, that is, from parsed text corpora such as the Penn Treebank [7]. The most straightforward approach is that of treebank grammars [8]. Treebank grammars, which will be explained in detail below, are probabilistic context-free grammars in which the probability that a particular nonterminal is expanded according to a given rule is estimated as the relative frequency of that expansion, obtained by simply counting the number of times it appears in a manually parsed corpus. This is the simplest probabilistic scoring scheme, and it is not without problems; we will show how a set of approximate models, which we will call offspring-annotated models, in which expansion probabilities depend on the future expansions of the children, may be seen as a generalization of the classic k-gram models to the case of trees, and include treebank grammars as a special case. Other models, such as Johnson's [9] parent-annotated models (or, more generally, ancestry-annotated models), IBM history-based grammars [2, p. 423], [10], and Bod and Scha's Data-Oriented Parsing [11], offer an alternative approach in which the probability of expansion of a given nonterminal is made dependent on the previous expansions. An interesting property of many of these models is that, even though they may be seen as context-dependent, they may still be easily rewritten as context-free models in terms of specialized versions of the original nonterminals.
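To make the relative-frequency estimation concrete, the following is a minimal sketch (not the authors' code; the (label, children) tree representation used here is an assumption made only for illustration):

```python
from collections import Counter

def rules(tree):
    """Yield (parent_label, tuple_of_child_labels) for every internal node
    of a tree written as a (label, children) pair; a leaf has no children."""
    label, children = tree
    if children:
        yield label, tuple(child[0] for child in children)
        for child in children:
            yield from rules(child)

def estimate_pcfg(treebank):
    """Treebank grammar: p(A -> alpha) = count(A -> alpha) / count(A)."""
    rule_counts, lhs_counts = Counter(), Counter()
    for tree in treebank:
        for lhs, rhs in rules(tree):
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {rule: n / lhs_counts[rule[0]] for rule, n in rule_counts.items()}

# Toy treebank with a single tree (cf. the example grammar in section 2):
toy = ('S', [('NP', [('N', [])]),
             ('VP', [('V', []), ('NP', [('N', [])])])])
print(estimate_pcfg([toy]))  # {('S', ('NP', 'VP')): 1.0, ('NP', ('N',)): 1.0, ...}
```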

This paper is organized as follows: section 2 proposes our generalization of the classic k-gram models to the case of trees, which is shown to be equivalent to having a specialized context-free grammar. A simplification of this model, called the child-annotated model, is also presented in that section. Experiments using this model for structural disambiguation are presented in section 4 and compared with other family-annotated models, and, finally, conclusions are given in section 5.

2 The model

Let a treebank be given, that is, a sample of parse trees.

For all k and for all trees t, we define the k-root of t as the tree

(1)

The sets of k-forks and of k-subtrees are defined for all k as follows:

(2)

(3)

where depth(t) denotes the depth of the tree t (the depth of a single-node tree is zero).
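For concreteness, a standard recursive definition consistent with this convention (the depth(·) notation here is ours) is:

```latex
\operatorname{depth}(\sigma) = 0,
\qquad
\operatorname{depth}\bigl(\sigma(t_1,\dots,t_m)\bigr) = 1 + \max_{1 \le i \le m} \operatorname{depth}(t_i),
```

where σ is a node label and t_1, ..., t_m are its subtrees.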

We define the treebank probabilistic k-testable grammar through:

–;

– the set of labels appearing in the treebank;

– the start symbol;

– a set of production rules, each with an associated emission probability. The set of rules is built as follows:

– for every tree in the treebank, add to the rule set the rule corresponding to each of its forks, with probability

(5)

Here, the count in (5) is the number of times that the fork appears in the tree;

– for every tree in the treebank, add to the rule set the rule corresponding to its root, with probability

(6)

Defined in this way, these probabilities satisfy the normalization constraint, for each nonterminal,

(7)

and the consistency constraint. PCFGs estimated from treebanks using the relative-frequency estimator always satisfy those constraints [12, 13].

Note that in this kind of model, the expansion probability for a given node is computed as a function of the subtree of a certain depth that the node generates, i.e., every non-terminal symbol stores a subtree of that depth. In the lowest-order case, only the label of the node is taken into account (this is analogous to the standard bigram model for strings) and the model coincides with the simple rule-counting approach used in treebank grammars.
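In that case the estimator reduces to the familiar relative-frequency rule count; as a sketch in generic notation (ours, with C(·) denoting counts over the treebank):

```latex
\hat{p}(A \to \alpha) \;=\; \frac{C(A \to \alpha)}{\sum_{\beta} C(A \to \beta)},
\qquad
\sum_{\alpha} \hat{p}(A \to \alpha) = 1 \quad \text{for every nonterminal } A,
```

which is precisely the normalization constraint stated above.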

However, in the next case we get a child-annotated model; that is, non-terminal symbols are defined by:

– the node label,

– the number of descendants (if any), and

– the labels of the descendants (if any) and their ordering.

As an illustration, consider a very simple sample containing only the tree in figure 1. If we choose the lowest-order (unannotated) model, then

–;

and the CFG is

with the rule set containing the rules

S → NP VP

NP → NP PP

NP → N

VP → VP PP

VP → V NP

PP → P NP

However, for the child-annotated case we obtain

and the CFG is

with the rule set containing the rules

S NP

N

PP

P NP

2 As will be seen in section 4, parent-annotated grammars usually have fewer parameters than child-annotated grammars, contrary to what this example may suggest.
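To make the child-annotation concrete, the following sketch (an illustration only, not the authors' implementation; the (label, children) tree representation and the ^(...) naming of specialized symbols are assumptions carried over from the earlier sketch) specializes every internal node by the labels of its children:

```python
def annotate(tree):
    """Child-annotate a tree: each internal node label is specialized with
    the (unannotated) labels of its children, e.g. NP becomes NP^(NP PP);
    leaves are left unchanged."""
    label, children = tree
    if not children:
        return (label, [])
    new_label = label + '^(' + ' '.join(child[0] for child in children) + ')'
    return (new_label, [annotate(child) for child in children])

# Reading rules off the annotated tree with the rules() helper above yields
# specialized productions such as
#   S^(NP VP) -> NP^(N) VP^(V NP)
# whose probabilities can again be estimated by relative frequency, so the
# child-annotated model is itself an ordinary PCFG over specialized symbols.
```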

3.1 Linear interpolation

Smoothing through linear interpolation [14] is performed by computing the probability of events as a weighted average of the probabilities given by different models. For instance, the smoothed probability given by a higher-order model can be computed as a weighted average of the probability given by the model itself and of that given by a lower-order model; that is, the probability of a tree t in the interpolated model is

p(t) = λ p_hi(t) + (1 − λ) p_lo(t),    (8)

where p_hi(t) and p_lo(t) are the probabilities of the tree in, respectively, the higher-order and the lower-order model.

In our experiments, the mixing parameter λ will be chosen to minimize the perplexity of the sample.
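One simple way to carry out that choice, sketched here under the assumption that the two component models are available as probability functions over trees (the grid search below is our illustration, not necessarily the procedure used in the experiments, and reuses the p_hi/p_lo naming from (8)):

```python
import math

def perplexity(trees, prob):
    """Per-tree perplexity of a sample under a probability function."""
    log_likelihood = sum(math.log(prob(t)) for t in trees)
    return math.exp(-log_likelihood / len(trees))

def best_lambda(heldout, p_hi, p_lo, grid=None):
    """Grid search for the mixing weight of
    p(t) = lam * p_hi(t) + (1 - lam) * p_lo(t)
    that minimizes held-out perplexity."""
    grid = grid if grid is not None else [i / 100 for i in range(1, 100)]
    def mixed(lam):
        return lambda t: lam * p_hi(t) + (1 - lam) * p_lo(t)
    return min(grid, key=lambda lam: perplexity(heldout, mixed(lam)))
```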

3.2 Tree-level back-off

Back-off allows one to combine information from different models. In our case, the highest-order model such that the probability of the event is greater than zero is selected. Some care has to be taken in order to preserve normalization.
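As a rough illustration of this selection rule (a simplified sketch: the reserved back-off mass alpha and the discounting below are assumptions, not the exact normalization worked out in the equations that follow):

```python
def backoff_prob(tree, p_hi, p_lo, alpha=0.1):
    """Tree-level back-off: use the higher-order model whenever it gives the
    tree nonzero probability, otherwise fall back to the lower-order model.
    The constant alpha is the mass reserved here for the back-off path."""
    p = p_hi(tree)
    if p > 0.0:
        return (1.0 - alpha) * p
    return alpha * p_lo(tree)
```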

if

(9)

if

where

1. Add the rules of the model with probability:

(11)

2. For each non-terminal symbol of the model, add a back-off rule with probability:

In all experiments, the training corpus consisted of all of the trees (41,532) in sections 02 to 22 of the Wall Street Journal portion of the Penn Treebank, modified as above. This gives a total of more than 600,000 subtrees. The test set contained all sentences in section 23 having fewer than 40 words.

All grammar models were written as standard context-free grammars, and Chappelier and Rajman's [15] probabilistic extended Cocke-Younger-Kasami parsing algorithm (which constructs a table containing generalized items like those in Earley's [16] algorithm) was used to obtain the most likely parse for each sentence in the test set; this parse was compared to the corresponding gold-standard tree using the customary PARSEVAL evaluation metric [1, 2, p. 432] after deannotating the most likely tree delivered by the parser. PARSEVAL gives partial credit to incorrect parses by establishing three measures (a small sketch computing the first two follows the list):

– labeled precision is the fraction of correctly-labeled nonterminal bracketings (constituents) in the most likely parse which match the gold-standard parse,

– labeled recall is the fraction of brackets in the gold-standard parse which are found in the most likely parse with the same label, and

– crossing brackets refers to the fraction of constituents in one parse that cross constituent boundaries in the other parse.
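A minimal sketch of the first two measures, assuming the same (label, children) tree representation as in the earlier sketches and ignoring PARSEVAL details such as the treatment of punctuation and pre-terminal unary constituents:

```python
def labeled_brackets(tree, start=0):
    """Return (set of (label, start, end) constituents, span length);
    each leaf is taken to cover exactly one terminal position."""
    label, children = tree
    if not children:
        return set(), 1
    spans, length = set(), 0
    for child in children:
        child_spans, child_len = labeled_brackets(child, start + length)
        spans |= child_spans
        length += child_len
    spans.add((label, start, start + length))
    return spans, length

def labeled_precision_recall(candidate, gold):
    cand, _ = labeled_brackets(candidate)
    ref, _ = labeled_brackets(gold)
    matched = len(cand & ref)
    return matched / len(cand), matched / len(ref)
```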

The crossing-brackets measure does not take constituent labels into account and will not be shown here. Some authors (see, e.g., [17]) have questioned partial-credit evaluation metrics such as the PARSEVAL measures; in particular, if one wants to use a probability model to perform structural disambiguation before assigning some kind of interpretation to the parsed sentence, it may well be argued that the exact match between the gold-standard tree and the most likely tree is the only relevant measure. It is, however, very well known that the Penn Treebank, even in its release 3, still suffers from problems. One of the problems worth mentioning (discussed in detail by Krotov et al. [18]) is the presence of far too many partially bracketed constructs according to rules like NP → NN NN CC NN NN NNS, which lead to very flat trees, when one can, in the same treebank, find rules such as NP → NN NN, NP → NN NN NNS and NP → NP CC NP, which would lead to more structured parses such as

(NP (NP NN NN) CC (NP NN NN NNS))

Some of these flat parses may indeed be too flat to be useful for semantic purposes and have little linguistic plausibility; therefore, if one gets a more refined parse, one may or may not consider it to be the one leading to the correct interpretation, but it surely contains more information than the flat, unstructured one.

For this reason, we have chosen to give, in addition to the exact-match figure, the percentage of trees having 100% recall, because these are the trees in which the most likely parse is either exactly the gold-standard parse or a refinement thereof in the sense of the previous example.

4.2 Structural disambiguation results

Here is a list of the models which were evaluated:

– A standard treebank grammar, with no annotation of node labels (NO), with probabilities for 15,140 rules.

– A child-annotated grammar (CHILD), with probabilities for 92,830 rules.

– A parent-annotated grammar (PARENT), with probabilities for 23,020 rules.

– A grammar annotated with both parent and children (BOTH), with probabilities for 112,610 rules.

As expected, the number of rules obtained increases as more information is conveyed by the node label, although this increase is not extreme. On the other hand, as the generalization power decreases, some sentences in the test set become unparsable, that is, they cannot be generated by the grammar.

ANNOTATION   LR      EXACT   PARSED

NO           70.7%   10.4%   100%

CHILD        79.2%   19.4%   94.6%

PARENT       80.0%   18.5%   100%

BOTH         80.1%   20.5%   79.6%

Table 1. Parsing results with different annotation schemes: labelled recall, labelled precision, fraction of sentences with total labelled recall, fraction of exact matches, fraction of sentences parsed by the annotated model, and average time per sentence in seconds.

The results in table 1 show that

– The parsing performance of parent-annotated and child-annotated PCFGs is similar and better than that obtained with the standard treebank PCFG. The performance is measured both with the customary PARSEVAL metrics and by counting the number of maximum-likelihood trees that (a) match their counterparts in the treebank exactly, and (b) contain all of the constituents in their counterpart (100% labeled recall). The fact that child-annotated grammars do not perform better than parent-annotated ones may be due to their larger number of parameters compared to parent-annotated PCFGs. This makes it difficult to estimate them accurately from currently available treebanks (only about 6 subtrees per rule in the experiments).

– The average time to parse a sentence shows that child annotation leads to parsers that are much faster. This comes as no surprise because the number of possible parse trees considered is drastically reduced; this is, however, not the case with parent-annotated models.

It may be worth mentioning that parse trees produced by child-annotated models tend to be more structured and refined than parent-annotated and unannotated parses, which tend to use rules that lead to flat trees in the sense mentioned above. Favoring structured parses may be convenient in some applications (especially where meaning has to be compositionally computed) but may be less convenient than flat parses when the structure obtained is incorrect.

On the other hand, the child-annotated models, CHILD and BOTH, were unable to deliver a parse tree for all sentences in the test set (CHILD parses 94.6% of the sentences and BOTH, 79.6%). To be able to parse all sentences, the smoothing techniques described in section 3 were applied.

The following smoothed models were evaluated:

– A linearly interpolated model, M1, as described in subsection 3.1, with the value of the mixing parameter λ selected to minimize the perplexity.

– A tree-level back-off model, M2, as described in subsection 3.2.

– A rule-level back-off model, M3, as described in subsection 3.3. This model has 92,830 child-annotated rules, 15,140 unannotated rules and 10,250 back-off rules. A fixed parameter (0.005) was selected to maximize labelled recall and precision.

MODEL   PARSED

M1      78.6%   100%

M2      78.9%   17.1%   9.3

M3      81.3%   100%

Table 2. Parsing results with different smoothed models.

The results in table 2 show that:

– M2 is the fastest, but its performance is worse than that of M1 and M3.

– M1 and M3 parse sentences at a comparable speed, but recall and precision are better using M3.

Compared to the unsmoothed models, the smoothed ones:

– cover the whole test set (CHILD and BOTH did not);

– parse at a reasonable speed (compared to PARENT);

– achieve acceptable performance (the unannotated grammar did not).

5 Conclusion

We have introduced a new probabilistic context-free grammar model, the offspring-annotated PCFG, in which the grammar variables are specialized by annotating them with the subtree they generate up to a certain level. In particular, we have studied child-annotated models (one level) and have compared their parsing performance to that of unannotated PCFGs and of parent-annotated PCFGs [9]. Offspring-annotated models may be seen as a special case of a very general probabilistic state-based model, which in turn is based on probabilistic bottom-up tree automata. The experiments show that:

– The parsing performance of parent-annotated and the proposed child-annotated PCFGs is similar.

– Parsers using child-annotated grammars are, however, much faster, because the number of possible parse trees considered is drastically reduced; this is not the case with parent-annotated models.

– Child-annotated grammars have a larger number of parameters than parent-annotated PCFGs, which may make it difficult to estimate them accurately from currently available treebanks.

– Child-annotated models tend to give very structured and refined parses instead of flat parses, a tendency not so strong for parent-annotated grammars.

As future work, we plan to study the use of statistical confidence criteria, as used in grammatical inference algorithms [19], to eliminate unnecessary annotations by merging states, thereby reducing the number of parameters to be estimated. Indeed, offspring-annotation schemes (for a suitable value of the depth) may be useful as starting points for those state-merging mechanisms, which so far have always been initialized with the complete set of different subtrees found in the treebank (ranging in the hundreds of thousands).

References

1. Ezra Black, Steven Abney, Dan Flickinger, Claudia Gdaniec, Ralph Grishman, Philip Harrison, Donald Hindle, Robert Ingria, Frederick Jelinek, Judith Klavans, Mark Liberman, Mitch Marcus, Salim Roukos, Beatrice Santorini, and Tomek Strzalkowski. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proc. Speech and Natural Language Workshop 1991, pages 306–311, San Mateo, CA, 1991. Morgan Kaufmann.

2. Christopher D. Manning and Hinrich Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.

3. A. Radford, M. Atkinson, D. Britain, H. Clahsen, and A. Spencer. Linguistics: an introduction. Cambridge Univ. Press, Cambridge, 1999.

4. Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley, Reading, Massachusetts, 1986.

5. W. John Hutchins and Harold L. Somers. An Introduction to Machine Translation. Academic Press, 1992.

6. L. Frazier and K. Rayner. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14:178–210, 1982.

7. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330, 1993.

8. Eugene Charniak. Treebank grammars. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 1031–1036. AAAI Press/MIT Press, 1996.

9. Mark Johnson. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):613–632, 1998.

10. Ezra Black, Frederick Jelinek, John Lafferty, David M. Magerman, Robert L. Mercer, and Salim Roukos. Towards history-based grammars: Using richer models for probabilistic parsing. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 31–37, 1992.

11. R. Bod and R. Scha. Data-oriented language processing: An overview. Technical Report LP-96-13, Department of Computational Linguistics, University of Amsterdam, The Netherlands, 1996.

12. J. A. Sánchez and J. M. Benedí. Consistency of stochastic context-free grammars from probabilistic estimation based on growth transformations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(9):1052–1055, 1997.

13. Zhiyi Chi and Stuart Geman. Estimation of probabilistic context-free grammars. Computational Linguistics, 24(2):299–305, 1998.

14. L. R. Bahl, P. F. Brown, P. V. de Souza, and R. L. Mercer. A tree-based statistical language model for natural language speech recognition. In A. Waibel and K.-F. Lee, editors, Readings in Speech Recognition, pages 507–514. Kaufmann, San Mateo, CA, 1990.

15. J.-C. Chappelier and M. Rajman. A generalized CYK algorithm for parsing stochastic CFG. In Actes de TAPD'98, pages 133–137, 1998.

16. J. Earley. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102, 1970.

17. John Carroll, Ted Briscoe, and Antonio Sanfilippo. Parser evaluation: A survey and a new proposal. In Proceedings of the International Conference on Language Resources and Evaluation, pages 447–454, Granada, Spain, 1998.

18. Alexander Krotov, Robert Gaizauskas, Mark Hepple, and Yorick Wilks. Compacting the Penn Treebank grammar. In Proceedings of COLING/ACL'98, pages 699–703, 1998.

19. Rafael C. Carrasco, Jose Oncina, and Jorge Calera-Rubio. Stochastic inference of regular tree languages. Machine Learning, 44(1/2):185–197, 2001.
