Proceedings of HLT/EMNLP 2005
Exploiting a Verb Lexicon in Automatic Semantic Role Labelling
Robert S. Swier and Suzanne Stevenson
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 3G4
{swier,suzanne}@cs.toronto.edu
Abstract
We develop an unsupervised semantic role labelling system that relies on the direct application of information in a predicate lexicon combined with a simple probability model. We demonstrate the usefulness of predicate lexicons for role labelling, as well as the feasibility of modifying an existing role-labelled corpus for evaluating a different set of semantic roles. We achieve a substantial improvement over an informed baseline.
1 Introduction
Intelligent language technologies capable of full semantic interpretation of domain-general text remain an elusive goal. However, statistical advances have made it possible to address core pieces of the problem. Recent years have seen a wealth of research on one important component of semantic interpretation—automatic role labelling (e.g., Gildea and Jurafsky, 2002; Pradhan et al., 2004; Hacioglu et al., 2004, and additional papers from Carreras and Marquez, 2004). Such work aims to annotate each constituent in a clause with a semantic tag indicating the role that the constituent plays with respect to the target predicate, as in (1):
(1) [Yuka]Agent [whispered] to [Dar]Recipient

Semantic role labelling systems address a crucial first step in the automatic extraction of semantic relations from domain-general text, taking us closer to the goal of comprehensive semantic mark-up.
Most work thus far on domain-general role labelling depends on supervised learning over statistical features extracted from a hand-labelled corpus. The reliance on such a resource—one in which the arguments of each predicate are manually identified and assigned a semantic role—limits the portability of such methods to other languages or even to other genres of corpora.
In this study, we explore the possibility of using a verb lexicon, rather than a hand-labelled corpus, as the primary resource in the semantic role labelling task. Perhaps because of the focus on what can be gleaned from labelled data, existing supervised approaches have made little use of the additional knowledge available in the predicate lexicon associated with the labelled corpus. By contrast, we exploit the explicit knowledge of the role assignment possibilities for each verb within an existing lexicon. Moreover, we utilize a very simple probability model within a highly efficient algorithm.
We use VerbNet (Kipper et al., 2000), a computational lexicon which lists the possible semantic role assignments for each of its verbs. Our algorithm extracts automatically parsed arguments from a corpus, and assigns to each a list of the compatible roles according to VerbNet. Arguments which are given only a single role possibility are considered to have been assigned an unambiguous role label. This set of arguments constitutes our primary-labelled data, which serves as the noisy training data for a simple probability model which is then used to label the remaining (role-ambiguous) arguments.
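The pipeline just described can be sketched in a few lines of Python. This is a toy illustration under invented assumptions: the LEXICON dictionary stands in for VerbNet's role lists, and arguments arrive as simple (verb, slot) pairs rather than parser output.

```python
# Sketch of the unsupervised labelling pipeline: arguments whose
# candidate-role sets are singletons become noisy "primary-labelled"
# training data; the rest are left for the probability model.

# Toy lexicon: for each (verb, slot), the roles its frames allow.
# (Invented entries for illustration; real role sets come from VerbNet.)
LEXICON = {
    ("whisper", "subj"): {"Agent"},
    ("whisper", "obj"): {"Topic"},
    ("break", "subj"): {"Agent", "Instrument"},
}

def candidate_roles(verb, slot):
    return LEXICON.get((verb, slot), set())

def split_by_ambiguity(arguments):
    """arguments: list of (verb, slot) tuples extracted from parses."""
    primary, ambiguous = [], []
    for verb, slot in arguments:
        roles = candidate_roles(verb, slot)
        if len(roles) == 1:
            primary.append((verb, slot, next(iter(roles))))  # unambiguous
        elif roles:
            ambiguous.append((verb, slot, roles))            # needs the model
    return primary, ambiguous

primary, ambiguous = split_by_ambiguity(
    [("whisper", "subj"), ("whisper", "obj"), ("break", "subj")])
```

Here the two whisper arguments receive unambiguous labels, while the break subject is left with two candidate roles for the probability model to resolve.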
This method has several advantages, the foremost of which is that it eliminates the dependence on a role-labelled corpus, a very expensive resource to produce. Of course, a verb lexicon is also an expensive resource, but one that is highly reusable across a range of NLP tasks. Moreover, the approach points at some potentially useful information that current supervised methods have failed to exploit. Even if one has access to an annotated corpus for training, our work shows that directly calling on additional information from the lexicon itself may prove useful in restricting the possible labels for an argument.

The method has disadvantages as well. The information available in a predicate lexicon is less directly applicable to building a learning model. Inevitably, our results are noisier than in a supervised approach which has access to a labelled sample of what it must produce. Still, the method shows promise: on unseen test data, the system yields an F-measure of .83 on labelling of correctly extracted arguments, compared to an informed baseline of .74, and an F-measure of .65 (compared to .52) on the overall identification and labelling task. The latter is well below the best supervised performance of about .80 on similar tasks, but it must be emphasized that it is achieved with a simple probability model and without the use of hand-labelled data. We view this as a starting point by which to demonstrate the utility of deriving more explicit knowledge from a predicate lexicon, which can be later extended through the use of additional probabilistic features.
We face a methodological challenge arising from the particular choice of VerbNet for the prototyping of our method: the lexicon has no associated semantic role labelled corpus. While this underscores the need for approaches which do not rely on such a resource, it also means that we lack a labelled sample of data against which to evaluate our results. To address this, we use the existing labelled corpus of FrameNet (Baker et al., 1998), and develop a mapping for converting the FrameNet roles to corresponding VerbNet roles. Our mapping method demonstrates the possibility of leveraging existing resources to support the development of role labelling systems based on verb lexicons that do not have an associated hand-labelled corpus.
2 VerbNet Roles and the Role Mapping

Before describing our labelling algorithm, we first briefly introduce the semantic role information available in VerbNet, and describe how we map FrameNet roles to VerbNet roles.
whisper
Frames:
  Agent V
  Agent V Prep(+dest) Recipient
  Agent V Topic
Verbs in same (sub)class:
  [bark, croon, drone, grunt, holler, ...]

Figure 1: A portion of a VerbNet entry.
2.1 The VerbNet Lexicon
VerbNet is a manually developed hierarchical lexicon based on the verb classification of Levin (1993). For each of almost 200 classes containing a total of 3000 verbs, VerbNet specifies the syntactic frames along with the semantic role assigned to each argument position of a frame.1 Figure 1 shows an example VerbNet entry. The thematic roles used in VerbNet are more general than the situation-specific roles of FrameNet. For example, the roles Speaker, Message, and Addressee of a Communication verb such as whisper in FrameNet would be termed Agent, Topic, and Recipient in VerbNet. These coarser-grained roles are often assumed in linguistic theory, and have some advantages in terms of capturing commonalities of argument relations across a wide range of predicates.
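For concreteness, the whisper entry in Figure 1 could be encoded as follows; the slot names and the "pp:+dest" label are our own simplification, not VerbNet's actual frame notation.

```python
# Simplified encoding of the whisper entry of Figure 1: each frame is
# an ordered list of (slot, role) pairs; the PP slot carries its
# preposition constraint in the slot label.
WHISPER_FRAMES = [
    [("subj", "Agent")],                              # Agent V
    [("subj", "Agent"), ("pp:+dest", "Recipient")],   # Agent V Prep(+dest) Recipient
    [("subj", "Agent"), ("obj", "Topic")],            # Agent V Topic
]

def roles_for_slot(frames, slot):
    """Collect every role the entry allows in a given slot."""
    return {role for frame in frames for s, role in frame if s == slot}
```

With this encoding, the subject slot of whisper admits only Agent, while the object slot admits only Topic.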
2.2 Mapping FrameNet to VerbNet Roles
As noted, VerbNet lacks a corpus of example role assignments against which to evaluate a role labelling based upon it. We create such a resource by adapting the existing FrameNet corpus. We formulate a mapping between FrameNet's larger role set and VerbNet's much smaller one, and create a new corpus with our mapped roles substituted for the original roles in the FrameNet corpus.
We perform the mapping in three steps. First we use an existing mapping between the semantically-specific roles in FrameNet and a much smaller intermediate set of 39 semantic roles which subsume all FrameNet roles.2 The associations in this mapping are straightforward—e.g., the Place role for Abusing verbs and the Area role for Operate-vehicle verbs are both mapped to Location.
Second, from this intermediate set we create a simple mapping to the set of 22 VerbNet roles. Some roles are unaffected by the mapping (e.g., Cause alone in the intermediate set maps to Cause in the VerbNet set). Other roles are merged (e.g., Degree and Measure both map to Amount). Moreover, some roles in FrameNet (and the intermediate set) must be mapped to more than one VerbNet role. For example, an Experiencer role in FrameNet is considered Experiencer by some VerbNet classes, but Agent by others. In such cases, our mappings in this step must be specific to the VerbNet class.
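The class-specific part of this second step can be sketched as a lookup keyed on the (intermediate role, VerbNet class) pair, with a class-independent fallback. The role pairs below are the examples from the text; the class names "class-A" and "class-B" are placeholders, not real VerbNet classes.

```python
# Second mapping step: intermediate role -> VerbNet role.  A key with
# class None applies to every VerbNet class; otherwise the mapping is
# specific to the given class.
INTERMEDIATE_TO_VERBNET = {
    ("Cause", None): "Cause",                    # unaffected by the mapping
    ("Degree", None): "Amount",                  # merged roles
    ("Measure", None): "Amount",
    ("Experiencer", "class-A"): "Experiencer",   # class-specific cases
    ("Experiencer", "class-B"): "Agent",
}

def map_role(intermediate_role, verbnet_class):
    """Prefer a class-specific mapping, then fall back to the general one."""
    specific = INTERMEDIATE_TO_VERBNET.get((intermediate_role, verbnet_class))
    return specific or INTERMEDIATE_TO_VERBNET.get((intermediate_role, None))
```
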
In this second step, some roles have no subsuming VerbNet role, because FrameNet provides roles for a wider variety of relations. For example, both FrameNet and the intermediate role set contain a Manner role, which VerbNet does not have. We create a catch-all label, "NoRole," to which we map eight such intermediate roles: Condition, Manner, Means, Medium, Part-Whole, Property, Purpose, and Result. These phrases labelled NoRole are adjuncts—constituents not labelled by VerbNet.
In the third step of our mapping, some of the roles in VerbNet—such as Theme and Topic, or Asset and Amount—which appear to be too fine-grained for us to distinguish reliably, are mapped to a more coarse-grained set of VerbNet roles. The final set consists of 16 roles: Agent, Amount, Attribute, Beneficiary, Cause, Destination, Experiencer, Instrument, Location, Material, Predicate, Recipient, Source, Stimulus, Theme and Time; plus the NoRole label.
3 The Frame Matching Process
A main goal of our system is to demonstrate the usefulness of predicate lexicons for the role labelling task. The primary way that we apply the knowledge in our lexicon is via a process we call frame matching, adapted from Swier and Stevenson (2004). The automatic frame matcher aligns arguments extracted from an automatically parsed sentence with the frames in VerbNet for the target verb in the sentence. The output of this process is a highly constrained set of candidate roles (possibly of size one) for each potential argument. The resulting singleton sets constitute a (noisy) role assignment for their corresponding arguments, forming our primary-labelled data. This data is then used to train a probability model, described in Section 4, which we employ to label the remaining arguments (those having more than one candidate role).
3.1 Initialization of Candidate Roles
The frame matcher construes extracted arguments from the parsed sentence as being in one of the four main types of syntactic positions (or slots) used by VerbNet frames: subject, object, indirect object, and PP-object.3 Additionally, we specialize the latter by the individual preposition, such as "object of for." For the first three slot types, alignment between the extracted arguments and the frames is relatively straightforward. An extracted subject would be aligned with the subject position in a VerbNet frame, for instance, and the subject role from the frame would be listed as a possible label for the extracted subject.
The alignment of PP-objects is similar to that of the other slot types, except that we add an additional constraint that the associated prepositions must match. For PP-object slots, VerbNet frames often provide an explicit list of allowable prepositions. Alternatively, the frame may specify a required semantic feature such as +path or +loc. In order for an extracted PP-object to align with one of these frame slots, its associated preposition must be included in the list provided by the frame, or have the specified feature. To determine the latter, we manually create lists of prepositions that we judge to have each of the possible semantic features.
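The preposition constraint might be checked as follows; the feature-to-preposition lists here are invented stand-ins for the manually created lists described above.

```python
# Prepositions judged to carry each semantic feature (illustrative only;
# the real lists were compiled by hand).
PREPS_WITH_FEATURE = {
    "+dest": {"to", "into", "onto"},
    "+loc": {"in", "on", "at", "near"},
}

def pp_matches(prep, frame_constraint):
    """A PP-object aligns with a frame slot if its preposition is in the
    frame's explicit list, or carries the frame's required feature."""
    if isinstance(frame_constraint, str):        # a feature such as "+dest"
        return prep in PREPS_WITH_FEATURE.get(frame_constraint, set())
    return prep in frame_constraint              # an explicit preposition list
```
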
In general, this matching procedure assumes that frames describing a syntactic argument structure similar to that of the parsed sentence are more likely to correctly describe the semantic roles of the extracted arguments. Thus, the frame matcher only chooses roles from frames that are the best syntactic matches with the extracted argument set. This is achieved by adopting the scoring method of Swier and Stevenson (2004), in which we compute the portion %Frame of frame slots that can be mapped to an extracted argument, and the portion %Sent of extracted arguments from the sentence that can be mapped to the frame. The score for each frame is given by %Frame + %Sent, and only frames having the highest score contribute candidate roles to the
Extracted slots: SUBJ, OBJ

SUBJ          OBJ      %Frame   %Sent   Score
…                      100      50      150
Agent         Theme    100      100     200
Instrument    Theme    100      100     200
…             Theme    67       100     167

Table 1: An example of frame matching.
extracted arguments. An example scoring is shown in Table 1. Note that two of the frames are tied for the highest score of 200, resulting in two possible roles for the subject (Agent and Instrument), and Theme as the only possible role for the object.
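The scores in Table 1 can be reproduced with a short function. Slots are plain strings here, and the total is taken to be %Frame + %Sent, which is consistent with the scores in the table (e.g., 100 + 50 = 150).

```python
def frame_score(frame_slots, extracted_slots):
    """Score a frame against the extracted argument slots.

    %Frame: portion of frame slots that can be mapped to an extracted
    argument.  %Sent: portion of extracted arguments that can be mapped
    to a frame slot.  Both are integer percentages."""
    pct_frame = round(100 * sum(s in extracted_slots for s in frame_slots)
                      / len(frame_slots))
    pct_sent = round(100 * sum(s in frame_slots for s in extracted_slots)
                     / len(extracted_slots))
    return pct_frame + pct_sent

extracted = ["subj", "obj"]
scores = [frame_score(f, extracted)
          for f in (["subj"], ["subj", "obj"], ["subj", "obj", "pp:to"])]
# scores == [150, 200, 167], matching the Score column of Table 1
```

Only the frames scoring 200 would contribute candidate roles for this sentence; the others are discarded as poorer syntactic matches.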
As mentioned, this frame matching step is very restrictive, and it greatly reduces role ambiguity. Many potential arguments receive only a single candidate role, providing the primary-labelled data we use to train our probability model. Some slots receive no candidate roles, which is an error for argument slots but which is correct for adjuncts. The reduction of candidate roles in general is very helpful in lightening the subsequent load on the probability model to be applied next, but note that it may also cause the correct role to be omitted. We experiment with choosing roles from the frames that are the best syntactic matches, and from all possible frames.
3.2 Adjustments to the Role Mapping
We further extend the frame matcher, which has extensive knowledge of VerbNet, for the separate task of helping to eliminate some of the inconsistencies that are introduced by our role mapping procedure. This is a process that applies concurrently with the initialization of candidate roles described above, but only affects the gold standard labelling of evaluation data.4

For instance, FrameNet assigns the role Side2 to the object of the preposition with occurring with the verb brawl. Side2 is mapped to Theme by our role mapping; however, in VerbNet, brawl does not accept Theme as the object of with. Our mapping thus creates a target (i.e., gold standard) label in the evaluation data that is inconsistent with VerbNet. Since there is no possibility of the role labeller assigning a label that matches such a target, this unfairly raises the system's measured error rate.
4 The Probability Model

Each probability model predicts a role given certain conditioning information, with maximum likelihood estimates determined by the primary-labelled data directly resulting from the frame matching step.5
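As an illustration of the training step, a maximum likelihood model can be estimated from the primary-labelled data by simple counting. Conditioning on the slot alone is our simplification here; the paper's models use other conditioning information.

```python
from collections import Counter, defaultdict

def train_mle(primary_labelled):
    """Estimate P(role | slot) by maximum likelihood from the (slot, role)
    pairs that the frame matcher labelled unambiguously."""
    counts = defaultdict(Counter)
    for slot, role in primary_labelled:
        counts[slot][role] += 1
    return {slot: {role: n / sum(c.values()) for role, n in c.items()}
            for slot, c in counts.items()}

def resolve(model, slot, candidate_roles):
    """Resolve an ambiguous slot: pick the candidate role with the
    highest estimated probability (ties broken arbitrarily)."""
    probs = model.get(slot, {})
    return max(candidate_roles, key=lambda r: probs.get(r, 0.0))

model = train_mle([("subj", "Agent"), ("subj", "Agent"),
                   ("subj", "Theme"), ("obj", "Theme")])
```

Given the toy counts above, a subject left ambiguous between Agent and Instrument would be resolved to Agent, the more frequent role in that slot.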
We also compare one non-probabilistic model to resolve the same set of ambiguous cases:

Default assignment: candidate roles for ambiguous slots are ignored; the four slot classes of subject, object, indirect object and PP-object are assigned the roles Agent, Theme, Recipient, and Location, respectively.

These are the most likely roles assigned by the frame matcher over our development data.
For comparison, we also apply the iterative algorithm developed by Swier and Stevenson (2004), using the same bootstrapping parameters. The method uses backoff over three levels of specificity of probabilities.
5 Materials and Methods
5.1 The Target Verbs
For ease of comparison, we use the same verbs as in Swier and Stevenson (2004), except that we measure performance over a much larger superset of verbs. In that work, a core set of 54 target verbs are selected to represent a variety of classes with interesting role ambiguities, and the system is evaluated against only those verbs. An additional 1105 verbs—all verbs sharing at least one class with the target verbs—are also labelled, in order to provide more data for the probability estimations. Here, we consider our system's performance over the 1159 target verbs that consist of the union of these two sets of verbs.
5.2 The Corpus and Preprocessing
The majority of sentences in FrameNet II are taken from the British National Corpus (BNC Reference Guide, 2000). Our development and test data consists of a percentage of these sentences. For some experiments, these sentences are then merged with a random selection of additional sentences from the BNC in order to provide more training data for the probability estimations. We evaluate performance
6 The verbs appearing in the validation and test sets occur respectively across 161 and 165 FrameNet classes (what in FrameNet are called "frames").
5.4 Methods of Argument Identification
One of the decisions we face is how to evaluate the identification of extracted arguments generated by the system against the manually annotated target arguments provided by FrameNet. We try two methods, the most strict of which is to require full-phrase agreement: an extracted argument and a target argument must cover exactly the same words in the sentence in order for the argument to be considered correctly extracted. This means, for instance, that a prepositional phrase incorrectly attached to an extracted object would render the object incompatible with the target argument, and any system label on it would be counted as incorrect. This evaluation method is commonly used in other work (e.g., Carreras and Marquez, 2004).
The other method we use is to require that only the head of an extracted argument and a target argument match. This latter method helps to provide a fuller picture of the range of arguments found by the system, since there are fewer near-misses caused by attachment errors. Since heads of phrases are often the most semantically relevant part of an argument, labels on heads provide much of the same information as labels on whole phrases. For these reasons, we use head matching for most of our experiments below. For comparison, however, we provide results based on full-phrase matching as well.
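The two matching criteria can be sketched as follows; representing an argument as a (head index, token span) pair is our own illustration of the idea.

```python
def evaluate(extracted, targets, criterion):
    """Count an extracted argument as correct if it matches some target
    under the given criterion; returns (correct, total extracted)."""
    correct = sum(1 for arg in extracted
                  if any(criterion(arg, t) for t in targets))
    return correct, len(extracted)

# Arguments as (head_index, (start, end)) token spans.  The first
# extracted argument has a mis-attached PP, so its span overshoots the
# target while its head still matches.
extracted = [(4, (3, 9)), (1, (0, 2))]
targets = [(4, (3, 5)), (1, (0, 2))]

full_phrase = lambda a, t: a[1] == t[1]   # exact span agreement
head_only = lambda a, t: a[0] == t[0]     # heads must coincide

# evaluate(extracted, targets, full_phrase) -> (1, 2)
# evaluate(extracted, targets, head_only)   -> (2, 2)
```

The mis-attached PP fails the strict full-phrase test but still passes the head test, which is exactly the kind of near-miss the head criterion is meant to tolerate.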
6 Experimental Results
6.1 Experimental Setup
We evaluate our system's performance on several aspects of the overall role labelling task; all results are given in terms of F-measure, 2PR/(P + R).7 The first task is argument identification, in which constituents considered by our system to be arguments (i.e., those that are extracted and labelled) are evaluated against actual target arguments. The second task is labelling extracted arguments, which evaluates the labelling of only those arguments that were correctly extracted. Last is the overall role labelling task, which evaluates the system on the combined tasks of identification and labelling of all target arguments.
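The F-measure used throughout is the harmonic mean of precision P and recall R, i.e. 2PR/(P + R):

```python
def f_measure(precision, recall):
    """Balanced F-measure: the harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# When precision equals recall, F equals that common value.
```
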
We compare our results to an informed baseline
                        Lab.    …      …
Baseline                .80     .52    .83
FM + …                  .83     .65    .78
FM + Dflt. Assgnmt.     .83     .64    .76
ing data.
Two observations indicate the power of the frame matcher. First, even using the non-probabilistic default assignments to resolve ambiguous roles substantially outperforms the baseline (and indeed performs quite close to the best results, since the default role assignment is often the same as that chosen by the probability models). Importantly, the baseline uses the same default assignments, but without the benefit of the frame matcher to further narrow down the possible arguments. Second, the frame matcher alone achieves .60 F-measure on the combined task, not far below the performance of the best models. These results show that once arguments have been extracted, much of the labelling work is performed by the frame matcher's careful application of lexical information.
Henceforth we consider the frame matcher plus this probability model as our basic system, since it is our simplest model, and no other outperforms it.
6.3 Evaluation of Training Methods
In our above experiments, the probabilistic models are trained only on primary-labelled data from the frame matcher run on the FrameNet data. We would like to determine whether using either more data or less noisy data may improve results. To provide more data, we ran the frame matcher on the additional 20% of the BNC. This provides almost 600K more sentences containing our target verbs, yielding a much higher amount of primary-labelled data. To provide less-noisy data, we trained the probability models on manually annotated target labels from system-identified arguments in 1000 sentences. While fewer sentences are used, all arguments in the training data are guaranteed to have a correct role assignment, in contrast to the primary-labelled data output by the frame matcher. (We chose 1000 sentences as an upper bound on an amount of data that could be relatively easily annotated by human judges.)
Training data:          Prim.-lab. (FN)    1K sents annot'd
FM + …                  .65                .65
…                       .62

All frames, FM + …: .65
Full phrase: .49, .61
As mentioned, for most of our evaluations we match the arguments extracted by the system to the target arguments via a match on phrase heads, since head labels provide much useful semantic information. When we instead require that the extracted arguments match the targets exactly, the number of correctly extracted arguments falls from about 80% of the roughly 6700 targets to about 74%, due to increased parsing difficulty. As expected, this results in both the system and the baseline having performance decreases on the overall task.
7 Related Work
Most role labelling systems have required hand-labelled training data. Two exceptions are the subcategorization frame based work of Atserias et al. (2001) and the bootstrapping labeller of Swier and Stevenson (2004), but both are evaluated on only a small number of verbs and arguments. In related unsupervised tasks, Riloff and colleagues have learned "case frames" for verbs (e.g., Riloff and Schmelzenbach, 1998), while Gildea (2002) has learned role-slot mappings (but does not apply the knowledge for the labelling task).
Other role labelling systems have also relied on the extraction of much more complex features or probability models than we adopt here. As a point of comparison, we apply the iterative backoff model from Swier and Stevenson (2004), trained on 20% of the BNC, with our frame matcher and test data. The backoff model achieves an F-measure of .63, slightly below the performance of .65 for our simplest probability model, which uses less training data and takes far less time to run (minutes rather than hours).
In general, it is not possible to make direct comparisons between our work and most other role labellers because of differences in corpora and role sets, and, perhaps more significantly, differences in the selection of target arguments. However, the best supervised systems, using automatic parses to identify full argument phrases in PropBank, achieve about .82 on the task of identifying and labelling arguments (Pradhan et al., 2004). Though this is higher than our performance of .61 on full phrase arguments, our system does not require manually annotated data.
8 Conclusion
In this work, we employ an expensive but highly reusable resource—a verb lexicon—to perform role labelling with a simple probability model and a small amount of unsupervised training data. We outperform similar work that uses much more data and a more complex model, showing the benefit of exploiting lexical information directly. To achieve performance comparable to that of supervised methods may require human filtering or augmentation of the initial labelling. However, given the expense of producing a large semantically annotated corpus, even such "human in the loop" approaches may lead to a decrease in overall resource demands. We use such a corpus for evaluation purposes only, modifying it with a role mapping to correspond to our lexicon. We thus demonstrate that such existing resources can be bootstrapped for lexicons lacking an associated annotated corpus.

Acknowledgments
We gratefully acknowledge the support of NSERC of Canada. We also thank Afsaneh Fazly, who assisted with much of our corpus pre-processing.

References
J. Atserias, L. Padró, and G. Rigau. 2001. Integrating multiple knowledge sources for robust semantic parsing. In Proc. of the International Conf. on Recent Advances in NLP.
C. F. Baker, C. J. Fillmore, and J. B. Lowe. 1998. The Berkeley FrameNet Project. In Proc. of COLING-ACL, p. 86–90.
BNC Reference Guide. 2000. Reference Guide for the British National Corpus (World Edition), second edition.
X. Carreras and L. Marquez, editors. 2004. CoNLL-04 Shared Task.
M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.
D. Gildea. 2002. Probabilistic models of verb-argument structure. In Proc. of the 19th International Conf. on Computational Linguistics (COLING), p. 308–314.
D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.
K. Hacioglu, S. Pradhan, W. Ward, J. H. Martin, and D. Jurafsky. 2004. Semantic role labeling by tagging syntactic chunks. In Proc. of the 8th CoNLL, p. 110–113.
K. Kipper, H. T. Dang, and M. Palmer. 2000. Class based construction of a verb lexicon. In Proc. of the 17th AAAI Conf.
B. Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press.
S. Pradhan, W. Ward, K. Hacioglu, J. Martin, and D. Jurafsky. 2004. Shallow semantic parsing using support vector machines. In Proc. of HLT/NAACL.
E. Riloff and M. Schmelzenbach. 1998. An empirical approach to conceptual case frame acquisition. In Proc. of the 6th WVLC.
D. L. T. Rohde. 2004. TGrep2 user manual ver. 1.11. http://tedlab.mit.edu/~dr/Tgrep2.
R. Swier and S. Stevenson. 2004. Unsupervised semantic role labelling. In Proc. of the 2004 Conf. on EMNLP, p. 95–102.