
Context-Sensitive Spoken Dialogue Processing

with the DOP Model

Rens Bod

Institute for Logic, Language and Computation

University of Amsterdam

Spuistraat 134, NL-1012 VB Amsterdam

&

School of Computer Studies

University of Leeds

Leeds LS2 9JT, UK


Abstract

We show how the DOP model can be used for fast and robust context-sensitive processing of spoken input in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem ("Public Transport Information System"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO Priority Programme "Language and Speech Technology". In this paper, we extend the original DOP model to context-sensitive interpretation of spoken input. The system we describe uses the OVIS corpus (which consists of 10,000 trees enriched with compositional semantics) to compute from an input word-graph the best utterance together with its meaning. Dialogue context is taken into account by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user answer is analyzed and interpreted. Our experiments indicate that the context-sensitive DOP model obtains better accuracy than the original model, allowing for fast and robust processing of spoken input.

1. Introduction

The Data-Oriented Parsing (DOP) model is a corpus-based parsing model which uses subtrees from parse trees in a corpus to analyze new sentences. The occurrence-frequencies of the subtrees are used to estimate the most probable analysis of a sentence (cf. Bod 1992, 93, 95, 98; Bod & Kaplan 1998; Bonnema et al. 1997; Chappelier & Rajman 1998; Charniak 1996; Cormons 1999; Goodman 1996, 98; Kaplan 1996; Rajman 1995; Scha 1990, 92; Scholtes 1992; Sekine & Grishman 1995; Sima'an 1995, 97, 99; Tugwell 1995; Way 1999).

To date, DOP has mainly been applied to corpora of trees whose labels consist of primitive symbols (but see Bod & Kaplan 1998, Cormons 1999 and Way 1999 for more sophisticated DOP models). Let us illustrate the original DOP model, called DOP1 (Bod 1992, 93), with a very simple example. Assume a corpus consisting of only two trees:

(1) [figure: a corpus of two parse trees; the first contains the words "Mary" and "likes", the second the words "hates" and "Susan"]

New sentences may be derived by combining subtrees from this corpus, by means of a node-substitution operation indicated as °. Node-substitution identifies the leftmost nonterminal frontier node of one tree with the root node of a second tree (i.e., the second tree is substituted on the leftmost nonterminal frontier node of the first tree). Under the convention that the node-substitution operation is left-associative, a new sentence such as Mary likes Susan can be derived by combining subtrees from this corpus, as in figure (2):

(2) [figure: the sentence Mary likes Susan derived by combining three corpus subtrees with the node-substitution operation °]

Other derivations may yield the same parse tree; for instance:

(3) [figure: an alternative derivation of the same parse tree for Mary likes Susan, built from different corpus subtrees]
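For concreteness, here is a minimal Python sketch of the leftmost node-substitution operation over bracketed trees. The list encoding and helper names are our own illustration, not part of any DOP implementation; a node given without children, such as ["NP"], stands for an open substitution site.

import copy

# Trees are nested lists: ["label", child, ...]; a terminal is a plain string,
# and a node with no children, e.g. ["NP"], is an open substitution site.

def leftmost_open(tree, path=()):
    """Return the child-index path to the leftmost open node, or None."""
    _label, *children = tree
    if not children:
        return path                      # this node itself is open
    for i, child in enumerate(children):
        if isinstance(child, list):
            found = leftmost_open(child, path + (i,))
            if found is not None:
                return found
    return None

def substitute(tree, sub):
    """The ° operation: substitute `sub` on the leftmost open node of `tree`."""
    path = leftmost_open(tree)
    if path is None:
        raise ValueError("tree has no open substitution site")
    if not path:                         # the whole tree is a single open node
        return copy.deepcopy(sub)
    result = copy.deepcopy(tree)
    parent = result
    for i in path[:-1]:
        parent = parent[i + 1]           # +1 skips the label slot
    if parent[path[-1] + 1][0] != sub[0]:
        raise ValueError("root label of sub must match the open node")
    parent[path[-1] + 1] = copy.deepcopy(sub)
    return result

# Left-associative derivation of "Mary likes Susan" (cf. figure (2)):
t = ["S", ["NP"], ["VP", ["V", "likes"], ["NP"]]]
t = substitute(t, ["NP", "Mary"])
t = substitute(t, ["NP", "Susan"])
print(t)  # ['S', ['NP', 'Mary'], ['VP', ['V', 'likes'], ['NP', 'Susan']]]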

DOP computes the probability of a subtree t as the probability of selecting t among all corpus-subtrees that can be substituted on the same node as t. This probability is equal to the number of occurrences of t, |t|, divided by the total number of occurrences of all subtrees t' with the same root label as t. Let r(t) return the root label of t. Then we may write:

P(t) = |t| / Σ_{t': r(t')=r(t)} |t'|

Since each node substitution is independent of previous substitutions, the probability of a derivation D = t1 ° ... ° tn is computed as the product of the probabilities of its subtrees ti:

P(t1 ° ... ° tn) = Πi P(ti)

The probability of a parse tree is the probability that it is generated by any of its derivations. The probability of a parse tree T is thus computed as the sum of the probabilities of its distinct derivations D:

P(T) = Σ_{D derives T} P(D)

Usually, several different parse trees can be derived for a single sentence, and in that case their probabilities provide a preference ordering.
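For concreteness, these three equations can be transcribed directly into Python. The tuple encoding of subtrees and the assumption that all derivations of T are given are ours, for illustration only:

from collections import Counter
from math import prod

def root(t):
    return t[0]          # trees encoded as nested tuples: (label, children...)

def subtree_probabilities(corpus_subtrees):
    """P(t) = |t| / sum of |t'| over subtrees t' with the same root label."""
    counts = Counter(corpus_subtrees)
    root_totals = Counter()
    for t, c in counts.items():
        root_totals[root(t)] += c
    return {t: c / root_totals[root(t)] for t, c in counts.items()}

def derivation_probability(derivation, P):
    """P(t1 ° ... ° tn) = product of the subtree probabilities."""
    return prod(P[t] for t in derivation)

def parse_probability(derivations, P):
    """P(T) = sum of P(D) over all derivations D that yield T."""
    return sum(derivation_probability(d, P) for d in derivations)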

Bod (1992) demonstrated that DOP can be implemented using conventional context-free parsing techniques by converting subtrees into rewrite rules. However, the computation of the most probable parse of a sentence is NP-hard (Sima'an 1996). The parse probabilities can be estimated by Monte Carlo techniques (Bod 1995, 98), but efficient algorithms are only known for the most probable derivation of a sentence (the "Viterbi parse" -- see Bod 1995, Sima'an 1995) or for the "maximum constituents parse" of a sentence (Goodman 1996, 98; Bod 1996).

In van den Berg et al. (1994) and Bod et al. (1996), DOP was generalized to semantic interpretation by using corpora annotated with compositional semantics. In the current paper, we extend the DOP model to context-sensitive interpretation in spoken dialogue understanding, and we show how it can be used as an efficient and robust NLP component in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem ("Public Transport Information System"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO Priority Programme "Language and Speech Technology" (see Boves et al. 1996).

The backbone of any DOP model is an annotated language corpus. In the following section, we therefore start with a description of the corpus that was developed for the OVIS system, the "OVIS corpus". We then show how this corpus can be used by DOP to compute the most probable meaning M of a word string W: argmax_M P(M, W). Next we demonstrate how the dialogue context C can be integrated so as to compute argmax_M P(M, W | C). Finally, we interface DOP with speech and show how the most probable meaning M of an acoustic utterance A given dialogue context C is computed: argmax_M P(M, A | C). The last section of this paper deals with the experimental evaluation of the model.

2. The OVIS corpus: trees enriched with compositional semantics

The OVIS corpus currently consists of 10,000 syntactically and semantically annotated user utterances that were collected on the basis of a pilot version of the OVIS system.1 The user utterances are answers to system questions such as "From where to where do you want to travel?", "At what time do you want to travel from Utrecht to Leiden?", "Could you please repeat your destination?".

For the syntactic annotation of the OVIS user utterances, a tag set of 40 lexical/syntactic categories was developed. This tag set was deliberately kept small so as to improve the robustness of the DOP parser. A correlate of this robustness is that the parser overgenerates, but as long as the probability model can accurately select the correct utterance-analysis from all possible analyses, this overgeneration is not problematic. Robustness is further achieved by a special category, called ERROR, which is used for stutters, false starts, and repairs. No predefined grammar is used to determine the correct syntactic annotation; instead there is a small set of annotation guidelines that has the degree of detail necessary to avoid an "anything goes" attitude on the annotator's part, but leaves room for the annotator's perception of the structure of the utterance (see Bonnema et al. 1997).

1 The pilot version is based on a German system developed by Philips Dialogue Systems in Aachen (Aust et al. 1995), adapted to Dutch.

The semantic annotations are based on the update language defined for the OVIS dialogue manager by Veldhuijzen van Zanten (1996). Each update consists of an information state or form. This form is a hierarchical structure with slots and values for the origin and destination of a train connection, for the time at which the user wants to leave or arrive, etc. Slots represent information that is already in the information form, whereas values represent information that is related to the change of information. The distinction between slots and values can be regarded as a special case of the ground and focus distinction (Vallduvi 1990; van Noord et al. 1999), where ground relates to the given information (slot) and focus to the new information (value). For example, if the system asks Van waar naar waar wilt u reizen? ("From where to where do you want to travel?"), and the user answers with Ik wil van Utrecht naar Leiden (literally: "I want from Utrecht to Leiden"), then this yields the update in (4):

(4) user.wants.(origin.place.town.utrecht ; destination.place.town.leiden)

In this update, utrecht and leiden are the values of the slots origin.place.town and destination.place.town respectively. The update language of Veldhuijzen van Zanten (1996) also allows for (a limited) encoding of speech-act information, such as denials and corrections. For instance, the user utterance Ik wil niet vandaag maar morgen naar Almere (literally: "I want not today but tomorrow to Almere") yields the following update:

(5) user.wants.(([# today];[! tomorrow]);destination.place.town.almere)

The "#" in (5) means that the information between the square brackets is denied, while the "!" denotes the corrected information.

This update language is used to enrich the syntactic nodes of the OVIS trees with semantic annotations, which is accomplished in a compositional way by means of the following convention:

• Every meaningful lexical node is annotated with an update expression which represents the meaning of the lexical item.

• Every meaningful non-lexical node is annotated with a formula schema which indicates how its meaning representation can be put together out of the meaning representations assigned to its daughter nodes.

In the examples below, these schemata use the variable d1 to indicate the meaning of the leftmost daughter constituent, d2 to indicate the meaning of the second daughter constituent, etc. For instance, the full (syntactic and semantic) annotation for the above sentence Ik wil niet vandaag maar morgen naar Almere is given in figure (6).

(6) [figure: annotated parse tree for Ik wil niet vandaag maar morgen naar Almere; lexical nodes carry update expressions (user, wants, # today, ! tomorrow, destination.place, town.almere) and non-lexical nodes carry formula schemata such as d1.d2 and (d1;d2)]

The meaning of Ik wil niet vandaag maar morgen naar Almere is thus compositionally built up out of the meanings of its sub-constituents. Substituting the meaning representations into the corresponding variables yields the update semantics of the top-node: user.wants.(([# today];[! tomorrow]);destination.place.town.almere).2
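The bottom-up substitution of daughter meanings into the d1, d2, ... variables can be sketched in a few lines of Python. The tuple encoding and the simplified rendering of the tree in (6) are our own illustration (node labels are approximate), not the OVIS implementation:

import re

def compose(node):
    """Instantiate a node's formula schema with its daughters' meanings.

    A node is (label, semantics, *children); a lexical node has no children,
    and its semantics is the update expression itself.
    """
    _label, sem, *children = node
    if not children:
        return sem
    meanings = [compose(c) for c in children]
    # Replace each variable d1, d2, ... by the corresponding daughter meaning.
    return re.sub(r"d(\d+)", lambda m: meanings[int(m.group(1)) - 1], sem)

# Simplified rendering of figure (6):
s = ("S", "d1.d2",
     ("PER", "user"),
     ("VP", "d1.d2",
      ("V", "wants"),
      ("MP", "(d1;d2)",
       ("MP", "(d1;d2)",
        ("MP", "[d1 d2]", ("ADV", "#"), ("MP", "today")),
        ("MP", "[d1 d2]", ("ADV", "!"), ("MP", "tomorrow"))),
       ("MP", "d1.d2", ("P", "destination.place"), ("NP", "town.almere")))))
print(compose(s))
# user.wants.(([# today];[! tomorrow]);destination.place.town.almere)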

Note that our annotation convention is based on the notion of surface compositionality: it assumes that the meaning representation of a surface-constituent can in fact always be composed out of the meaning representations of its sub-constituents. This assumption is not unproblematic. To maintain it in the face of phenomena such as non-standard quantifier scope or discontinuous constituents creates complications in the syntactic or semantic analyses assigned to certain sentences and their constituents. It is unlikely therefore that our annotation convention can be viewed as completely general. However, it turned out to be powerful enough for annotating the sentences from the OVIS corpus. Moreover, to the best of our knowledge, the OVIS corpus is the first corpus annotated according to the Principle of Compositionality of Meaning.

Figure (7) gives an example of the ERROR category for the annotation of the ill-formed sentence Van Voorburg naar van Venlo naar Voorburg ("From Voorburg to from Venlo to Voorburg"):

(7) [figure: annotated parse tree for Van Voorburg naar van Venlo naar Voorburg; the false start Van Voorburg naar is dominated by an ERROR node without semantic annotation, and the top MP node carries the schema d2]

2 The syntactic annotation of (6) may seem relatively arbitrary, as the annotator could just as well have created a tree with the two MPs "niet vandaag maar morgen" and "naar almere" as direct sisters of the V "wil". However, in Bonnema (1996) and Bod et al. (1996) it is shown that the DOP model is largely insensitive to inconsistencies in the syntactic annotations, as long as the semantic annotations yield the correct top-node semantics.

Note that the ERROR category has no semantic annotation; in the top-node semantics of Van Voorburg naar van Venlo naar Voorburg, the meaning of the false start Van Voorburg naar is thus absent:

(8) (origin.place.town.venlo ; destination.place.town.voorburg)

The manual annotation of 10,000 OVIS utterances may seem a laborious and error-prone process. In order to expedite this task, a flexible and powerful annotation workbench (SEMTAGS) was developed by Bonnema (1996). SEMTAGS is a graphical interface, written in C using the XVIEW toolkit. It offers all functionality needed for examining, evaluating, and editing syntactic and semantic analyses. SEMTAGS is mainly used for correcting the output of the DOP parser. After the first 100 OVIS utterances were annotated and checked by hand, the parser used the subtrees of these annotations to produce analyses for the next 100 OVIS utterances. These new analyses were checked and corrected by the annotator using SEMTAGS, and were added to the total set of annotations. This new set of 200 analyses was then used by the DOP parser to predict the analyses for the next subset of OVIS utterances. In this incremental, bootstrapping way, 10,000 OVIS utterances were annotated in approximately 600 hours (supervision included).

3. Using the OVIS corpus for data-oriented semantic analysis

An important advantage of a corpus annotated according to the Principle of Compositionality of Meaning is that the subtrees can directly be used by DOP for computing syntactic/semantic representations for new utterances. The only difference is that we now have composite labels which contain not only syntactic but also semantic information. By way of illustration, we show how a representation for the input utterance Ik wil van Venlo naar Almere ("I want from Venlo to Almere") can be constructed out of subtrees from the trees in figures (6) and (7):

(9) [figure: derivation of Ik wil van Venlo naar Almere built by combining, with °, subtrees from the trees in figures (6) and (7)]

This derivation yields the following top-node update semantics:

(10) user.wants.(origin.place.town.venlo; destination.place.town.almere)

The probability calculations for this semantic DOP model are similar to those of the original DOP model. That is, the probability of a subtree t is equal to the number of occurrences of t in the corpus divided by the number of occurrences of all subtrees t' that can be substituted on the same node as t. The probability of a derivation D = t1 ° ... ° tn is the product of the probabilities of its subtrees ti. The probability of a parse tree T is the sum of the probabilities of all derivations D that produce T. And, finally, the probability of a meaning M and a word string W is the sum of the probabilities of all parse trees T of W whose top-node meaning is logically equivalent to M (see Bod et al. 1996).
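In code, the last step amounts to summing parse probabilities per top-node meaning. The sketch below assumes meanings are already reduced to a canonical form so that logically equivalent updates compare equal, a simplification of the logical-equivalence test of Bod et al. (1996):

from collections import defaultdict

def meaning_probabilities(scored_parses):
    """P(M, W): sum the probabilities of all parses of W with top meaning M.

    scored_parses: iterable of (canonical_meaning, parse_probability) pairs.
    """
    dist = defaultdict(float)
    for meaning, p in scored_parses:
        dist[meaning] += p
    return dist

def most_probable_meaning(scored_parses):
    dist = meaning_probabilities(scored_parses)
    return max(dist.items(), key=lambda kv: kv[1])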

As with the most probable parse, the most probable meaning of a word string cannot be computed in deterministic polynomial time. Although the most probable meaning can be estimated by Monte Carlo techniques (see Bod 1995, 98), the computation of a sufficiently large number of random derivations is currently not efficient enough for a practical application. To date, only the most probable derivation can be computed in near-to-real-time (by a best-first Viterbi optimization algorithm -- see Sima'an 1995). We will therefore assume that most of the probability mass for each top-node meaning is focussed on a single derivation. Under this assumption, the most probable meaning of a string is the top-node meaning generated by the most probable derivation of that string (see also section 5).

4. Extending DOP to dialogue context: context-dependent subcorpora

We now extend the semantic DOP model to compute the most likely meaning of a sentence given the previous dialogue. In general, the probability of a top-node meaning M and a particular word string Wi given a dialogue context Ci = Wi-1, Wi-2 ... W1 is given by P(M, Wi | Wi-1, Wi-2 ... W1). Since the OVIS user utterances are typically answers to previous system questions, we assume that the meaning of a word string Wi does not depend on the full dialogue context but only on the previous (system) question Wi-1. Under this assumption,

P(M, Wi | Ci) = P(M, Wi | Wi-1)

For DOP, this formula means that the update semantics of a user utterance Wi is computed on the basis of the subcorpus which contains all OVIS utterances (with their annotations) that are answers to the system question Wi-1. This gives rise to the following interesting model for dialogue processing: each system question triggers a context-dependent domain (a subcorpus) by which the user answer is analyzed and interpreted. Since the number of different system questions is a small closed set (see Veldhuijzen van Zanten 1996), we can create off-line for each subcorpus the corresponding DOP parser.
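In outline, such a dispatch can be pictured as follows. The DopParser class and the question-type keys are hypothetical stand-ins for the actual OVIS components, which the paper does not spell out:

class DopParser:
    """Stand-in for a DOP parser compiled off-line from one subcorpus."""
    def __init__(self, subcorpus_name):
        self.subcorpus_name = subcorpus_name
    def analyze(self, word_graph):
        # Viterbi search for the most probable derivation (section 5); omitted.
        raise NotImplementedError

# One parser per system-question type; the set of questions is small and closed.
PARSERS = {q: DopParser(q) for q in ("place", "date", "time", "yes/no")}

def interpret(previous_question_type, word_graph):
    # The previous system question W_{i-1} selects the subcorpus / parser.
    return PARSERS[previous_question_type].analyze(word_graph)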

In OVIS, the following context-dependent subcorpora can be distinguished:

(1) place subcorpus: utterances following questions like From where to where do you want to travel?, What is your destination?, etc.

(2) date subcorpus: utterances following questions like When do you want to travel?, When do you want to leave from X?, When do you want to arrive in Y?, etc.

(3) time subcorpus: utterances following questions like At what time do you want to travel?, At what time do you want to leave from X?, At what time do you want to arrive in Y?, etc.

(4) yes/no subcorpus: utterances following y/n-questions like Did you say that ...?, Thus you want to arrive at ...?

Note that a subcorpus can contain utterances whose topic goes beyond the previous system question. For example, if the system asks From where to where do you want to travel?, and the user answers with: From Amsterdam to Groningen tomorrow morning, then the date-expression tomorrow morning ends up in the place-subcorpus.

It is interesting to note that this context-sensitive DOP model can easily be generalized to domain-dependent interpretation in general: a corpus is clustered into subcorpora, where each subcorpus corresponds to a topic-dependent domain. A new utterance is interpreted by the domain in which it gets highest probability. Since small subcorpora tend to assign higher probabilities to utterances than large subcorpora (because relative frequencies of subtrees in small corpora tend to be higher), it follows that an utterance tends to be interpreted by the smallest, most specific domain in which it can still be analyzed.
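As a sketch, and assuming each domain model exposes probability and analyze methods (an interface of our own invention, for illustration):

def interpret_by_domain(utterance, domain_models):
    """Interpret an utterance with the domain (subcorpus) model that assigns
    it the highest probability; small, specific domains tend to win because
    their subtree relative frequencies are higher."""
    name, model = max(domain_models.items(),
                      key=lambda kv: kv[1].probability(utterance))
    return name, model.analyze(utterance)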

5. Interfacing DOP with speech

So far, we have dealt with the estimation of the probability P(M, W | C) of a meaning M and a word string W given a dialogue context C. However, in spoken dialogue processing, the word string W is not given. The input for DOP in the OVIS system consists of word-graphs produced by the speech recognizer (these word-graphs are generated by our project partners from the University of Nijmegen).

A word-graph is a compact representation for all sequences of words that the speech recognizer hypothesizes for an acoustic utterance A (see e.g. figure 11). The nodes of the graph represent points in time, and a transition between two nodes i and j represents a word w that may have been uttered between the corresponding points in time. For convenience we refer to transitions in the word-graph using the notation <w, i, j>. The word-graphs are optimized to eliminate epsilon transitions. Such transitions represent periods of time when the speech recognizer hypothesizes that no words are uttered. Each transition is associated with an acoustic score. This is the negative base-10 logarithm of the acoustic probability P(a | w) for a hypothesized word w, normalized by the length of w. Converting these acoustic scores back into their corresponding probabilities, the acoustic probability P(A | W) for a hypothesized word string W can be computed as the product of the probabilities associated with each transition on the corresponding word-graph path. Figure (11) shows an example of a simplified word-graph for the uttered sentence Ik wil graag vanmorgen naar Leiden ("I'd like to go this morning to Leiden"):

(11) [figure: simplified word-graph; one path reads ik wil graag van Maarn naar Leiden, where van Maarn is a recognizer hypothesis competing with the uttered vanmorgen]
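Given the score convention just described, P(A | W) for a single word-graph path can be recovered as in the following sketch (the transition tuple layout is our illustrative assumption):

from math import prod

def path_acoustic_probability(path):
    """P(A | W) for one word-graph path.

    path: list of transitions (word, i, j, score), where score is the
    negative base-10 logarithm of the normalized acoustic probability.
    """
    return prod(10.0 ** -score for (_w, _i, _j, score) in path)

# e.g. two transitions with scores 0.3 and 1.0 yield 10**-1.3:
# path_acoustic_probability([("wil", 1, 2, 0.3), ("graag", 2, 3, 1.0)])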

The probabilistic interface between DOP and the speech word-graphs thus consists of the interface between the DOP probabilities P(M, W | C) and the word-graph probabilities P(A | W), so as to compute the probability P(M, A | C) and argmax_M P(M, A | C).

We start by rewriting P(M, A | C) as:

P(M, A | C) = ΣW P(M, W, A | C)

= ΣW P(M, W | C) · P(A | M, W, C)

The probability P(M, W | C) is computed by the dialogue-sensitive DOP model as explained in the previous section. To estimate the probability P(A | M, W, C) on the basis of the information available in the word-graphs, we must make the following independence assumption: the acoustic utterance A depends only on the word string W and not on its context C and meaning M (cf. Bod & Scha 1994). Under this assumption:

P(M, A | C) = ΣW P(M, W | C) · P(A | W)

To make fast computation feasible, we furthermore assume that most of the probability mass for each meaning and acoustic utterance is focused on a single word string W (this will allow for efficient Viterbi best first search):

P(M, A | C) = P(M, W | C) · P(A | W)

Thus, the probability of a meaning M for an acoustic utterance A given a context C is computed as the product of the DOP probability P(M, W | C) and the word-graph probability P(A | W).
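A naive scorer built on this product would enumerate word-graph paths, as below. This is for exposition only, since the system parses the word-graph directly (next paragraph); dop_best is a hypothetical stand-in returning the meaning and probability P(M, W | C) of the most probable derivation of a word string:

from math import prod

def best_interpretation(paths, dop_best):
    """Approximate argmax_M P(M, A | C) = P(M, W | C) * P(A | W).

    paths: word-graph paths as lists of (word, i, j, score) transitions.
    dop_best(words) -> (meaning, p) with p ~ P(M, W | C)  [hypothetical].
    """
    best_meaning, best_score = None, 0.0
    for path in paths:
        words = [w for (w, _i, _j, _s) in path]
        meaning, p_mw = dop_best(words)
        p_aw = prod(10.0 ** -s for (_w, _i, _j, s) in path)   # P(A | W)
        if p_mw * p_aw > best_score:
            best_meaning, best_score = meaning, p_mw * p_aw
    return best_meaning, best_score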

As to the parsing of word-graphs, it is well-known that chart parsing algorithms for word strings can easily be generalized to word-graphs (e.g. van Noord 1995). For word strings, the initialization of the chart usually consists of entering each word wi into chart entry <i, i+1>. For word-graphs, a transition <w, i, j> corresponds to a word w between positions i and j, where j is not necessarily equal to i+1 as is the case for word strings (see figure 11). It is easy to see that for word-graphs the initialization of the chart consists of entering each word w from transition <w, i, j> into chart entry <i, j>. Next, parsing proceeds with the subtrees that are triggered by the dialogue context C. That is, every subtree t is converted into a context-free rewrite rule root(t) → yield(t), and every such rule is indexed to maintain the link to its original subtree (see Bod 1995). The rules obtained in this way are used to construct a chart-like parse forest for the input word-graph. The most probable derivation is then computed from the chart by a dynamic programming algorithm based on the well-known Viterbi algorithm for CKY (Sima'an 1995, 1997). This algorithm has a time complexity which is cubic in the number of word-graph nodes and linear in the grammar size. Sima'an's algorithm improves on previous statistical disambiguation algorithms (such as Jelinek et al. 1990) in that its time complexity is linear rather than quadratic in the grammar size. Finally, the top-node meaning of the tree resulting from the most probable derivation is taken as the best meaning M for an utterance A given context C.
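The generalized chart initialization is then a one-line change from the word-string case, as in this sketch (data layout assumed for illustration):

from collections import defaultdict

def initialize_chart(transitions):
    """Seed the chart from a word-graph: each transition <w, i, j> enters
    word w into chart entry (i, j); for a plain word string, j = i + 1."""
    chart = defaultdict(list)
    for (w, i, j) in transitions:
        chart[(i, j)].append(w)
    return chart

# Word-string case as a special word-graph:
# initialize_chart((w, i, i + 1) for i, w in enumerate(words))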

6. Evaluation

In our experimental evaluation of DOP we were interested in the following questions:

(1) Is DOP fast enough for practical context-sensitive spoken language understanding?

(2) Can we constrain the OVIS subtrees without losing accuracy?

(3) What is the impact of dialogue context on the accuracy?

For all experiments, we used a random split of the 10,000 OVIS trees into a 90% training set and a 10% test set. The training set was divided up into the four subcorpora described in section 4, which served to create the corresponding DOP parsers. The 1000 word-graphs for the test set utterances were used as input. For each word-graph, the previous system question was known and determined the particular DOP parser, while the user utterances themselves were kept apart. As to the complexity of the word-graphs: the average number of words per word-graph path is 4.6, and the average number of transitions per word (i.e., the number of transitions in the word-graph divided by the actual number of words in the acoustic utterance) is 4.2. All experiments were run on an SGI Indigo with a MIPS R10000 processor and 640 Mbyte of core memory.

To establish the semantic accuracy of the system, the best meanings produced by the DOP parser were compared with the meanings in the test set. Besides an exact match metric, we also used a more fine-grained evaluation of semantic accuracy. Following the proposals in Boros et al. (1996) and van Noord et al. (1999), we translated each update meaning into a set of semantic units, where a unit is a triple <communicative function, slot, value>. For instance, the next example

user.wants.travel.destination.([# place.town.almere];[! place.town.alkmaar])

translates as:

<denial, destination.place.town, almere>
<correction, destination.place.town, alkmaar>

Both the updates in the OVIS test set and the updates produced by the DOP parser were translated into semantic units of the form given above. The semantic accuracy was then evaluated in three different ways: (1) match, the percentage of updates which were exactly correct (i.e. which exactly matched the updates in the test set); (2) precision, the number of correct semantic units divided by the number of semantic units which were produced; (3) recall, the number of correct semantic units divided by the number of semantic units in the test set.
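Over sets of unit triples, the three measures are straightforward to compute; a minimal sketch, with units represented as hashable tuples:

def semantic_accuracy(gold, predicted):
    """Exact match, precision and recall over sets of semantic-unit triples."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return gold == predicted, precision, recall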

As to question (1), we already suspected that it is not efficient to use all OVIS subtrees. We therefore performed experiments with versions of DOP where the subtree collection is restricted to subtrees with a certain maximum depth. The following table shows, for four different maximum depths (where the maximum number of frontier words is limited to 3), the number of subtree types in the training set, the semantic accuracy in terms of match, precision and recall (as percentages), and the average CPU time per word-graph in seconds.

subtree-depth   #subtrees   match   precision   recall   CPU time
1                3191       76.2      79.4       82.1      0.21
2               10545       78.5      83.0       84.3      0.86
3               32140       79.8      84.7       86.2      2.76
4               64486       80.6      85.8       86.9      6.03

Table 1: Experimental results on OVIS word-graphs

The experiments show that at subtree-depth 4 the highest accuracy is achieved, but that only for subtree-depths 1 and 2 are the processing times fast enough for practical applications. Thus there is a trade-off between efficiency and accuracy: the efficiency deteriorates as the accuracy improves. We believe that a match of 78.5% and a corresponding precision and recall of resp. 83.0% and 84.3% (at the fast processing times of depth 2) is promising enough for further research. Moreover, by testing DOP directly on the word strings (without the word-graphs), a match of 97.8% was achieved. This shows that linguistic ambiguities do not play a significant role in this domain; the actual problem is caused by the ambiguities in the word-graphs (i.e. the multiple paths).

Secondly, we were concerned with the question whether we can impose constraints on the subtrees other than their depth, in such a way that the accuracy does not deteriorate and perhaps even improves. To answer this question, we kept the maximal subtree-depth constant at 3, and employed the following constraints:

• Eliminating once-occurring subtrees: this led to a considerable decrease on all metrics; e.g. match decreased from 79.8% to 75.5%.

• Restricting subtree lexicalization: restricting the maximum number of words in the subtree frontiers to resp. 3, 2 and 1 showed a consistent decrease in semantic accuracy, similar to the effect of restricting the subtree depth in table 1. The match dropped from 79.8% to 76.9% if each subtree was lexicalized with at most one word.

• Eliminating subtrees with only non-head words: this also led to a decrease in accuracy; the most stringent metric (match) decreased from 79.8% to 77.1%. Evidently, there can be important relations in OVIS that involve non-head words.

These results thus indicate that the accuracy decreases if we restrict the subtree set by frequency, number of lexical items, or non-head words.

Finally, we were interested in the impact of dialogue context on semantic accuracy. To test this, we neglected the previous system questions and created one DOP parser for the whole training set. The semantic accuracy metric match dropped from 79.8% to 77.4% (for depth 3). Moreover, the CPU time per sentence deteriorated by a factor of 4 (which is mainly due to the fact that larger training sets yield slower DOP parsers).

The following result nicely illustrates how the dialogue context can contribute to the correct prediction of the meaning of an utterance. In parsing the word-graph corresponding to the acoustic utterance Donderdag acht februari ("Thursday eight February"), the DOP model without dialogue context assigned the highest probability to a derivation yielding the word string Dordrecht acht februari and its meaning. The uttered word Donderdag was thus interpreted as the town Dordrecht, which was among the other hypothesized words in the word-graph. When the DOP model took the dialogue context into account, the previous system question When do you want to leave? was known and thus triggered only the subtrees from the date-subcorpus, which now correctly assigned the highest probability to Donderdag acht februari and its meaning, rather than to Dordrecht acht februari.

7. Conclusions

We have shown how the DOP model can be used for efficient and robust context-sensitive interpretation of spoken input in the OVIS spoken dialogue system. The system we described uses syntactically and semantically analyzed subtrees from the OVIS corpus to compute from an input word-graph the best utterance together with its meaning. We showed how dialogue context is integrated by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user utterance is analyzed and interpreted.

The experimental evaluation showed that DOP's combination of lexical, syntactic, semantic and dialogue dependencies yields promising results. The experiments also indicated that any restrictions on the subtree-set diminish the accuracy, even when intuitively unimportant subtrees with only non-head words were discarded. Neglecting dialogue context also diminished the accuracy.

As future research, we want to investigate further optimization techniques for DOP, including finite-state approximations. We want to enrich the OVIS utterances with discourse annotations, such as co-reference links, in order to cope with anaphora resolution. We will also extend the annotations with functional structures associated with the surface structures so as to deal with more complex linguistic phenomena and feature generalizations (see Bod & Kaplan 1998).

Acknowledgements

This research was supported by NWO, the Netherlands Organization for Scientific Research (Priority Programme Language and Speech Technology). We are grateful to Khalil Sima'an for the use of his optimized DOP parser, and to Remko Bonnema for the use of SEMTAGS and the relevant semantic interfaces. The OVIS corpus was annotated by Mike de Kreek and Sascha Schütz.
