Budgeted Learning, Part I: The Multi-Armed Bandit Case (submitted)


Omid Madani, Daniel J. Lizotte, Russell Greiner

Dept. of Computing Science
University of Alberta
Edmonton, T6J 2E8
{madani, dlizotte, greiner}@cs.ualberta.ca

Abstract

We introduce and motivate the task of learning under a budget. We focus on a basic problem in this space: selecting the optimal bandit after a period of experimentation in a multi-armed bandit setting, where each experiment is costly, our total costs cannot exceed a fixed pre-specified budget, and there is no reward collection during the learning period. We address the computational complexity of the problem, propose a number of algorithms, and report on the performance of the algorithms, including their (worst-case) approximation properties, as well as their empirical performance on various problem instances. Our results show that several obvious algorithms, such as round-robin and random, can perform poorly; we also propose new types of algorithms that often work significantly better.

1 Introduction

Learning tasks typically begin with a data sample, e.g., symptoms and test results for a set of patients, together with their clinical outcomes. By contrast, many real-world studies begin with no actual data, but instead with a budget: funds that can be used to collect the relevant information. For example, one study has allocated $2 million to develop a system to diagnose cancer, based on a battery of patient tests, each with its own (known) cost and (unknown) discriminative power. Given our goal of identifying the most accurate classifier, what is the best way to spend the $2 million? Should we indiscriminately run every test on every patient until exhausting the budget, or selectively, and dynamically, determine which tests to run on which patients? We call this problem budgeted learning.

An initial step in any learning task is identifying the most discriminative features. Our present work studies the budgeted learning problem in the following "coins problem", closely related to feature selection: We are given (distinguishable) coins with unknown head probabilities. We are allowed to sequentially specify a coin to toss and then observe the outcome of that toss, but only for a known, fixed number of tosses. After this trial period, we must declare a winner coin. Our goal is to pick the coin with the highest head probability. However, given the limits on our trial period, we seek a tossing strategy that, on average, leads to picking a coin that is as close to the best coin as possible.

There is a tight relation between identifying the best coin and identifying the most discriminative feature: the head probability of a coin is a measure of quality, and corresponds to the discriminative power in the feature selection problem. Our companion paper [LMG03] develops the theory and problem definitions for the more general learning problems; here we remark that the hardness results and the algorithmic issues that we identify in this work also apply to these more general budgeted learning problems, while the latter introduce extra challenges as well.

The first challenge in defining the budgeted problem is to formulate the objective so as to obtain a well-defined and satisfactory notion of optimality for the complete range of budgets. We do this by assigning priors over coin quality, and by defining a measure of regret for choosing a coin as a winner. We describe strategies (for determining which coin to toss in each situation), and extend the definition of regret to strategies. The computational task is then reduced to identifying a strategy with minimum regret among all strategies that respect the budget.

We address the computational complexity of the problem, showing that it is in PSPACE, but also NP-hard under differing coin costs. We establish a few properties of optimal strategies, and also explore where some of the difficulties may lie in computing optimal strategies, e.g., the need for contingency in the strategy, even when all coins have the same cost (the unit-cost case). We investigate the performance of a number of algorithms empirically and theoretically, by defining and motivating constant-ratio approximability. The algorithms include the obvious ones, such as round-robin and random, as well as novel ones that we propose based on our knowledge of the problem structure. One such algorithm, "biased-robin", works well, especially in the special case of identical priors and unit costs. The paper also raises a number of intriguing open problems.

The main contributions of this paper are:

- Introducing the budgeted learning task and precisely defining a basic problem instance in this space (the "coins problem") as a problem of sequential decision making under uncertainty.

- Addressing the computational complexity of the problem, highlighting important issues both for optimality and approximability.

- Empirically comparing a number of obvious, and not so obvious, algorithms, towards determining which work most effectively.

- Providing in closed form the expected regret of using one of the most obvious algorithms: round-robin.

The paper is organized as follows. A discussion of related work and a notational overview ends this section. Section 2 describes and precisely defines the coins problem, and Section 3 presents its computational complexity. Section 4 begins by showing that the objective function has an equivalent and simplified form. This simplification allows us to explore aspects of the problem that are significant in designing optimal and approximation algorithms. This section also defines the constant-ratio approximation property, describes the algorithms we study, and addresses whether they are approximation algorithms. Section 5 empirically investigates the performance of the algorithms over a range of inputs. For a complete listing of the data, together with additional empirical and analytic results, please see [Gre].

1.1 Related work

There is a vast literature on sequential decision making, sample complexity of learning, active learning, and experiment design, all somewhat related to our work; we can only cite a few items here. Our "coins problem" is an instance of the general class of multi-armed bandit problems [BF85], which typically involve a trade-off between exploration (learning) and exploitation. In our budgeted problem, there is a pure learning phase (determined by a fixed budget) followed by a pure exploitation phase; this difference in the objective changes the nature of the task significantly, and sets it apart from typical finite- or infinite-horizon bandit problems and their analyses (e.g., [KL00]). To the best of our knowledge (and to our surprise), our budgeted problem has not been studied in the bandit literature before. Our coins problem is also a special Markov decision problem (MDP) [Put94], but the state space in the direct formulation is too large to allow us to use standard MDP solution techniques.

2 The Coins Problem

Perhaps the simplest budgeted learning problem is the following "coins" problem: You are given a collection of coins with different and unknown head probabilities. Assume the tail and head outcomes correspond to receiving no reward and a fixed reward (1 unit), respectively. You are given a trial/learning period for experimenting with the coins only, i.e., you cannot collect rewards in this period. At the end of the period, you are allowed to pick only a single coin for all your future tosses (reward collection). The trial period is defined as follows: Each toss incurs a cost (e.g., a monetary or time cost), and your total experimental costs may not exceed a budget. The problem, therefore, is to find the best coin possible subject to the budget. Let us now define the problem precisely. The input is:

- A collection of n coins, indexed by the set {1, ..., n}, where each coin i is specified by a query (toss) cost c_i and a probability density function over its head probability. We assume that the priors of the different coins are independent. (Note that these distributions can be different for different coins.)

- A budget B on the total cost of querying.

We let the random variable Θ_i denote the head probability of coin i, and let ω_i be the density over Θ_i. Both discrete and continuous densities are useful in our exposition and for our results; Figure 1 illustrates examples with both types of priors. The family of Beta densities is particularly convenient for representing and updating priors [Dev95]. A Beta density is a two-parameter function, Beta(a, b), whose pdf has the form ω(Θ) = γ Θ^(a-1) (1-Θ)^(b-1), where a and b are positive integers in this paper, and γ is a normalizing constant, so that the density integrates to 1. Beta(a, b) has expectation a/(a+b). Note that the expectation is also the probability that the outcome of tossing such a coin is heads. If the outcome is heads (resp. tails), the density is updated (using Bayes rule) to Beta(a+1, b) (resp. Beta(a, b+1)). Thus a coin with the uniform prior Beta(1, 1) that was tossed 5 times, and gave 4 heads and 1 tail, now has the density Beta(5, 2), and the probability that its next toss gives heads is 5/7.
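The Beta updating scheme just described can be sketched in a few lines. This is an illustrative sketch, not code from the paper; the `Coin` class and its names are our own:

```python
# Sketch of Bayesian coin tracking with Beta priors, as described above.
# Beta(a, b) has mean a / (a + b); heads -> Beta(a+1, b), tails -> Beta(a, b+1).

class Coin:
    def __init__(self, a=1, b=1):
        self.a, self.b = a, b          # Beta(1, 1) is the uniform prior

    @property
    def mean(self):                    # posterior probability of heads
        return self.a / (self.a + self.b)

    def update(self, heads):           # Bayes rule for a Bernoulli observation
        if heads:
            self.a += 1
        else:
            self.b += 1

c = Coin()                             # uniform prior Beta(1, 1)
for outcome in [1, 1, 1, 1, 0]:        # 4 heads, 1 tail, as in the text
    c.update(outcome)
print((c.a, c.b), c.mean)              # (5, 2), 5/7 ≈ 0.714
```

The final state reproduces the running example: a Beta(5, 2) posterior whose next toss is heads with probability 5/7.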

Let us first address the question of which coin to pick at the end of the learning period, i.e., when the budget allows no more tosses. The expected head probability of coin i, aka the mean of coin i, is μ_i = E[Θ_i]. The coin to pick is the one with the highest mean, which we denote by i*, with μ_{i*} = max_i μ_i. Note that coin i* may be different from the coin whose density has the highest mode, or the coin whose density has the highest probability of having a head probability no less than any other (e.g., see Figure 1a). The motivation for picking coin i* is that tossing such a coin gives an expected reward no less than the expected reward obtained from tossing any other coin; the regret from picking any other coin is correspondingly the difference in means.

Figure 1: (a) Coins c1 and c2 with discrete distributions. Coin c1, with probability 0.3, has head probability 0.2, and otherwise (with probability 0.7) has head probability 0.9. The mean (expected head probability) of coin c1 is 0.3·0.2 + 0.7·0.9 = 0.69, which exceeds the mean of coin c2; thus coin c1 is the coin to pick with 0 budget, and the regret from picking coin c2 is the difference of the two means. (b) Coins c3 and c4 with Beta priors: c3 has Beta(1, 1), ω(Θ) = 1, and c4 has Beta(5, 2), ω(Θ) = 30 Θ^4 (1-Θ). Coin c4 has the higher mean, at 5/7. Here E[Θ_max] is computed by numerical integration.

We will now define a measure of error which we aim to minimize. Let i_max = argmax_i Θ_i be the actual coin with the highest head probability, and let Θ_max = max_i Θ_i, the max random variable, be the random variable corresponding to the head probability of coin i_max. (Note that E[Θ_max] ≥ max_i μ_i.) The (expected) regret from picking coin c is then E[Θ_max] - μ_c, i.e., the average amount by which we would have done better had we chosen coin i_max instead of coin c. Observe that to minimize regret, we again should pick coin i*. Thus the (expected) minimum regret is E[Θ_max] - μ_{i*}. Note that the (minimum) regret is the difference between two quantities that differ only in the order of taking maximum and expectation: E[max_i Θ_i] and max_i E[Θ_i].
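For coins with discrete priors, both quantities in this difference can be computed exactly by enumerating the joint support. The sketch below is illustrative: only coin c1 is taken from Figure 1a, while the second coin is a hypothetical stand-in (coin c2's exact distribution is not reproduced here):

```python
from itertools import product

# Minimum regret E[Θ_max] − max_i μ_i for coins with discrete priors.
# Each coin is a dict {head probability: prior mass}; priors are independent.

coins = [
    {0.2: 0.3, 0.9: 0.7},   # c1 from Figure 1a: mean 0.3*0.2 + 0.7*0.9 = 0.69
    {0.6: 1.0},             # hypothetical coin known to have head probability 0.6
]

means = [sum(theta * p for theta, p in c.items()) for c in coins]

# E[Θ_max]: enumerate the joint support of the independent priors.
e_max = 0.0
for combo in product(*(c.items() for c in coins)):
    prob = 1.0
    mx = 0.0
    for theta, p in combo:
        prob *= p
        mx = max(mx, theta)
    e_max += prob * mx

regret = e_max - max(means)
print(round(max(means), 3), round(e_max, 3), round(regret, 3))  # 0.69 0.81 0.12
```

Here E[Θ_max] = 0.3·0.6 + 0.7·0.9 = 0.81, so even the best pick (c1, mean 0.69) carries a minimum regret of 0.12, illustrating the max-versus-expectation gap.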

2.1 Strategies

Assume now that we are allowed to experiment with the coins before having to commit. Informally, a strategy is a prescription of which coin to query at a given time point. In general, such a prescription depends on the objective (minimizing regret in our case), the outcomes of the previous tosses, or equivalently the current densities over head probabilities (i.e., the current belief state), and the remaining budget. Note that after each toss of a coin, the density over its head probability is updated using Bayes formula based on the toss outcome. In its most general form, a strategy may be viewed as a finite rooted directed tree (or actually, DAG), where each leaf node is a special leaf (stop) node, and each internal node corresponds to tossing a particular coin, whose two children are also strategy trees, one for each outcome of the toss (see Figure 2).

Figure 2: Example strategy trees for the coins of Figures 1a and 1b respectively. Upper edges correspond to the head outcome of the coin tossed, and the leaf nodes box the coin to pick. (a) Tossing one of the two coins yields regret 0 under either outcome, while tossing the other yields regrets 0.095 and 0.143 and hence positive expected regret; the former toss is optimal. (b) The highest mean is shown by each leaf (by Proposition 4.1, minimizing regret is equivalent to maximizing the expected highest mean). In this case, tossing either coin once does not change the (expected) regret. Hence the optimal strategy when two tosses are allowed is to toss coin c4 twice.

We will only consider strategies respecting the budget, i.e., the total cost of coin tosses along any branch may not exceed the budget. Thus the set of strategies to consider is finite though huge (e.g., exponential in the budget, assuming unit costs). Associated with each leaf node l of a strategy s is the regret r_l, computed using the belief state at that node, and the probability p_l of reaching that leaf, where p_l is the product of the transition probabilities along the path from the root to that leaf. We therefore define the regret of a strategy s to be the expected regret over the different outcomes, or equivalently the expectation of regret conditioned on execution of s:

R(s) = Σ_{l ∈ Leaves(s)} p_l · r_l.

An optimal strategy s* is then one with minimum regret: s* = argmin_s R(s). Figure 3 shows an optimal strategy for the case of coins with uniform priors and a small budget. We have observed that optimal strategies for identical priors enjoy a similar pattern (with some exceptions): their top branch (i.e., as long as the outcomes are all heads) consists of tossing the same coin, and the bottom branch (i.e., as long as the outcomes are all tails) consists of tossing the coins in a round-robin fashion; see biased-robin (Section 4.2.3 below).
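For small instances, the search over strategy trees described above can be carried out by exhaustive expectimax recursion. The sketch below is a brute-force illustration (not an algorithm from the paper); it exploits the simplification of Proposition 4.1 (Section 4) that minimizing regret is equivalent to maximizing the expected highest posterior mean:

```python
from functools import lru_cache

# Exhaustive expectimax over strategy trees for unit-cost Beta coins.
# State: tuple of (a, b) Beta parameters per coin, plus the remaining budget.
# Returns the best achievable expected highest posterior mean.

@lru_cache(maxsize=None)
def value(state, budget):
    best_mean = max(a / (a + b) for a, b in state)
    if budget == 0:
        return best_mean
    best = best_mean                    # stopping early is allowed (it never helps)
    for i, (a, b) in enumerate(state):
        p = a / (a + b)                 # predictive probability of heads
        heads = state[:i] + ((a + 1, b),) + state[i + 1:]
        tails = state[:i] + ((a, b + 1),) + state[i + 1:]
        best = max(best,
                   p * value(heads, budget - 1)
                   + (1 - p) * value(tails, budget - 1))
    return best

# Four coins with uniform priors and a small budget (cf. Figure 3).
start = ((1, 1),) * 4
print(round(value(start, 3), 4))
```

With no budget the value is the prior mean 1/2; allowing one toss raises it to 7/12, and each additional toss raises it further, matching the "strategies are harmless" property.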

2.2 The Computational Problem

Our overall goal is to execute tosses according to some optimal strategy. Three relevant computational problems are outputting (1) an optimal strategy, (2) the best coin to toss now (the first action of an optimal strategy), or (3) the minimum regret. As an optimal strategy may be exponential in the input size, whenever we talk about the coins problem in a formal sense (e.g., Theorem 3.1), we shall mean the problem of computing the first action of an optimal strategy.

Figure 3: An optimal strategy tree for a small budget, where all the coins have uniform priors and there are at least 4 coins. Note that some branches terminate after only 2 tosses; there, the outcome is already determined.

3 Computational Complexity

The coins problem is a problem of sequential decision making under uncertainty, and as with many other such problems [Pap85], one can verify that it is in PSPACE, as long as we assume that the budget is bounded by a polynomial function of the number of coins. The problem is also NP-hard:

Theorem 3.1 The coins problem is in PSPACE and is NP-hard.

Proof (sketch). In PSPACE: It takes space polynomial in the budget to compute the value of an optimal policy, or the first action of such a policy: fix an action and compute the regret given that the outcome is positive (recursively), then reuse the space to do the same for the negative outcome. Fix another action and reuse the space. The space used is no more than a constant multiple of the budget.

NP-hardness: Assume the priors have non-zero probability at head probability values of 0 or 1 only. In this case, strategies differ only with respect to the collection of coins they query, and not the order: as soon as a coin returns heads, the regret is 0, and otherwise the coin is discarded. We reduce the KNAPSACK problem to this coins problem. In the KNAPSACK problem [GJ79], given a knapsack with capacity B and a collection of objects o_1, ..., o_n, each with cost c_i and reward r_i, the problem is to choose a subset S of the objects that fits in the knapsack and maximizes the total reward among all subsets:

maximize Σ_{i ∈ S} r_i subject to Σ_{i ∈ S} c_i ≤ B.

Object o_i becomes coin i with cost c_i and probability p_i of having head probability 1, chosen so that -ln(1 - p_i) = r_i. The failure probability of a queried collection S is then Π_{i ∈ S} (1 - p_i) = exp(-Σ_{i ∈ S} r_i), i.e., maximizing the total reward is equivalent to minimizing the failure probability. Therefore the success probability of such a coin collection is maximized iff the total reward of the corresponding object set is maximized. Maximizing the success probability is in turn equivalent to minimizing the (expected) regret.
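The mechanism behind the reduction is that failure probabilities multiply while knapsack rewards add. The exact constants of the reduction are not preserved in this copy of the paper; p_i = 1 - e^(-r_i) is one standard choice that makes the correspondence exact, as this small check illustrates:

```python
import math

# Reward-to-probability mapping for the KNAPSACK reduction sketched above.
# With p_i = 1 - exp(-r_i), the failure probability of a queried set of coins
# is exp(-total reward), so success probability is monotone in total reward.

def success_prob(rewards):
    ps = [1 - math.exp(-r) for r in rewards]
    fail = math.prod(1 - p for p in ps)        # equals exp(-sum(rewards))
    return 1 - fail

for subset in ([1.0, 2.0], [2.5], [0.5, 0.5, 0.5]):
    assert abs(success_prob(subset) - (1 - math.exp(-sum(subset)))) < 1e-12

# Higher total reward -> higher chance some coin in the set is a sure winner.
print(success_prob([1.0, 2.0]) > success_prob([2.5]) > success_prob([0.5] * 3))
```

Since the failure probability depends only on the sum of rewards, choosing the best coin set under the budget is exactly choosing the best knapsack subset.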

Our reduction reveals the packing aspect of the budgeted problem. It remains open whether the problem is NP-hard when coins have unit costs and/or uni-modal distributions. The next section discusses the related issue of approximability.

4 Problem Structure and Algorithm Design

It is fairly intuitive that the expectation over the mean of a coin, given that the coin is tossed, should not differ from the current mean of the coin, as we are taking a sum over the possible outcomes. Here, we observe that the expectation of the maximum head probability, E[Θ_max], enjoys the same property: it is unchanged in expectation by tossing. This allows us to observe the following simplifying and perhaps intuitive properties:

Proposition 4.1 We have:

1. E_s[E[Θ_max]] = E[Θ_max] for any strategy s; therefore, the regret of a strategy s is:

R(s) = E[Θ_max] - E_s[μ_max],     (1)

where E_s[μ_max] denotes the conditional expectation of the maximum mean, conditioned on the execution of strategy s (i.e., the expectation of the highest mean over all the outcomes of executing strategy s).

2. (Strategies are harmless) For any strategy s, E_s[μ_max] ≥ μ_max; therefore the regret from using any strategy is not greater than the current regret.

3. (No need to query useless coins) Assume that under any outcome (i.e., in the execution of any strategy respecting the budget), there is some coin whose mean is at least as high as that of coin i. Then there exists an optimal strategy tree that never queries coin i.

Proof (sketch). Part 1: We consider tossing a single coin, and establish the identity by summing over the different outcomes. The result generalizes to strategies by induction on tree height.

Parts 2 and 3 are also established by induction on strategy tree height, by first showing that querying a single coin can only increase the highest mean, using the fact that the expectation over the mean of a coin, if the coin is queried, remains the same. Furthermore, tossing a coin is useless if the coin cannot affect the highest mean within a single toss. For the induction step for part 3, we observe that no optimal strategy of smaller height may query coin i (by induction), and no leaf reports coin i (by the stated assumption). Therefore the relative merit of strategies remains the same whether or not coin i is queried first. Consequently, the optimal strategy is identical whether or not the coin is queried, implying that the regret also remains the same whether or not the coin is queried.

We conclude that the optimization problem boils down to computing a strategy s that maximizes the expectation over the highest mean, E_s[μ_max]. In selecting the coin to query, it is fairly intuitive that two significant properties of a coin are the magnitude of its current mean and the spread of its density, that is, how changeable its density is if the coin is queried: if a coin's mean is too low, it can be ignored by the above result, and if its density is too peaked (imagine no uncertainty), then querying it may yield little or no information (the expectation E_s[μ_max] may not be significantly higher than μ_max). However, it is perhaps surprising that, for the purpose of computing an optimal policy, we cannot make further immediate use of these properties. For example, suppose coin c1 has a Beta prior with a higher mean and a lower spread than coin c2, yet the optimal strategy for a budget of one (and two) starts by querying c1. The reason in such a case is that querying c2 does not change our decision under either of its two outcomes (c1 will be the winner), and thus E_s[μ_max] equals the current highest mean, μ_1; while querying c1 can affect the decision, and the expectation of μ_max given that c1 is queried is slightly higher. The optimal strategy may also be contingent. For example, in Figure 2b, the policy may simply toss coin c4 twice. However, if we add a third coin with a suitable Beta prior, then if the outcome of the first toss is tails, the next optimal action is to toss the new coin. These observations suggest that optimization may be hard even in the unit-cost case.
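The martingale property underpinning Proposition 4.1, that a toss leaves the expected mean unchanged, is easy to verify numerically for Beta priors (a small illustrative check, not code from the paper):

```python
# Numerical check of the martingale property used above: for a Beta(a, b)
# coin, the expected posterior mean after one toss equals the current mean.

def expected_posterior_mean(a, b):
    p = a / (a + b)                         # predictive P(heads)
    mean_heads = (a + 1) / (a + b + 1)      # posterior mean if heads observed
    mean_tails = a / (a + b + 1)            # posterior mean if tails observed
    return p * mean_heads + (1 - p) * mean_tails

for a, b in [(1, 1), (5, 2), (3, 7)]:
    assert abs(expected_posterior_mean(a, b) - a / (a + b)) < 1e-12
print("E[posterior mean | one toss] equals the prior mean for all tested priors")
```

The maximum of several such means, by contrast, can only rise in expectation (Jensen-style), which is exactly the "strategies are harmless" property.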

4.1 Approximability

Consider an algorithm A that, given the input, outputs the next action to execute. We call A a constant-ratio approximation algorithm if there is a constant c (independent of problem size) such that, given any problem instance, if R* is the optimal regret for the problem, the regret R_A from executing the actions prescribed by A is bounded by c·R*. Of course we seek an approximation algorithm (preferably with low c) that is also efficient (polynomial time in the input size). A constant-ratio approximation is especially desirable, as the quality of the approximation does not degrade with problem size.

4.2 Algorithms

We describe a number of plausible strategies and algorithms, and explore whether or not they are approximation algorithms for the unit-cost case. In the process, we also gain insight into the types of considerations that are significant for designing approximation algorithms.

4.2.1 Round-Robin, Random, and Greedy Algorithms

The round-robin algorithm simply tosses coins in a round-robin fashion, and the random algorithm simply queries the coins uniformly at random. These are plausible algorithms, at least initially in the trial period, and they are typically used in practice. The third algorithm we consider is the constant-budget algorithm: for a small constant b (independent of the number of coins and the budget), it computes the optimal strategy for the smaller budget b, and tosses the first coin of such a strategy. (Given the outcome, it then computes the optimal strategy from the new state, and so on.) We shall refer to this algorithm simply as greedy in the case of b = 1. Perhaps it is not hard to see that these algorithms are suboptimal, but we can say more:

Proposition 4.2 For each algorithm A among round-robin, random, and constant-budget: for any constant c, there is a problem with minimum regret R* > 0 on which R_A > c·R*.

Proof (sketch). On problems with uniform priors, a large number of coins n, and a relatively smaller budget, the round-robin and random strategies are unlikely to query any coin more than once, and are therefore expected to reach a highest mean of not much more than 2/3, the mean of a coin with density Beta(2, 1). On the other hand, with large n, more targeted algorithms (such as envision-single, defined below) find a coin with mean close to 1. For the case of the constant-budget algorithm, assume without loss of generality that b = 1. Assume all coins have uniform priors except for two that have almost completely peaked priors, and that their means are high enough that, even if querying either one gives all tails, the rest of the coins cannot catch them within one toss. Then the greedy algorithm may waste tosses on these two coins to identify a winner, but the optimal strategy would explore the rest of the coins, given enough budget, and would have a good chance of discovering a significantly better coin.

4.2.2 Look-ahead and Allocational Strategies

The single-coin look-ahead, or simply the look-ahead algorithm, takes the budget into account: at each time point, for each coin i, it considers allocating all of the remaining tosses to coin i, computes the expected regret from such an allocation, and tosses a coin that gives the lowest such regret. As there are only k+1 distinguishable outcomes when tossing a single coin k times (where k is the number of remaining tosses), this computation can be done in polynomial time (at every time point). The algorithm thus tosses a coin that is expected to reduce the regret more than the others. As this algorithm considers each coin alone, it fails to query the right coins, at least initially, in the following type of scenario, in which two kinds of coins (i.e., two different priors) are used. Type 1 coins have a small probability of having head probability close to 1, and otherwise their head probability is low. With a sufficiently large budget, the optimal algorithm would toss such coins and would obtain a regret close to 0. Coins of type 2 are used to "distract" the look-ahead: in each step, the increase in maximum mean from tossing any one of them is higher than from tossing the type 1 coins, but their highest mean is bounded away from 1. We have seen empirically that in such scenarios the regret ratio of look-ahead to optimal can exceed a factor of 6, for example, though the priors must be carefully designed. In the next section, we will see that look-ahead is one of the top two algorithms in the case of identical priors.
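The single-coin look-ahead can be sketched via the Beta-binomial predictive distribution over head counts. This is our own reconstruction of the rule described above, scoring allocations by expected highest mean (equivalent to regret by Proposition 4.1); the function names and the example instance are illustrative:

```python
import math

# Single-coin look-ahead: for each coin, allocate all k remaining tosses to it,
# compute the expected highest posterior mean over the k+1 head-count outcomes,
# and toss a coin for which this expectation is largest. Assumes >= 2 coins.

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def head_count_probs(a, b, k):
    # P(j heads in k tosses) for a Beta(a, b) coin: the Beta-binomial pmf.
    return [math.comb(k, j)
            * math.exp(log_beta(a + j, b + k - j) - log_beta(a, b))
            for j in range(k + 1)]

def look_ahead_choice(coins, k):
    means = [a / (a + b) for a, b in coins]
    best_i, best_val = None, -1.0
    for i, (a, b) in enumerate(coins):
        others = max(m for j, m in enumerate(means) if j != i)
        val = sum(p * max(others, (a + j) / (a + b + k))
                  for j, p in enumerate(head_count_probs(a, b, k)))
        if val > best_val:
            best_i, best_val = i, val
    return best_i, best_val

# Three uniform-prior coins, 4 tosses remaining (illustrative numbers).
print(look_ahead_choice([(1, 1), (1, 1), (1, 1)], 4))
```

Because only the head count matters, each coin's score sums over k+1 outcomes, giving the polynomial per-step cost noted in the text.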

The argument suggesting that the look-ahead algorithm is not an approximation algorithm sheds another insight into the problem structure. It tells us that in designing an approximation (or an optimal) algorithm, it is important to know not only the current budget and the characteristics of an individual coin i, but also whether we have sufficiently many other coins similar to i: while the (expected) reduction in regret from tossing a single coin may be low, there may be sufficiently many such coins, and the budget may be large enough, that spending and spreading tosses over such coins is justified. This observation motivates the following generalization of the look-ahead algorithm: compute an (approximately) optimal allocational strategy. An allocational strategy is specified by the number of tosses assigned to each coin. For example, given a budget of 5, an allocation may specify that coin 1 should be tossed twice, coin 2 once, and coin 3 twice (and all other coins 0 times). Notice that this allocation does not specify which coin to query first (any coin with positive allocation may be queried), and it is not contingent, i.e., it does not change the allocation given the previous outcomes, and it is therefore suboptimal. However, two interesting questions are: whether the expected regret of an optimal allocational strategy is a constant approximation to the minimum regret (of the unrestricted optimal strategy), and whether an (approximately) optimal allocational strategy can be computed efficiently. Section 5 addresses the first question. The attraction of allocational strategies is that they are compactly represented and efficiently evaluated: the expected highest mean of an allocational strategy can be computed in time polynomial in the number of coins and the budget. In the special case of round-robin, with n coins and an equal allocation of tosses to every coin (when the budget is a multiple of n), the expression for the expected highest mean further simplifies to a closed form.

4.2.3 Biased-Robin

The biased-robin algorithm keeps tossing the current coin as long as it turns up heads, and moves on to the next coin as soon as a toss comes up tails. Even when the current coin's mean has become low, the optimal action may be to toss a previously queried coin: suppose that, after a few tosses, two coins have updated Beta densities while a third remains untouched at Beta(1, 1), and we have one more toss. The optimal action can be to toss one of the previously queried coins rather than the untouched Beta(1, 1) coin. We have implemented this basic algorithm, and when the budget allows more passes, the algorithm simply wraps around once there are no more untouched coins. We call it biased-robin as it can be viewed as a variant of round-robin, and, similar to round-robin, it is budget-independent and any-time.
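A minimal sketch of biased-robin, under assumed uniform Beta(1, 1) priors and hypothetical true head probabilities (an illustration, not the paper's implementation):

```python
import random

# Biased-robin: keep tossing the current coin while it comes up heads;
# on tails, move to the next coin, wrapping around the list as needed.
# Returns the index of the coin with the highest posterior mean.

def biased_robin(true_probs, budget, rng):
    n = len(true_probs)
    counts = [[1, 1] for _ in range(n)]   # Beta(1, 1) priors: [a, b]
    i = 0
    for _ in range(budget):
        heads = rng.random() < true_probs[i]
        counts[i][0 if heads else 1] += 1
        if not heads:
            i = (i + 1) % n               # tails: give up on this coin for now
    return max(range(n), key=lambda j: counts[j][0] / sum(counts[j]))

rng = random.Random(0)
wins = sum(biased_robin([0.2, 0.5, 0.8], 30, rng) == 2 for _ in range(2000))
print(wins / 2000)   # fraction of trials picking the truly best coin
```

The policy needs no knowledge of the budget and can be stopped at any point, which is the budget-independent, any-time property noted above.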

5 Empirical Performance

We report on the performance of the algorithms in the important special case of identical priors and unit costs. Figure 4 shows the performance of the round-robin, random, (single-coin) look-ahead, allocational, and biased-robin algorithms when the priors are, respectively, uniform, skewed to the right (Beta(10, 1)), and skewed to the left (Beta(1, 10)). Each time point is the average of 1000 trials, where at the beginning of each trial every coin's head probability is drawn from the prior. We computed the regrets at every intermediate time point to illustrate the performance of the algorithms as the budget is reached. The expectation of the actual maximum-valued coin, E[Θ_max], is n/(n+1) in the case of uniform priors over n coins, and, as a randomly picked coin has expected head probability of 1/2, the initial regret for each algorithm is around n/(n+1) - 1/2 in this case, as expected.

The biased-robin and envision-single strategies consistently outperform the rest of the strategies tested, while envision-single is the most costly algorithm. The biased-robin algorithm has performed the best over the many other parameter settings that we have tested. The reason for the poor performance of greedy is simple: it leads to starvation, i.e., due to its myopic property, as soon as some coin gets ahead, a single toss of any coin does not change the leader, and yields equal expected regret. In this case, ties are broken by tossing the first coin in the list, which may not change the status. If we add a randomization component to greedy, its performance improves, becoming slightly better than random (not shown), but still inferior to biased-robin. As observed, looking ahead improves the performance significantly.
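The flavor of these comparisons can be reproduced with a small Monte Carlo simulation. This is an illustrative re-run with our own parameter choices, not the paper's exact experiment:

```python
import random

# Compare round-robin, uniform-random, and biased-robin querying on coins
# whose head probabilities are drawn from uniform priors. Regret is measured
# as Θ_max minus the true head probability of the declared winner.

def run(policy, n, budget, trials, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        probs = [rng.random() for _ in range(n)]     # draw from uniform prior
        counts = [[1, 1] for _ in range(n)]          # Beta(1, 1) priors
        i = 0
        for t in range(budget):
            if policy == "round_robin":
                i = t % n
            elif policy == "random":
                i = rng.randrange(n)
            heads = rng.random() < probs[i]
            counts[i][0 if heads else 1] += 1
            if policy == "biased_robin" and not heads:
                i = (i + 1) % n                      # tails: move to next coin
        winner = max(range(n), key=lambda j: counts[j][0] / sum(counts[j]))
        total += max(probs) - probs[winner]
    return total / trials

for policy in ("round_robin", "random", "biased_robin"):
    print(policy, round(run(policy, n=10, budget=40, trials=2000), 3))
```

With uniform priors and a moderate budget, all three policies cut the initial regret substantially; in our runs, as in the paper's figures, the targeted policy tends to come out ahead of the indiscriminate ones.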

The allocational strategy computes the optimal allocation at the outset, and sticks to that allocation. For example, for a budget of 40, uniform priors, and 10 coins, 8 of the 10 coins are allocated 5 tosses each. The allocational strategy tosses them in a round-robin fashion. Here, as the priors are identical, we need only search the space of equal allocations to find the optimal (initial) allocation.

The relative performance of the round-robin (and random) strategy compared to biased-robin, i.e., the ratio of the regret of round-robin to the regret of biased-robin, initially gets worse with increasing budget at fixed n, but eventually begins to improve (not shown). The worst ratio is a function of n, however, and the relative performance of round-robin degrades with increasing n (as suggested by the proof of Proposition 4.2): for smaller n the worst ratio we observed is below 3, while for larger n it surpasses 4. With n and the budget fixed, the relative performance worsens as we increase the first Beta parameter, skewing the prior to the right, and improves on increasing the second Beta parameter (Figure 4). The behavior of the allocational strategy approaches the round-robin strategy with increasing budget (and with identical priors), eventually allocating equal tosses to all the coins. We have observed empirically that the ratio of the allocational regret to that of biased-robin also degrades with increasing budget. Therefore periodic reallocation, i.e., dynamic rather than static allocation, appears to be necessary if such a technique is to yield an approximation algorithm.

Figure 5: The performance of the algorithms versus optimal, with uniform priors, over 4000 trials. The regret of biased-robin appears lower than that of optimal at several time points due to inaccuracies of averaging.

We computed the optimal regret for small numbers of coins and small budgets, on different types of identical priors. Figure 5 shows the performance of the optimal strategy against the other algorithms on uniform priors. On these problems, the performances of the look-ahead and biased-robin algorithms are very close to that of optimal, suggesting that these algorithms may be approximation algorithms for the special case of identical priors.

Due to its performance, efficiency, and simplicity, the biased-robin algorithm is the algorithm of choice among the algorithms we have tested, for the case of identical priors and costs.

6 Summary and Future Work

There are a number of directions for future work, in addition to the open problems we have raised. An immediate extension of the problem is to selecting classifiers, which may be defined in a similar way: we are presented with candidate classifiers (imagine decision trees) with priors over a measure of their performance, and our task is to declare a winner, one that minimizes the regret of not selecting the actual best. A significant additional dimension in this case is that querying a single feature affects, in general, not one but multiple classifiers. There are, however, "flatter" versions of the classifier problem, such as learning a Naive Bayes classifier, in which these dependencies are limited. Our solution techniques more readily extend to the latter problems [LMG03].

Figure 4: The performance of the algorithms with identical priors: (a) uniform (Beta(1, 1)) priors, (b) skewed right, Beta(10, 1), and (c) skewed left, Beta(1, 10).

Contributions: We introduced and motivated the budgeted multi-armed bandit task and precisely defined a basic problem in this space, the coins problem, as a problem of sequential decision making under uncertainty. We investigated the computational complexity of the coins problem, and the performance of a number of algorithms on the problem, both from a worst-case perspective and empirically. Our analyses demonstrate significant problems with the standard algorithms, round-robin and random, as well as greedy. We presented two different selective querying techniques that significantly outperform the standard algorithms.

References

[Ang92] D. Angluin. Computational learning theory: survey and selected bibliography. In Proc. 24th Annu. ACM Sympos. Theory Comput., pages 351-369. ACM Press, New York, NY, 1992.
[BF85] D. A. Berry and B. Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall, New York, NY, 1985.
[CAL94] D. A. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201-221, 1994.
[Dev95] J. Devore. Probability and Statistics for Engineering and the Sciences. Duxbury Press, New York, NY, 1995.
[Duf02] M. Duff. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts, Amherst, 2002.
[EDMM02] E. Even-Dar, S. Mannor, and Y. Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In COLT 2002, 2002.
[GGR02] R. Greiner, A. Grove, and D. Roth. Learning cost-sensitive active classifiers. Artificial Intelligence, 139(2), 2002.
[GJ79] M. R. Garey and D. S. Johnson. Computers and Intractability. W. H. Freeman and Company, 1979.
[Gre] http://www.cs.ualberta.ca/greiner/budget.html.
[KL00] S. R. Kulkarni and G. Lugosi. Finite time lower bounds for the two-armed bandit problem. IEEE Transactions on Automatic Control, 45(4):711-714, 2000.
[LMG03] D. J. Lizotte, O. Madani, and R. Greiner. The budgeted-learning problem, part II: the Naive Bayes case. Submitted, 2003.
[LMRar] M. Lindenbaum, S. Markovitch, and D. Rusakov. Selective sampling for nearest neighbor classifiers. Machine Learning, to appear.
[MCR93] R. Musick, J. Catlett, and S. Russell. Decision theoretic subsampling for induction on large databases. In Proceedings of the Tenth International Conference on Machine Learning, pages 212-219, Amherst, MA, 1993.
[Pap85] C. H. Papadimitriou. Games against nature. Journal of Computer and Systems Science, 31:288-301, 1985.
[Put94] M. L. Puterman. Markov Decision Processes. Wiley Interscience, 1994.
[RM01] Nicholas Roy and Andrew McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. 18th International Conf. on Machine Learning, pages 441-448. Morgan Kaufmann, San Francisco, CA, 2001.
[RS95] S. Russell and D. Subramanian. Provably bounded-optimal agents. Journal of Artificial Intelligence Research, 2:575-609, 1995.
[SG95] D. Schuurmans and R. Greiner. Sequential PAC learning. In Proceedings of COLT-95, 1995.
[TK00] S. Tong and D. Koller. Active learning for parameter estimation in Bayesian networks. In NIPS, pages 647-653, 2000.
[Tur00] P. Turney. Types of cost in inductive concept learning. In Workshop on Cost-Sensitive Learning at the Seventeenth International Conference on Machine Learning, pages 15-21, 2000.
[Val84] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, 1984.
