
Recursive Agent Modeling Using Limited Rationality

Proceedings of the First International Conference on Multi-Agent Systems, pages 125-132, 1995.


José M. Vidal and Edmund H. Durfee*

Artificial Intelligence Laboratory

University of Michigan

Ann Arbor, Michigan 48109-2122.

{jmvidal,durfee}@umich.edu

*Supported, in part, by NSF grant IRI-9158473.

Abstract

We present an algorithm that an agent can use for determining which of its nested, recursive models of other agents are important to consider when choosing an action. Pruning away less important models allows an agent to take its “best” action in a timely manner, given its knowledge, computational capabilities, and time constraints. We describe a theoretical framework, based on situations, for talking about recursive agent models and the strategies and expected strategies associated with them. This framework allows us to rigorously define the gain of continuing deliberation versus taking action. The expected gain of computational actions is used to guide the pruning of the nested model structure. We have implemented our approach on a canonical multi-agent problem, the pursuit task, to illustrate how real-time, multi-agent decision-making can be based on a principled, combinatorial model. Test results show a marked decrease in deliberation time while maintaining a good performance level.

Keywords: Algorithms for multi-agent interaction in time-constrained systems; conceptual and theoretical foundations of multi-agent systems.

Introduction

A rational agent that can be impacted by other agents will benefit if it can predict, with some degree of reliability, the likely actions of other agents. With such predictions, the rational agent can choose its own actions more judiciously, and can maximize its expected payoff in an interaction (or over a set of interactions). To get such predictions, an agent could rely on communication for explicit disclosure by other agents, or on superficial patterns of actions taken by others, or on deeper models of other agents’ state. While each of these approaches has its merits, each also involves some cost or risk: effective communication requires careful decision-making about what to say, when to say it, and whether to trust what is heard (Durfee, Gmytrasiewicz, & Rosenschein 1994) (Gmytrasiewicz & Durfee 1993); relying on learned patterns of action (Sen & Durfee 1994) risks jumping to incorrect expectations when environmental variations occur; and using deeper models of other agents can be more accurate but extremely time-consuming (Gmytrasiewicz 1992).

In this paper, we concentrate on coordinated decision-making using deeper, nested models of agents. Because these models allow an agent to fully represent and use all of its available knowledge about itself and others, the quality of its decision-making can be high. As noted above, however, the costs of reasoning based on recursive models can be prohibitive, since the models grow exponentially in size as the number of levels increases. Our goal, therefore, has been to devise an algorithm which can prune portions of the recursive structure that can be safely ignored (that do not alter an agent’s choice of action). We have done this by incorporating limited rationality capabilities into the Recursive Modeling Method (RMM). Our new algorithm allows an agent to decrease its deliberation time without trading off too much quality (expected payoff) in its choice of action. Our contribution, therefore, is to show how a principled approach to multi-agent reasoning such as RMM can be made practical, and to demonstrate the practical mechanisms in a canonical multi-agent decision-making problem.

We start by providing a short description of RMM and the Pursuit task, on which we tested our algorithm. Afterward we present situations as our way of representing RMM hierarchies, and define their expected gain. These concepts form the basis of our algorithm, shown in the Algorithm section, which expands only those nodes with the highest expected gain. The Implementation section presents the results of an implementation of our algorithm, from which we derive some conclusions and directions for further work, discussed in the Conclusion.

RMM and the Pursuit Task: The basic modeling primitives we use are based on the Recursive Modeling Method (RMM) (Gmytrasiewicz 1992; Gmytrasiewicz, Durfee, & Wehe 1991; Durfee, Gmytrasiewicz, & Rosenschein 1994; Durfee, Lee, & Gmytrasiewicz 1993). RMM provides a theoretical framework for representing and using the knowledge that an agent has about its expected payoffs and those of others. To use RMM, an agent is expected to have a payoff matrix where each entry represents the payoffs the agent expects to get given the combination of actions chosen by all the agents. Typically, each dimension of the matrix corresponds to one agent, and the entries along it to all the actions that agent can take. The agent can (but need not) recursively model others as similarly having payoff matrices, and them modeling others the same way, and so on. The recursive modeling only ends when the agent has no deeper knowledge. At this point, a Zero Knowledge (ZK) strategy can be attributed to the particular agent in question, which basically says that, since there is no way of knowing whether any of its actions are more likely than others, all of the actions are equally probable. If an agent does have reason to believe some actions are more likely than others, this different probability distribution can be used. RMM provides a method for propagating strategies from the leaf nodes to the root. The strategy derived at the root node is what the agent performing the reasoning should do.
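To make the leaf-level step concrete, the following sketch (in Python, with an illustrative two-agent matrix of our own; none of these numbers come from the paper) shows how a payoff matrix combines with an opponent's ZK strategy to produce a strategy for the modeled agent:

# Illustrative RMM leaf evaluation. Rows are our actions, columns the
# opponent's actions, entries our payoffs (numbers are made up).
payoffs = [[3.0, 0.0],
           [1.0, 4.0]]

# Zero Knowledge strategy: the opponent's actions are equally probable.
n = len(payoffs[0])
zk = [1.0 / n] * n

# Expected payoff of each of our actions against the ZK strategy.
expected = [sum(p * q for p, q in zip(row, zk)) for row in payoffs]

# The node evaluates to a pure strategy on the best action: here
# expected == [1.5, 2.5], so the strategy is [0.0, 1.0].
best = max(range(len(expected)), key=expected.__getitem__)
strategy = [1.0 if i == best else 0.0 for i in range(len(expected))]

In a full hierarchy, the strategy computed at one level becomes an opponent strategy at the level above it, all the way up to the root.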

For example, a simplified payoff matrix for the pursuit task is shown in Figure 3. An RMM hierarchy would have such a matrix for one of the agents (say P1) at the root. If that agent knows something about how the other agents represent the situation (their payoffs), then it will model them in order to predict their strategies, so it can generate its own strategy. If it knows something about what the others know about how agents model the situation, another nested layer can be constructed. When it runs out of knowledge, it can adopt the ZK strategy at those leaves, yielding a hierarchy like that shown abstractly in Figure 1.

To evaluate our algorithm, we have automated the construction of payoff matrices for the pursuit task, in which four predators try to surround a prey which moves randomly. The agents move simultaneously in a two-dimensional square grid. They cannot occupy the same square, and the game ends when all four predators have surrounded the prey on four sides (“capture”), when the prey is pushed against a wall so that it cannot make any moves (“surround”), or when time runs out (“escape”). The pursuit task has been investigated in Distributed AI (DAI) and many different methods have been devised for solving it (Korf 1992) (Stephens & Merx 1990) (Levy & Rosenschein 1992). These either impose specific roles on the predators, spend much time computing, or fail because of lack of coordination. RMM provides another method for providing this coordination but, to avoid the “time-consuming” aspects of game-theoretic approaches, we must devise a theory to support selective expansion of the hierarchy.

Theory

The basic unit in our syntactical framework for recursive agent modeling is the situation. At any point in time, an agent is in a situation, and all the other agents present are in their respective situations. An agent’s situation contains not only the situation the agent thinks it is in but also the situations the agent thinks the others are in (these situations might then refer to others, and so on). In this work, we have adopted RMM’s assumption that common knowledge¹ cannot arise in practice, and so it is impossible for a situation to refer back to itself either directly or transitively.

A situation has both a physical and a mental component. The physical component refers to the physical state of the world and the mental component to the mental state of the agent, i.e., what the agent is thinking about itself and about the other agents around it. Intuitively, a situation reflects the state of the world from some agent’s point of view by including what the agent perceives to be the physical state of the world and what the agent is thinking. A situation evaluates to a strategy, which is a prescription for what action(s) the agent should take. A strategy has a probability associated with each action the agent can take, and the sum of these probabilities must always equal 1.

Let S be the set of situations an agent might encounter, and A the set of all other relevant agents. A particular situation s is recursively defined as:

s = (M, f, W, {{(p, r, a) | Σp = 1, r ∈ (S ∪ ZK)} | a ∈ A}) ∈ S

The matrix M has the payoff the agent, in situation s, expects to get for each combination of actions that all the agents might take. The matrix M can either be stored in memory as is, or it can be generated from a function (which is stored in the agent’s memory) that takes as inputs any relevant aspects of the physical world, and previous history. The relevant aspects of the physical world are stored in W, the physical component of the situation. The rest of s constitutes the mental component of the situation.
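As a concrete (and purely hypothetical) rendering of this definition, the tuple (M, f, W, ...) maps naturally onto a record type; the field names below are ours, not the paper's:

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple, Union

ZK = "ZK"  # marker used in place of a deeper model (Zero Knowledge)

@dataclass
class Situation:
    M: list        # payoff matrix over joint actions
    f: Callable    # f(x): distribution over the strategies this
                   # situation may evaluate to (from experience)
    W: dict        # physical component: relevant world state
    # Mental component: for each other agent a, (p, r) pairs where r is
    # a nested Situation or ZK, and the p's for each agent sum to 1.
    models: Dict[str, List[Tuple[float, Union["Situation", str]]]]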

¹Common knowledge about a fact x means that everybody knows that everybody knows ... about x.

Figure 1: These are graphical representations of (a) a situation s (the agent in s expects to get the payoffs given by S1 and believes that its opponents will get S2 and S3, respectively), and (b) a partially expanded situation t that corresponds to s. ZK stands for the zero knowledge strategy.

The probabilistic distribution function f(x) gives the probability that the strategy x is the one that the situation s will evaluate to. It need not be a perfect probabilistic distribution; it is merely used as a useful approximation. We use f(x) to calculate the strategy that the agent in the situation is expected to choose, using standard expected value formulas from probability theory. The values of this function are usually calculated from previous experience. A situation s also includes the set of situations which the agent in s believes the other agents are in. Each agent a is believed to be in r, with probability p. The value of r can either be a situation (r ∈ S) or, if the modeling agent has no more knowledge, it can be the Zero Knowledge strategy (r = ZK).

Notation and Formalisms: Our implementation of limited rationality and our notation closely parallel Russell and Wefald’s work (Russell & Wefald 1991), although with some major differences, as will be pointed out later. We define a partially expanded situation as a subset of a situation where only some of the nodes have been expanded. That is, if t is a partially expanded situation of s, then it must contain the root node of s, and all of t’s nodes must be directly or indirectly connected to this root by other nodes in t. The situation t might have fewer nodes than s. At those places where the tree was pruned, t will insert the zero knowledge strategy, as shown in Figure 1.

We also define our basic computational action to correspond to the expansion of a leaf node l in a partially expanded situation t, plus the propagation of the new strategy up the tree. The expansion of leaf node l corresponds to the creation of the matrix M, and its placement in l along with f(x), W, and pointers to the children. The children are set to the ZK strategy because we have already spent some time generating M and we do not wish to spend more time calculating a child’s matrix, which would be a whole other “step” or computational action. We do not even want to spend time calculating each child’s expected strategy, since we wish to keep the computation fairly small. The strategy that results from propagating the ZK strategies past M is then propagated up the tree of t, so as to generate the new strategy that t evaluates to. The whole process, therefore, is composed of two stages. First we Expand l and then we Propagate the new strategy up t. We will call this process Propagate Expand(t, l), or PE(t, l) for short. The procedure PE(t, l) is our basic computational action.

We define the time cost for doing PE(t, l) as the time it takes to generate the new matrix M, plus the time to propagate a solution up the tree, plus the time costs incurred so far. This time cost is:

TC(t, l) = c · (size of matrix M in l) + d · (propagate time) + g(time transpired so far)

where c and d are constants of proportionality. The function g(x) gives us a measure of the “urgency” of the task. That is, if there is a deadline at time T before which the agent must act, then the function g(x) should approach infinity as x approaches T.
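A direct transcription of this cost model might look as follows; the constants and the particular shape of g are illustrative choices, since the paper leaves them application-specific:

import math

C, D = 1e-4, 1e-5   # illustrative constants of proportionality

def urgency(elapsed: float, deadline: float) -> float:
    # g(x): grows without bound as the elapsed time x approaches the
    # deadline T; this particular form is just one possibility.
    if elapsed >= deadline:
        return math.inf
    return 1.0 / (deadline - elapsed)

def time_cost(matrix_size: int, propagate_time: float,
              elapsed: float, deadline: float) -> float:
    # TC(t, l) = c * |M| + d * (propagate time) + g(elapsed)
    return C * matrix_size + D * propagate_time + urgency(elapsed, deadline)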

We now define some more notation to handle all the strategies associated with a particular partially expanded situation.

t: Partially expanded situation; all of the values below are associated with it. Corresponds to the fully expanded situation s.

t.α: The strategy the agent currently favors in the partial situation t.

t.β1···n: The strategies the agent in t believes the n opponents will favor.

t.leaves: These are the leaf nodes of the partially expanded situation t that can still be expanded (i.e., those that are not also leaf nodes in s).

t.α̂_l: The strategy an agent expects to play given that, instead of actually expanding leaf l (where l ∈ t.leaves), he simply uses its expected value, which he got from f(x), and propagates this strategy up the tree.

t.β̂1···n_l: The strategies an agent expects the other agents to play given that, instead of actually expanding leaf l, he simply uses its expected value.

P(t.α, t.β1···n): This is the payoff an agent gets in his current situation. To calculate it, use the root matrix and let the agent’s strategy be t.α while the opponents’ strategies are t.β1···n.

Given all this notation, we can finally calculate the utility of a partially expanded situation t. Let t′ be situation t after expanding some leaf node l in it, i.e., t′ ← PE(t, l). The utility is the payoff the agent will get, given the current evaluation of t′. The expected utility of t′ is the utility that an agent in t expects to get if he expands l so as to then be in t′.

U(t′) = P(t′.α, t′.β1···n)

E(U(t′)) = P(t.α̂_l, t.β̂1···n_l)
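For the two-agent case, P can be computed by weighting the root matrix entries by the two strategies; this helper is our own sketch, not code from the paper:

def P(M, alpha, beta):
    # Expected payoff when we play mixed strategy alpha and the
    # opponent plays mixed strategy beta, using the root matrix M.
    return sum(alpha[i] * beta[j] * M[i][j]
               for i in range(len(alpha))
               for j in range(len(beta)))

M = [[3.0, 0.0], [1.0, 4.0]]
print(P(M, [0.0, 1.0], [0.5, 0.5]))   # 2.5

When beta is pure, the double sum collapses to a single column, and finding the best response is just a max over n numbers; this is the collapse exploited in the Time Analysis below.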

Note that the utility of a situation depends on the shape of the tree below it since both t.α and t.β are calculated by using the whole tree. There are no utility values associated with the nodes by themselves.² Unfortunately, therefore, we cannot use the utilities of situations in the traditional way, which is to expand leaves so as to maximize expected utility. Such traditional reasoning in a recursive modeling system would mean that the system would not introspect (examine its recursive models) more deeply if it thought that doing so might lead it to discover that it in fact will not receive as high a payoff as it thought it would before doing the introspection. Ignoring more deeply nested models because they reduce estimated payoff is a bad strategy, because ignoring them does not change the fact that other agents might take the actions that lead to lower payoffs. Thus, we need a different notion of what an agent can “gain” by examining more deeply nested knowledge.

²This is in contrast to Russell’s work, where each node has a utility associated with it.

The point of the agent’s reasoning is to decide what its best course of action is. Thus, expanding a situation is only useful if, by doing so, the agent chooses a different action than it would have before doing the expansion. And the degree to which it is better off with the different action rather than the original one (given what it now knows) represents the gain due to the expansion. More specifically, the Gain of situation t′, G(t′), is the amount of payoff an agent gains if it previously was in situation t and, after expanding leaf l ∈ t.leaves, is now in situation t′. Since t′ ← PE(t, l), we could also view this as G(PE(t, l)). The Expected Gain, E(G(t′)), is similarly E(G(PE(t, l))), which is approximated by G(Propagate(E(Expand(t, l)))) in our algorithm. We can justify this approximation because Propagate() simply propagates the strategy at the leaf up to the root, and it usually maps similar strategies at the leaves to similar strategies at the root. So, if the expected strategy at the leaf, as returned by E(Expand(t, l)), is close to the real one, then the strategy that gets propagated up to the root will also be close to the real one.³

³Test results show that this approximation is, indeed, good enough.

The formal definitions of gain and expected gain are:

G(t′) = P(t′.α, t′.β1···n) − P(t.α, t′.β1···n) − TC(t, l)

E(G(t′)) = P(t.α̂_l, t.β̂1···n_l) − P(t.α, t.β̂1···n_l) − TC(t, l)

Notice that E(G(t′)) reaches a minimum when t.α = t.α̂_l, since t.α̂_l is chosen so as to maximize the payoff given t.β̂1···n_l. In this case, E(G(t′)) = −TC(t, l). This means that the gain will be negative if the possible expansion l does not lead to a different and better strategy.

Algorithm

Our algorithm starts with the partially expanded root situation, which consists only of the payoff matrix for the agent. ZK strategies are used for the situations of other agents, forming the initial leaves. The algorithm proceeds by expanding, at each step, the leaf node that has the highest expected gain, as long as this gain is greater than K. For testing purposes, we set K = 0, but setting K to some small negative number would allow for the expansion of leaves that do not show an immediate gain, but might lead to some gains later on. The function Expected Strategy(l) takes as input a leaf node situation l and determines the strategy that its expansion, to some number of levels, is expected to return. It does this by calculating the expected value of the corresponding f(x). The strategy is propagated up by the function Propagate Strategy(t, l), which returns the expected t.α̂_l and t.β̂1···n_l. These are then used to calculate the expected gain for leaf l. The leaf with the maximum expected gain, if it is greater than K, is expanded, a new strategy propagated, and the whole process is repeated. Otherwise, we stop expanding and return the current t.α.

The algorithm, shown in Figure 2, is started with a call to Expand Situation(t). Before the call, t must be set to the root situation, and t.α to the strategy that t evaluates to. The strategy returned by Expand Situation could then be stored away so that it can be used, at a later time, by Expected Strategy. The decision to do this would depend on the particular implementation. One might only wish to remember those strategies that result from expanding a certain number of nodes or more, that is, the “better” strategies. In other cases, it could be desirable to remember all previous strategies that correspond to situations that the agent has never experienced before, on the theory that it is better to have some information rather than none. This is the classic time versus memory tradeoff.

Expand Situation(t)
    max gain ← −∞; max sit ← nil; t′ ← t
    For l ∈ t.leaves                      /* l is a structure */
        t′.leaf ← l
        l.α ← Expected Strategy(l)
        t.α̂_l, t.β̂1···n_l ← Propagate Strategy(t, l)
        t′.gain ← P(t.α̂_l, t.β̂1···n_l) − P(t.α, t.β̂1···n_l) − TC(t, l)
        If t′.gain > max gain Then
            max gain ← t′.gain
            max sit ← t′
    If max gain > K Then
        t ← Propagate Expand(max sit)
        Return(Expand Situation(t))
    Return(t.α)

Propagate Expand(t)                       /* l is included in t.leaf */
    leaf ← t.leaf
    leaf.α ← Expand(leaf)
    t.α, t.β1···n ← Propagate Strategy(t, leaf)
    t.leaves ← New Leaves(t)
    Return(t)

Propagate Strategy(t, l)
    Propagates l.α all the way up the tree and returns the best strategy
    for the root situation, and the strategies for the others.

Expand(l)
    Returns the strategy we find by first calculating the matrix that
    corresponds to l, and then plugging in the zero-knowledge solution
    for all the children of l.

New Leaves(t)
    Returns only those leaves of t that can be expanded into full situations.

Figure 2: Limited Rationality RMM Algorithm
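In Python, the control flow of Figure 2 can be sketched as below. The recursion is written as a loop, and expected_strategy, propagate_strategy, expand, new_leaves, payoff, and TC are assumed to exist with the semantics described above; this is an outline of the selection loop, not the paper's actual code:

def expand_situation(t, K=0.0):
    # Repeatedly expand the leaf with the highest expected gain,
    # stopping when no leaf's expected gain exceeds K.
    while True:
        max_gain, max_leaf = float("-inf"), None
        for l in t.leaves:
            l.alpha = expected_strategy(l)          # expected value of f(x)
            alpha_hat, betas_hat = propagate_strategy(t, l)
            # E(G(t')) = P(a_hat, b_hat) - P(t.alpha, b_hat) - TC(t, l),
            # where payoff() evaluates the root matrix under the given
            # strategies (generalizing the two-agent P above).
            gain = (payoff(t, alpha_hat, betas_hat)
                    - payoff(t, t.alpha, betas_hat)
                    - TC(t, l))
            if gain > max_gain:
                max_gain, max_leaf = gain, l
        if max_leaf is None or max_gain <= K:
            return t.alpha          # stop deliberating; act on t.alpha
        # Propagate Expand(t, l): build M for the leaf (children get ZK)
        # and propagate the resulting strategy up to the root.
        max_leaf.alpha = expand(max_leaf)
        t.alpha, t.betas = propagate_strategy(t, max_leaf)
        t.leaves = new_leaves(t)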

Time Analysis: For the algorithm to work correctly, the time spent in metalevel thinking must be small compared to the time it takes to do an actual node expansion and propagation of a new solution (i.e., a PE(t, l)). Otherwise, the agent is probably better off choosing nodes to expand at random without spending time considering which one might be better.

A node expansion involves the creation of a new matrix M. If we assume that there are k agents and each one has n possible actions, then the number of elements in the matrix will be n^k. Each one of these elements represents the utility of the situation that results after all the agents take their actions. In order to calculate this utility, it is necessary to simulate the new state that results from all the agents taking their respective actions (i.e., as dictated by the element’s position in the matrix) and then calculate the utility, to some particular agent, of this new situation (see Figure 3). The calculation of this utility can be an arbitrarily complex function of the state, and must be performed for each of the n^k elements in the matrix.

Figure 3: Given the old situation, each one of the elements of the matrix corresponds to the payoff in one of the many possible next situations. Notice that, sometimes, the moves of the predators might interfere with each other. The predator calculating the matrix has to resolve these conflicts when determining the new situation, before calculating its utility.
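The n^k construction just described can be written as a direct enumeration; simulate and utility are placeholders for the application-specific state update and evaluation functions:

from itertools import product

def build_matrix(state, actions_per_agent, simulate, utility):
    # One entry per joint action: n^k entries for k agents with n
    # actions each. Each entry costs one simulation plus one utility
    # evaluation, which dominates the cost of a node expansion.
    matrix = {}
    for joint in product(*actions_per_agent):
        next_state = simulate(state, joint)   # resolve all moves at once
        matrix[joint] = utility(next_state)
    return matrix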

The next step is the propagation of the expected strategy. If we replace one of the ZK leaf strategies by some other strategy, then the time to propagate this change all the way to the root of the tree depends both on the distance between the root and leaf, and the time to propagate a strategy past a node. Because of the way strategies are calculated, it is almost certain that any strategy that is the result of evaluating a node will be a pure strategy.⁴ If all the children of a node are pure strategies, then the strategy the node evaluates to can be found by calculating the maximum of n numbers. In other words, the k-dimensional matrix collapses into a one-dimensional vector. If c of the children have mixed strategies, we would need to add n^(c+1) numbers and find the maximum partial sum. In the worst case, c = k − 1, so we need to add n^k numbers.

We can conclude that propagation of a solution will require a good number of additions and max functions, and these must be performed for each leaf we wish to consider. However, these operations are very simple since, in most instances, the propagation past a node will consist of one max operation. This time is small, especially when compared to the time needed for the simulation and utility calculation of n^k different possible situations. A more detailed analysis can only be performed on an application-specific basis, and it would have to take into account actual computational times.⁵

⁴In a pure strategy one action is chosen with probability 1, the rest with probability 0.

⁵Some basic calculations, along these lines, are shown in the results section.

Method          Avg. Nodes Expanded    Avg. Time to Capture/Surround
BFS 1 Level     1                      ∞ / Never Captured
BFS 2 Levels    4                      23.4
BFS 3 Levels    13                     22.4
BFS 4 Levels    40                     17.3
LR RMM          4.2                    18.5
Greedy          NA                     41.97

Table 1: Results of implementation of the Pursuit problem using simple BFS to various levels (i.e., regular RMM), using our Limited Rationality Recursive Modeling algorithm, and using a simple greedy algorithm, in which each predator simply tries to minimize its distance to the prey.

Implementation Strategies: A strategy we adopt to simplify the implementation is to classify agents according to types. Associated with each type should be a function that takes as input the agent’s physical situation and generates the full mental situation for that agent. This technique uses little memory and is an intuitive way of programming the agents. It assumes, however, that the mental situation can be derived solely from the physical. This was true in our experiments but need not always be so, such as when the mental situation might depend on individual agent biases or past experiences. The function Expected Strategy, which calculates the expected value of f(x), can then be implemented by a hash table for the agent type. The keys to the table would be similar physical situations and the values would be the last N strategies played in those situations. Grouping similar situations is needed because the number of raw physical situations can be very big. The definition of what situations are similar is heuristic and determined by the programmer.
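A minimal sketch of such a table (our own naming; the bucketing function that groups similar situations is supplied by the programmer, as noted above):

from collections import defaultdict, deque

N = 10                                  # remember the last N strategies
table = defaultdict(lambda: deque(maxlen=N))

def record_strategy(bucket, strategy):
    # bucket: a hashable key for a class of similar physical situations.
    table[bucket].append(strategy)

def expected_strategy(bucket, n_actions):
    past = table[bucket]
    if not past:                        # nothing remembered: fall back to ZK
        return [1.0 / n_actions] * n_actions
    # Expected strategy: component-wise average of remembered strategies.
    return [sum(s[i] for s in past) / len(past) for i in range(n_actions)]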

Implementation of Pursuit Task

The original algorithm has been implemented in a simulation of the pursuit task, using the MICE system (Montgomery & Durfee 1990). The experiments showed some promising results. The overall performance of the agents was maintained while the total number of expanded nodes was reduced by an order of magnitude.

We designed a function that takes as input a physical situation and a predator in that situation and returns the payoff matrix that the predator expects to get given each of the possible moves they all take. This function is used by all the predators (that is, we assumed all predators were of the same type).

Figure 4: Plot of the total time that would, on average, be spent by the agents before capturing the prey, for each method (BFS 2 Levels, BFS 3 Levels, BFS 4 Levels, LR RMM). The x axis represents the amount of time it takes to expand a node as a percentage (more or less) of the time it takes the agent to take action.

A predator’s payoff is the sum of two values. The first value is the change in the distance from the predator to the prey between the current situation and the new situation created after all agents have made their moves. The second value is 5k, where k is the number of quadrants around the prey that have a predator in them in the new situation. The quadrants are defined by drawing two diagonal lines across the prey (Gasser et al. 1989; Levy & Rosenschein 1992).
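A sketch of this payoff for one predator follows. The grid metric, the tie-breaking at the diagonals, and all names are our assumptions; the paper's actual function additionally resolves conflicting moves before scoring:

def manhattan(a, b):
    # Grid distance; the paper does not specify the exact metric.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def quadrant(prey, pos):
    # Quadrants are delimited by the two diagonals through the prey;
    # ties on the diagonals are broken arbitrarily here.
    dx, dy = pos[0] - prey[0], pos[1] - prey[1]
    return ("E" if dx > 0 else "W") if abs(dx) >= abs(dy) else \
           ("N" if dy > 0 else "S")

def predator_payoff(me_old, me_new, prey_old, prey_new, all_predators_new):
    # First term: how much closer this predator got to the prey.
    closer = manhattan(me_old, prey_old) - manhattan(me_new, prey_new)
    # Second term: 5k, where k is the number of occupied quadrants.
    k = len({quadrant(prey_new, p) for p in all_predators_new})
    return closer + 5 * k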

Since the matrices take into account all possible combinations of moves (5 for each predator and 4 for the prey), they are five-dimensional with a total of 2500 payoff entries. With matrices this big, even in a simple problem, it is easy to see why we wish to minimize the number of matrices we need to generate. We defined the physical situation which the predator is in as its relative position to the prey and to the other predators. These situations were then generalized such that distances greater than two are indistinguishable, except for quadrant information. This generalization formula served to shrink the number of buckets or keys in our hash table to a manageable number (from 4.1·10^11 to around 3000). Notice also that the number of buckets remains constant no matter how big the grid is.
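A hypothetical reconstruction of that generalization: clamp each relative offset at two, so that larger distances collapse together while direction (quadrant) information survives:

def clamp(d, limit=2):
    # Offsets beyond the limit become indistinguishable; only the sign
    # (direction) is preserved for far-away agents.
    return max(-limit, min(limit, d))

def situation_bucket(me, prey, other_predators):
    # Hash key: generalized positions of prey and predators relative to
    # this predator. Sorting removes the predator ordering, which we
    # assume (not stated in the paper) to be irrelevant.
    rel = lambda p: (clamp(p[0] - me[0]), clamp(p[1] - me[1]))
    return (rel(prey),) + tuple(sorted(rel(p) for p in other_predators))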

Results: We first ran tests without our algorithm and using simple Breadth First Search of the situation hierarchy. When using BFS to one level, the predators could not predict the actions of the others and so, they never got next to the prey. As we increased the number of levels, the predators could better predict what the others would do, which made their actions more coordinated and they managed to capture the prey. The goal for our algorithm was to keep the same good results while expanding as few nodes as possible. For comparison purposes, we also tested a very simple greedy algorithm where each predator simply tries to minimize its distance to the prey, irrespective of where the other predators are.

We then ran our Limited Rationality RMM algorithm on the same setup. We had previously compiled a hash table which contained several entries (usually around 5, but no more than 10) for almost all situations. This was done by generating random situations and running RMM to 3 or 4 levels on them to find out the strategy. The results, as seen in Table 1, show that our algorithm managed to maintain the performance of a BFS to 4 levels while only expanding little more than four nodes on the average. All the results are the averages of 20 or more runs.

Another way to view these results is by plotting the total time it takes the agents, on average, to surround the prey as a function of the time it takes to expand a node. If we assume that the time for an agent to make a move is 1 unit and we let the time to expand a node be x, the average number of turns before surrounding the prey be t_s, and the average number of nodes expanded be N, then the total time T the agents spend before surrounding the prey is approximately T = t_s · (N · x + 1), as shown in Figure 4. LR RMM is expected to perform better than all the others except when the time to expand a node is much smaller (.002 times smaller) than the time to perform an action.
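As a quick check of that figure, equating the totals for LR RMM and 4-level BFS using the Table 1 numbers recovers roughly the same crossover:

# T = t_s * (N * x + 1); find x where LR RMM and BFS-4 take equal time.
ts_lr, n_lr = 18.5, 4.2      # LR RMM: turns to surround, nodes expanded
ts_b4, n_b4 = 17.3, 40.0     # BFS to 4 levels
x = (ts_lr - ts_b4) / (ts_b4 * n_b4 - ts_lr * n_lr)
print(x)                     # ~0.00195: about .002 of an action's time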

Conclusions

We have presented an algorithm that provides an effective way of pruning a recursive model so that only a few critical nodes need to be expanded in order to get a quality solution. The algorithm does the double task of delivering an answer within a set of payoff/time cost constraints, and pruning unnecessary knowledge from the recursive model of a situation. We also gave a formal framework for talking about strategies, expected strategies, and expected gains from expanding a recursive model.

The experimental results proved that this algorithm works, in the example domain. We expect that the algorithm will provide similar results in other problem domains, provided that they show some correspondence between the physical situation and the strategy played by the agent. The test runs taught us two main lessons. Firstly, that there is a lot of useless information in these recursive models (i.e., information that does not influence the agent’s decision). Secondly, they showed us how much memory is actually needed for handling the computation of expected strategies, even without considering the agent’s mental state. This seems to suggest that more complete implementations will require a very smart similarity function for detecting which situations are similar to which others if we want to maintain the solution quality. We are currently investigating which techniques (e.g., reinforcement learning, neural networks, case-based reasoning) are best suited for this task.

Given what we have learned from our tests, there is a definite set of challenges that lie ahead in terms of improving and expanding the algorithm. First of all, we have to determine how we can include the mental state of an agent into the hashing function, which now contains only the generalized physical situation. This enhancement, along with the addition of more hash tables for different agent types, would be very useful for moving the algorithm to more complex domains. Also, we are studying possible methods that an agent can use for determining which strategy to play when it has little or no knowledge of its opponents’ behavior, except for whatever observations it has managed to record.

References

Durfee, E. H.; Gmytrasiewicz, P. J.; and Rosenschein, J. S. 1994. The utility of embedded communications and the emergence of protocols. In Proceedings of the 13th International Distributed Artificial Intelligence Workshop.

Durfee, E. H.; Lee, J.; and Gmytrasiewicz, P. J. 1993. Overeager reciprocal rationality and mixed strategy equilibria. In Proceedings of the Eleventh National Conference on Artificial Intelligence.

Gasser, L.; Rouquette, N. F.; Hill, R. W.; and Lieb, J. 1989. Representing and using organizational knowledge in distributed AI systems. In Gasser, L., and Huhns, M. N., eds., Distributed Artificial Intelligence, volume 2. Morgan Kaufmann Publishers. 55-78.

Gmytrasiewicz, P. J., and Durfee, E. H. 1993. Toward a theory of honesty and trust among communicating autonomous agents. Group Decision and Negotiation 2:237-258.

Gmytrasiewicz, P. J.; Durfee, E. H.; and Wehe, D. K. 1991. A decision-theoretic approach to coordinating multiagent interactions. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence.

Gmytrasiewicz, P. J. 1992. A Decision-Theoretic Model of Coordination and Communication in Autonomous Systems (Reasoning Systems). Ph.D. Dissertation, University of Michigan.

Korf, R. E. 1992. A simple solution to pursuit games. In Proceedings of the 11th International Distributed Artificial Intelligence Workshop.

Levy, R., and Rosenschein, J. S. 1992. A game theoretic approach to the pursuit problem. In Proceedings of the 11th International Distributed Artificial Intelligence Workshop.

Montgomery, T. A., and Durfee, E. H. 1990. Using MICE to study intelligent dynamic coordination. In Proceedings of the IEEE Conference on Tools for AI.

Russell, S., and Wefald, E. 1991. Do The Right Thing. Cambridge, Massachusetts: The MIT Press.

Sen, S., and Durfee, E. H. 1994. Adaptive surrogate agents. In Proceedings of the 13th International Distributed Artificial Intelligence Workshop.

Stephens, L. M., and Merx, M. B. 1990. The effect of agent control strategy on the performance of a DAI pursuit problem. In Proceedings of the 9th International Distributed Artificial Intelligence Workshop.

Vidal, J. M., and Durfee, E. H. 1994. Agent modeling methods using limited rationality. In Proceedings of the Twelfth National Conference on Artificial Intelligence, 1495.
