Probability Spaces
The relationship between probability spaces and probability distributions. Probability space and probability distribution are closely related concepts in the field of probability theory. A probability space consists of three components: a sample space, a set of events, and a probability measure. The sample space is the set of all possible outcomes of an experiment, the set of events is a collection of subsets of the sample space, and the probability measure assigns a probability to each event in the set of events. The probability distribution, on the other hand, describes the likelihood of each possible outcome of a random variable. It provides a mathematical model for the randomness inherent in a system or process.
In a probability space, the sample space represents all the possible outcomes of an experiment, and it is the foundation of the entire probability theory. It provides a framework for analyzing uncertainty and making predictions based on statistical data. The set of events in a probability space is crucial for determining the probability of various outcomes and understanding the likelihood of different scenarios. The probability measure assigns a numerical value to each event in the set of events, representing the likelihood of that event occurring. It is a fundamental concept that enables us to quantify uncertainty and make informed decisions.
Chapter 6: Conditional Probability and Conditional Expectation. 6.1 Definition and Properties. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $B \in \mathcal{F}$ with $P(B) > 0$. For every $A \in \mathcal{F}$ write $P_B(A) = P(A \mid B) = \frac{P(AB)}{P(B)}$; it is then easy to verify that $(\Omega, \mathcal{F}, P_B)$ is a probability space.
Consider the integral of a random variable $\xi$ on $(\Omega, \mathcal{F})$ with respect to this probability space: if $\int_\Omega \xi \, dP_B$ exists, it is called the conditional expectation of $\xi$ given the event $B$, written $E(\xi \mid B)$; that is, $E(\xi \mid B) = \int_\Omega \xi \, dP_B$.
Proposition 1: If $E\xi$ exists, then $E(\xi \mid B)$ exists and $E(\xi \mid B) = \frac{1}{P(B)} \int_B \xi \, dP$.
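On a finite sample space Proposition 1 can be checked by direct computation. The sketch below uses exact rational arithmetic; the fair-die example and the names `omega`, `xi`, `B` are purely hypothetical, chosen only to illustrate the formula.

```python
from fractions import Fraction

# One roll of a fair die (hypothetical example).
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}   # uniform probability measure
xi = lambda w: w                         # random variable: the face value

B = [w for w in omega if w % 2 == 0]     # event "even face"
P_B = sum(P[w] for w in B)               # P(B) = 1/2
cond_exp = sum(xi(w) * P[w] for w in B) / P_B   # (1/P(B)) * int_B xi dP

print(cond_exp)   # -> 4, the average of the even faces 2, 4, 6
```

Using `Fraction` keeps every intermediate value exact, so the result is literally the average of $\xi$ over $B$, as the proposition asserts.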
Thus the conditional expectation of $\xi$ given the event $B$ is just the "average value" of $\xi$ over $B$.
Moreover, the conditional probability of an event $A$ given an event $B$, $P(A \mid B) = E(I_A \mid B)$, can be viewed as a special case of conditional expectation.
Let $\{B_n\} \subset \mathcal{F}$ be a partition of $\Omega$ with $P(B_n) > 0$ ($n = 1, 2, \dots$), and let $\mathcal{A} = \sigma(B_n, n = 1, 2, \dots)$; then $\mathcal{A} \subset \mathcal{F}$.
If $E\xi$ exists, then $\sum_n E(\xi \mid B_n) I_{B_n}$ is a measurable function on $(\Omega, \mathcal{A})$, called the conditional expectation of $\xi$ given the $\sigma$-algebra $\mathcal{A}$ with respect to $P$ and written $E(\xi \mid \mathcal{A})$; that is, $E(\xi \mid \mathcal{A}) = \sum_n E(\xi \mid B_n) I_{B_n}$.
Proposition 2: For every $B \in \mathcal{A}$ with $P(B) > 0$, $E(\xi \mid B) = \frac{1}{P(B)} \int_B E(\xi \mid \mathcal{A}) \, dP$.
Proof: For any $B \in \mathcal{A}$ there exists $K \subset \{1, 2, \dots\}$ such that $B = \sum_{i \in K} B_i$, and
$$\int_B E(\xi \mid \mathcal{A}) \, dP = \int_B \sum_n E(\xi \mid B_n) I_{B_n} \, dP = \sum_{i \in K} E(\xi \mid B_i) P(B_i) = \sum_{i \in K} \int_{B_i} \xi \, dP = \int_B \xi \, dP = P(B) \, E(\xi \mid B).$$
Accordingly, we may define the conditional expectation of $\xi$ given the $\sigma$-algebra $\mathcal{A}$ as a measurable function $E(\xi \mid \mathcal{A})$ on $(\Omega, \mathcal{A})$ satisfying
$$\int_B E(\xi \mid \mathcal{A}) \, dP = \int_B \xi \, dP, \qquad \forall B \in \mathcal{A}.$$
Since the indefinite integral $v(B) = \int_B \xi \, dP$, $B \in \mathcal{A}$, is a signed measure on $(\Omega, \mathcal{A})$ with $v \ll P$, the Radon-Nikodym theorem yields a unique (a.s. $P$) measurable function on $(\Omega, \mathcal{A})$ satisfying this identity, namely $E(\xi \mid \mathcal{A}) = \frac{dv}{dP}$. By Proposition 2, the two definitions agree.
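For a $\sigma$-algebra generated by a finite partition, $E(\xi \mid \mathcal{A})$ is piecewise constant and the defining identity can be verified numerically. A minimal sketch under that discrete assumption; the die and the two-cell partition are hypothetical.

```python
from fractions import Fraction

# One roll of a fair die, with A generated by the partition {odd, even}.
omega = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in omega}
xi = lambda w: w

partition = [[1, 3, 5], [2, 4, 6]]       # {B_1, B_2}

def cond_exp_given(B):
    """E(xi | B) = (1/P(B)) * int_B xi dP  (Proposition 1)."""
    pB = sum(P[w] for w in B)
    return sum(xi(w) * P[w] for w in B) / pB

# E(xi | A) is constant on each cell: E(xi | A)(w) = E(xi | B_n) for w in B_n
E_xi_A = {w: cond_exp_given(B) for B in partition for w in B}

# The defining identity: int_B E(xi | A) dP = int_B xi dP for each cell B
for B in partition:
    assert sum(E_xi_A[w] * P[w] for w in B) == sum(xi(w) * P[w] for w in B)

print(E_xi_A[1], E_xi_A[2])   # -> 3 4 (averages over the odd and even faces)
```

The same check extends to any $B$ that is a union of cells, which is exactly the computation in the proof of Proposition 2.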
Chapter 1: Preliminaries. §1.1 Probability Spaces. Random experiment; the sample space is denoted by $\Omega$.
Definition 1.1 Let $\Omega$ be a set and let $\mathcal{F}$ be a family consisting of certain subsets of $\Omega$.
If (1) $\Omega \in \mathcal{F}$; (2) $A \in \mathcal{F}$ implies $\bar{A} = \Omega \setminus A \in \mathcal{F}$; (3) $A_n \in \mathcal{F}$, $n = 1, 2, \dots$, implies $\bigcup_{n=1}^{\infty} A_n \in \mathcal{F}$; then $\mathcal{F}$ is called a $\sigma$-algebra (Borel field).
$(\Omega, \mathcal{F})$ is called a measurable space, and the elements of $\mathcal{F}$ are called events.
It follows easily from the definition that: (4) $\emptyset \in \mathcal{F}$; (5) if $A, B \in \mathcal{F}$, then $A \cup B,\ A \cap B,\ A \setminus B \in \mathcal{F}$; (6) if $A_i \in \mathcal{F}$, $i = 1, 2, \dots$, then $\bigcup_{i=1}^{n} A_i,\ \bigcap_{i=1}^{n} A_i,\ \bigcap_{i=1}^{\infty} A_i \in \mathcal{F}$. Definition 1.2 Let $(\Omega, \mathcal{F})$ be a measurable space and let $P(\cdot)$ be a real-valued function defined on $\mathcal{F}$.
If (1) $0 \le P(A) \le 1$ for every $A \in \mathcal{F}$; (2) $P(\Omega) = 1$; (3) for pairwise disjoint events $A_1, A_2, \dots$ (that is, $A_i \cap A_j = \emptyset$ whenever $i \ne j$), $P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)$; then $P$ is called a probability on $(\Omega, \mathcal{F})$, $(\Omega, \mathcal{F}, P)$ is called a probability space, and $P(A)$ is the probability of the event $A$.
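When $\Omega$ is finite and $\mathcal{F}$ is the full power set, the three axioms of Definition 1.2 can be checked mechanically. A small sketch with hypothetical point masses:

```python
from fractions import Fraction
from itertools import chain, combinations

# Hypothetical point masses on a three-point space, summing to 1.
omega = ['a', 'b', 'c']
p = {'a': Fraction(1, 2), 'b': Fraction(3, 10), 'c': Fraction(1, 5)}

def P(A):
    return sum(p[w] for w in A)

# F = the full power set of omega (2^3 = 8 events)
events = [set(s) for s in chain.from_iterable(
    combinations(omega, r) for r in range(len(omega) + 1))]

assert all(0 <= P(A) <= 1 for A in events)   # axiom (1)
assert P(set(omega)) == 1                    # axiom (2)
# axiom (3), finite form: additivity over disjoint events
A, B = {'a'}, {'b', 'c'}
assert P(A | B) == P(A) + P(B)
print(len(events))   # -> 8
```

Of course only finite additivity can be checked on a finite space; countable additivity in axiom (3) is the genuinely stronger requirement.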
Definition 1.3 Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\mathcal{G} \subset \mathcal{F}$. If for any $A_1, A_2, \dots, A_n \in \mathcal{G}$, $n = 1, 2, \dots$, we have $P\left(\bigcap_{i=1}^{n} A_i\right) = \prod_{i=1}^{n} P(A_i)$, then $\mathcal{G}$ is called an independent family of events.
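Definition 1.3 can be illustrated on the smallest nontrivial example; a sketch assuming two independent flips of a fair coin (the event names are hypothetical):

```python
from fractions import Fraction
from itertools import product

# Two flips of a fair coin: 4 equally likely outcomes.
omega = list(product('HT', repeat=2))
P = lambda A: Fraction(len(A), len(omega))

A1 = {w for w in omega if w[0] == 'H'}   # "first flip is heads"
A2 = {w for w in omega if w[1] == 'H'}   # "second flip is heads"

# Definition 1.3 with n = 2: P(A1 ∩ A2) = P(A1) * P(A2)
assert P(A1 & A2) == P(A1) * P(A2)
print(P(A1 & A2))   # -> 1/4
```

For a larger family $\mathcal{G}$, the product identity must hold for every finite subfamily, not just for pairs; pairwise independence is strictly weaker.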
§1.2 Random Variables and Their Distributions. Random variable $X$; distribution function $F(x)$; $n$-dimensional random variables ($n$-dimensional random vectors); joint distribution functions; independence of $\{X_t, t \in T\}$.
§1.3 Numerical Characteristics of Random Variables. Definition 1.7 Let the random variable $X$ have distribution function $F(x)$. If $\int_{-\infty}^{\infty} |x| \, dF(x) < \infty$, then $E(X) = \int_{-\infty}^{\infty} x \, dF(x)$ is called the mathematical expectation, or mean, of $X$.
The integral on the right-hand side is a Lebesgue-Stieltjes integral.
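For a discrete distribution the Lebesgue-Stieltjes integral of Definition 1.7 reduces to a sum over the jump points of $F$. A minimal sketch with a hypothetical two-point distribution:

```python
from fractions import Fraction

# Hypothetical two-point distribution: P(X = 0) = 1/4, P(X = 1) = 3/4.
pmf = {0: Fraction(1, 4), 1: Fraction(3, 4)}

# E(X) = int x dF(x) reduces to the sum over the jump points of F
EX = sum(x * px for x, px in pmf.items())
print(EX)   # -> 3/4
```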
Variance; $B_{XY} = E\left[(X - EX)(Y - EY)\right]$ is the covariance of $X$ and $Y$, and $\rho_{XY} = \frac{B_{XY}}{\sqrt{DX}\sqrt{DY}}$ is the correlation coefficient of $X$ and $Y$.
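The covariance and correlation formulas can be evaluated directly from a joint probability mass function; a sketch with hypothetical values:

```python
from fractions import Fraction
import math

# Hypothetical joint pmf P(X = x, Y = y) on {0,1} x {0,1}.
joint = {(0, 0): Fraction(2, 5), (0, 1): Fraction(1, 10),
         (1, 0): Fraction(1, 10), (1, 1): Fraction(2, 5)}

EX = sum(x * p for (x, y), p in joint.items())
EY = sum(y * p for (x, y), p in joint.items())
B_XY = sum((x - EX) * (y - EY) * p for (x, y), p in joint.items())
DX = sum((x - EX) ** 2 * p for (x, y), p in joint.items())
DY = sum((y - EY) ** 2 * p for (x, y), p in joint.items())
rho = float(B_XY) / math.sqrt(float(DX * DY))   # rho_XY = B_XY / sqrt(DX*DY)

print(B_XY, rho)   # -> 3/20 0.6
```

Exact fractions avoid rounding error in the covariance; only the square root in $\rho_{XY}$ forces a floating-point value.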
Conditional Probability and Measure Theory
Conditional probability and measure theory are two important concepts in probability theory. Conditional probability is the probability that an event occurs under some condition or restriction. Measure theory, in turn, is the foundation of probability theory: it defines probability spaces and families of events, and gives the properties of, and rules for operating with, probability measures. In measure theory, a probability space is a triple $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra on $\Omega$, and $P$ is a probability measure defined on $\mathcal{F}$. The events are precisely the elements of $\mathcal{F}$. The probability measure $P$ assigns to each event the probability that it occurs.
Conditional probability, again, is the probability that an event occurs given some known condition. In measure-theoretic terms it can be defined by a change of measure: conditioning maps the original probability measure $P$ to a new probability measure $P'$, chosen so that $P'$ satisfies the definition of conditional probability.
Starting from measure theory and the definition of conditional probability, one can go on to develop the other concepts of probability theory, such as random variables, distribution functions, expectation, and variance. These concepts are applied widely throughout probability theory and can be used to treat all kinds of problems involving uncertainty and risk.
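On a finite sample space the discussion above reduces to the elementary formula $P(A \mid B) = P(A \cap B) / P(B)$. A minimal sketch; the die and the two events are hypothetical.

```python
from fractions import Fraction

# One roll of a fair die.
omega = set(range(1, 7))
P = lambda E: Fraction(len(E), len(omega))

A = {2, 3, 5}                      # "prime face"
B = {2, 4, 6}                      # "even face"
P_A_given_B = P(A & B) / P(B)      # P(A | B) = P(A ∩ B) / P(B)
print(P_A_given_B)   # -> 1/3 (only the face 2 is both prime and even)
```

Viewed as a change of measure, the map $A \mapsto P(A \cap B)/P(B)$ is itself a probability measure concentrated on $B$.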
Probability spaces. We are going to give a mathematical definition of a probability space. We will first make some remarks which will motivate this definition.

A fundamental notion in probability theory is that of an experiment. An experiment is an activity which can be repeated any number of times, each repetition of which has an outcome. We require that information about outcomes of past performances of the experiment provides no information about future outcomes of performances of the experiment. The set of outcomes of the experiment is called the sample space of the experiment. The points of the sample space, which are outcomes, are sometimes called sample points. (It is possible that the experiment have only one outcome, so that the sample space has only one member, but this situation is probabilistically trivial.)

Let us give two examples. For the first example, suppose I take a coin out of my pocket, flip it three times and observe for each flip whether it came up heads or tails. A natural sample space for this experiment would be the set of ordered triples

{(H,H,H), (H,H,T), (H,T,H), (H,T,T), (T,H,H), (T,H,T), (T,T,H), (T,T,T)}.

Other sample spaces are possible. You may encode the outcomes as bit patterns, taking 1 for heads and 0 for tails, and ending up with

{(1,1,1), (1,1,0), (1,0,1), (1,0,0), (0,1,1), (0,1,0), (0,0,1), (0,0,0)},

or you could use the numbers with the foregoing binary representations, ending up with

{7, 6, 5, 4, 3, 2, 1, 0}.

But wait: maybe you don't care about the pattern of heads and tails. You might be interested in the sum of the angles with the vertical made by the plane of the coin the instant it strikes the table. This would be considered a different experiment. So the emphasis is on the outcome more than the activity which produced it.

As a second example, consider a barrel full of batteries of all different sizes and shapes. You pick a battery out of the barrel (I didn't and won't say how, but that matters if we're interested in probabilities) and we measure its voltage on
my voltmeter. If you do that, the outcome will be a real number between 0 and 15. Now you can argue that there are lots of real numbers between 0 and 15 that will never arise as outcomes, like $\sqrt{2} \times 10^{-3000}$ for example, and you'll be right. As often happens in science, we are replacing some physical reality by an idealization.

In probability theory one associates with a sample space a family of subsets of the sample space, the members of which are called events. In this course, for all practical purposes, every subset of the sample space will be an event. It turns out that there are serious technical and intuitive problems with this, but these are beyond the scope of the course and we will not go into them much. A natural event associated to the first example is "more heads than tails", which is

{(H,H,H), (H,H,T), (H,T,H), (T,H,H)}.

A natural event associated with the second example is (1.2, 1.8). If an outcome, which corresponds to a battery picked out of the barrel, is in this event, then I might reasonably put the battery in my flashlight, assuming the battery is a "D" battery and my flashlight takes "D" batteries.

Now we come to the key concept. Fix an experiment with sample space S and fix an event E. What is the probability P(E) of E? Here is a very good nonmathematical definition. Perform the experiment indefinitely. Now that's impossible, but imagine doing it anyway. This results in a sequence s_1, s_2, ..., s_n, ... of outcomes, or sample points if you prefer that terminology. (We're using s's here because "sample" starts with s.) By the way, the repetitions of the experiment should be independent, whatever that means. At the very least, you shouldn't rig the experiment in any way that allows you to obtain any information about the n-th outcome until the experiment has been performed for the n-th time. Given such a sequence s of outcomes we let ν(E, n) be the number of i ∈ {1, ..., n} such that s_i ∈ E, and define the probability of the event E to be

P(E) = lim_{n→∞} ν(E, n)/n.

Now it is not at all clear that this limit exists. But
let's imagine that it does. There is an outstanding result in probability theory called the strong law of large numbers that says that the limit nearly always (whatever that means) does exist.

Let us make some observations. First, 0 ≤ P(E) ≤ 1. Next, the probability of the empty event (no sample points) is zero and the probability of the certain event (all sample points) is one. Finally, suppose E_1, ..., E_N are disjoint, or mutually exclusive, events. Then

ν(⋃_{i=1}^N E_i, n) = ∑_{i=1}^N ν(E_i, n).

Passing to the limit, we find the basic identity

P(⋃_{i=1}^N E_i) = ∑_{i=1}^N P(E_i).

We have just described the relative frequency interpretation of probability. This is discussed in the book in some detail. The book also discusses some other ways of thinking about probability. I find them incomprehensible. There are other good ways of thinking about probability but we will not go into them.

Now we're ready to make the formal definition of a probability space.

Definition. A probability space is an ordered triple (S, E, P) where S is a set, E is a family of subsets of S, and P is a function on E taking values in [0, 1], such that the following conditions hold: ∅ ∈ E; S ∈ E; if E ∈ E then S ∼ E ∈ E; if C is a countable family of sets in E then ⋃C ∈ E; P(∅) = 0; P(S) = 1; and if C is a countable disjoint family of sets in E then

P(⋃C) = ∑_{E ∈ C} P(E).

Here is a typical way one builds a probability space. Start with a finite set S. Next, let p : S → [0, 1] be such that ∑_{s ∈ S} p(s) = 1. Let E be the family of all subsets of S. (There are 2^N of them, where N is the number of points of S, right?) For each E ∈ E let

P(E) = ∑_{s ∈ E} p(s).

Then (S, E, P) is a probability space. For example, if the coin in the example above is fair it would be reasonable to take S to be the set of ordered triples with entries H or T and to take p to be 1/8 on each of the 8 sample points. The probability of the event "more heads than tails" would then be 1/2.
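The finite construction just described can be written out directly. A sketch of the fair-coin example, assuming the uniform masses of 1/8 stated above:

```python
from fractions import Fraction
from itertools import product

# S: the 8 ordered triples of H's and T's; p: 1/8 on each point (fair coin).
S = list(product('HT', repeat=3))
p = {s: Fraction(1, 8) for s in S}

def P(E):
    """P(E) = sum of the point masses p(s) over s in E."""
    return sum(p[s] for s in E)

more_heads = {s for s in S if s.count('H') > s.count('T')}
print(P(more_heads))   # -> 1/2
```

The event "more heads than tails" contains 4 of the 8 sample points, so its probability is 4 × 1/8 = 1/2, matching the text.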