
MathRep2


Seek simplicity, and distrust it.

-Alfred North Whitehead

In James R. Newman, The World of Mathematics, Vol. II, p. 1055, Redmond, WA: Tempus (1988).

2 MATHEMATICAL REPRESENTATIONS FOR COMPLEX DYNAMICS

2.1 Mathematical Representations and Theoretical Thinking

Measurement cannot be separated from theory. Theoretical thinking proceeds better through a mathematical representation. The choice of mathematical representation depends on the essential features in empirical observation and the theoretical perspective. A mathematical representation should be powerful enough to display the stylized features to be explained, and simple enough that its mathematical solution remains manageable. In the history of science, theoretical breakthroughs often introduce radical changes in mathematical representation. For example, physicists once considered Euclidean geometry an intrinsic property of space. Einstein's work on gravitation theory offered a better alternative in theoretical physics: a specific non-Euclidean geometry.

Mathematical representation is an integrated part of theoretical thinking. New mathematical representations are introduced under new perspectives. Newtonian mechanics was developed by means of deterministic representation. Probability representation made its way through kinetic theory of gas, statistical mechanics, and quantum mechanics. The study of deterministic chaos in Hamiltonian and dissipative systems reveals a complementary relation between these two representations.

There are different motives in choosing a mathematical representation. For some scientists, the choice of mathematical representation is a choice of belief. Einstein refused the probability interpretation of quantum mechanics because of his belief that God does not throw dice. Equilibrium economists reject economic chaos for fear that the existence of a deterministic pattern implies a failure of the perfect market. For other scientists, the issue of mathematical representation is a matter of taste and convenience. The Hamiltonian formulation in theoretical economics has tremendous appeal because of its theoretical beauty and logical elegance. The discrete-time framework is dominant in econometrics because of its computational convenience in regression practice. For us, empirical relevance and theoretical generality are the main drives in seeking new mathematical representations.

30 Persistent Business Cycles

The equilibrium feature of economic movements is characterized by the Gaussian distribution with a finite mean and variance. The disequilibrium features can be described by a unimodal distribution that deviates from the Gaussian distribution. During a bifurcation or transition process, a U-shaped or multimodal distribution may occur under nonequilibrium conditions. We will study the deterministic and probabilistic representations for equilibrium, disequilibrium, and nonequilibrium conditions.

2.2 Trajectory and Probability Representation of Dynamical Systems

Both the trajectory and probability representations are mathematical abstractions of the real world. In physics, the trajectory of a planet is an abstraction and approximation in which we ignore the size of the planet and its perturbations during movement. In the biological and social sciences, the trajectory representation can be perceived as an average behavior over repeated observations. The same averaging procedure can be applied to the probability representation. The probability representation holds for large ensembles with identical properties.

People may think that deterministic and stochastic approaches are conflicting representations. One strong argument in favor of stochastic modeling in economics is human free will, as against determinism. However, this belief ignores the simple fact that these two representations coexist in the theoretical literature. For example, the wave equation in quantum mechanics is a deterministic equation, yet its wave function has a probability interpretation. Traffic flow can be described by deterministic and stochastic models that have been verified by extensive experiments (Prigogine and Herman 1971).

For a given deterministic equation, we can have both trajectory representation and probability representation.

The choice of mathematical representation depends on the question asked in your research. If your goal is forecasting a time path, you need the trajectory representation. If your interest is in average properties such as the mean and variance, you need the probability representation.

2.2.1 Time Averaging, Ensemble Averaging, and Ergodicity

Trajectory representation can be easily visualized by a time path of an observable such as a moving particle. Probability representation is more difficult because it contains more information.

There are two approaches to introducing the concept of a probability distribution. From a repeated experiment such as coin tossing, a static approach can define a probability distribution as the average outcome. Probability represents the expectation from an event or experiment. A dynamic approach can construct a histogram from a time series. A dynamic probability distribution can be revealed from a histogram if the underlying dynamics are not changing over time. The question is whether the time average represents the true probability of future events.

In statistical physics, the probability distribution is described in an ensemble that consists of a large number of identical systems. The probability distribution is considered as an average behavior of these identical systems. In the mathematical literature, if the time average is equal to the ensemble average, this property is called ergodicity. In mathematical economics, ergodic behavior is often assumed for stochastic models in time series analysis (Granger and Teräsvirta 1993, Hamilton 1994). In statistical physics, it is hard to establish ergodic behavior for physical systems. For example, non-ergodic behavior was found in the anharmonic oscillators of Hamiltonian systems (Reichl 1998). Contrary to previous belief, ergodicity and the approach to equilibrium do not hold for most Hamiltonian systems. Chaos in Hamiltonian systems plays an important role in approaching equilibrium and ergodicity. Almost all dynamic systems exhibit chaotic orbits. Nonlinearity and chaos appear to be the rule rather than the exception in dynamical systems.

Before discussing a probability distribution, we introduce the concept of a mean or average value μ, which is a numerical measurement of the mathematical expectation E[x]. If there are N measurements X_n of a variable X, with n = 1, 2, ..., N, we have

$$\mu = \frac{X_1 + X_2 + \cdots + X_N}{N} \qquad (2.2.1a)$$

For a probability distribution P(x), where P(x) ≥ 0 and ∫P(x) dx = 1, we may have:

$$E[x] = \mu = \langle x \rangle = \int x\,P(x)\,dx \qquad (2.2.1b)$$

We also introduce the variance VAR[x] and the standard deviation σ:

$$\mathrm{VAR}[x] = \langle (x - \mu)^2 \rangle = \sigma^2 \qquad (2.2.2)$$

In general, we can define the n-th central moment about the mean:

$$\mu_n = \langle (x - \mu)^n \rangle = \int (x - \mu)^n\,P(x)\,dx \qquad (2.2.3)$$

The most important distribution in probability theory is the Gaussian or normal distribution. A Gaussian distribution is completely determined by its first two moments: the mean μ and the standard deviation σ.


Figure 2.1 Deterministic and probabilistic representation of Gaussian white noise.

$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\,\exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) \qquad (2.2.4)$$

We may simply denote the Gaussian distribution as N(μ, σ).

As a numerical example, the time path and histogram of Gaussian random noise are shown in Figure 2.1.

In principle, a stochastic system can be uniquely determined if all of its infinitely many moments are known. In empirical science, the critical issue is to determine the minimum number of moments needed for some characteristic behavior. In equilibrium statistical physics, the first two moments are good enough for many applications. In nonequilibrium statistical physics, higher moments may be needed. For example, the theory of non-Gaussian behavior in strong turbulence predicts moments up to the seventh order, which are observed in experiments. In economics, the first two to four moments are studied in empirical analysis.
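As a minimal numerical sketch of Eqs. (2.2.1a) and (2.2.2), the following Python fragment estimates the mean and variance of a finite sample of Gaussian white noise (the seed and sample size are arbitrary choices for illustration; the chapter itself uses no code):

```python
import random

# Estimate the first two moments of a Gaussian N(0, 1) sample:
# Eq. (2.2.1a) for the mean, Eq. (2.2.2) for the variance.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

N = len(sample)
mu = sum(sample) / N                          # Eq. (2.2.1a): arithmetic mean
var = sum((x - mu) ** 2 for x in sample) / N  # Eq. (2.2.2): central second moment
# for N(0, 1) the estimates settle near 0 and 1 as N grows
```

The sample estimates converge to the population moments at the familiar 1/√N rate.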

2.2.2 The Law of Large Numbers, the Central Limit Theorem, and Their Breakdown

For a large number of events, the Gaussian distribution provides a good description of their distribution. We have a set of N independent stochastic variables X_1, X_2, . . . , X_N with a common distribution. If their mean μ exists, the law of large numbers states that (Feller 1968):

$$P\left\{\left|\frac{X_1 + X_2 + \cdots + X_N}{N} - \mu\right| > \varepsilon\right\} \to 0 \qquad (2.2.5)$$

We denote their sum S_N = X_1 + X_2 + · · · + X_N. Therefore, S_N's average (S_N/N) approaches μ, and S_N approaches Nμ.

If the first two moments exist for the above stochastic variables, the central limit theorem states that the probability distribution of S_N approaches a Gaussian distribution with mean Nμ and standard deviation √N σ (van Kampen 1992):

$$P(S_N) \to N(N\mu,\ \sqrt{N}\sigma) \qquad (2.2.6)$$

The Gaussian distribution is widely applied in statistics and econometrics because of the power of the law of large numbers and the central limit theorem. Therefore, the limitation of the Gaussian distribution can be seen when the law of large numbers and the central limit theorem break down.
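The working of the central limit theorem in Eq. (2.2.6) can be sketched numerically: sums of N uniform draws, centered by Nμ and scaled by √N σ, look Gaussian (the choices N = 48, 50,000 repetitions, and the seed are illustrative assumptions, not from the text):

```python
import math
import random
import statistics

# Standardized sums (S_N - N*mu) / (sqrt(N)*sigma) of uniform draws,
# which the CLT says approach N(0, 1).
random.seed(1)
N = 48
mu, sigma = 0.5, math.sqrt(1 / 12)  # mean and std of a single U(0, 1) draw

def standardized_sum():
    s = sum(random.random() for _ in range(N))    # S_N
    return (s - N * mu) / (math.sqrt(N) * sigma)  # center and scale

z = [standardized_sum() for _ in range(50_000)]
m, sd = statistics.fmean(z), statistics.pstdev(z)
inside = sum(1 for v in z if abs(v) < 1.96) / len(z)  # ~0.95 for a Gaussian
```

The sample mean, standard deviation, and the ±1.96σ coverage all match the standard Gaussian benchmark.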

One notable case is the non-existence of the variance. For example, the Levy-Pareto distribution has infinite variance. The Levy distribution L(x) has an inverse power tail for large |x| (Montroll and Shlesinger 1984):

$$L(x) \to \frac{\Gamma(1+\alpha)\,\sin(\alpha\pi/2)}{\pi\,|x|^{1+\alpha}} \qquad (2.2.7)$$

where 0 < α < 2.

When α = 1, the Levy distribution reduces to the special case of the Cauchy distribution, which has a finite center b but infinite variance (Feller 1968):

$$f(x) = \frac{a}{\pi\left[(x - b)^2 + a^2\right]} \qquad (2.2.8)$$

Empirical evidence of the Levy distribution is found in the broad distribution of commodity prices and in long-range correlations in turbulent flow (Mandelbrot 1963, Cootner 1964, Bouchaud and Georges 1990, Klafter, Shlesinger, and Zumofen 1996, Reichl 1998).
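The breakdown of the law of large numbers for the Cauchy density of Eq. (2.2.8) can be sketched with draws from the inverse CDF x = b + a·tan(π(u − 1/2)) (a standard sampling trick; the seed and sample size are illustrative assumptions):

```python
import math
import random
import statistics

# Cauchy draws via the inverse CDF of Eq. (2.2.8).
random.seed(2)

def cauchy(b=0.0, a=1.0):
    return b + a * math.tan(math.pi * (random.random() - 0.5))

draws = [cauchy() for _ in range(100_000)]
sample_median = statistics.median(draws)  # robust: settles near the center b = 0
sample_std = statistics.pstdev(draws)     # wild: the population variance is infinite
```

The sample median converges, but the sample standard deviation stays large and erratic no matter how many draws are taken, illustrating why the first two moments cannot summarize a Levy-type distribution.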

Figure 2.2 Gaussian and Cauchy distributions. The N(0, 1) distribution is the tallest, in solid line. The Cauchy(1, 0) distribution is in the middle, in dashed line, and the Cauchy(π, 0) distribution is the lowest and fattest, in dotted line.

A comparison of the Cauchy distribution and the Gaussian distribution is shown in Figure 2.2. Both are unimodal distributions with zero mean. The variance of the standard Gaussian distribution is 1, while the variance of the Cauchy distribution is infinite because of its long tails.

2.2.3 U-Shaped Distribution

Another interesting case is the U-shaped distribution such as the polarization in ferromagnetism and public opinion (Haken 1977, Chen 1991). For a U-shaped distribution, the concept of the expectation is meaningless, because the mean is the most unlikely event.

In probability theory, the arc sine law for last visits indicates a startling result in chance fluctuation (Feller 1968). Consider an ideal coin-tossing game of 2n trials, where the chances of tossing a head or a tail are equal. If the accumulated number of trials during which one player is in the lead is 2k, the probability of this event is related to the following U-shaped distribution function:

$$f(x) = \frac{1}{\pi\sqrt{x(1 - x)}} \qquad (2.2.9)$$

where x = k/n and 0 < x < 1.

Figure 2.3 The arc sine distribution for coin tossing.


Its probability distribution is shown in Figure 2.3. Its mean is 0.5. Its variance is 0.125. Its skewness is zero. Its kurtosis β₂ is −1.5. We must note that even though the distribution of the sample mean approaches the Gaussian distribution, the mean of the arc sine distribution represents the least likely event! This is a case in which the central limit theorem is valid for a U-shaped distribution, but the mean value may be quite misleading!

People may think the most likely outcome is for each player to lead for about half of the trials. It is not! The most probable values for k are the extremes 0 and n. It is quite likely that in a long coin-tossing game, one of the players remains the whole time on the winning side, and the other on the losing side. For example, in 20 tosses, the probability of leading 16 times or more is about 0.685, but the probability of leading 10 times is only 0.06! Therefore, the intuition of time averaging may lead to an erroneous picture of the probable effects of chance fluctuations. In other words, the expected value is misleading under a U-shaped distribution.
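A short simulation makes the arc sine effect concrete: the fraction of a coin-tossing game spent on the winning side piles up at the extremes, not near 1/2 (the game length of 20 tosses, the repetition count, the tie-breaking rule in the comment, and the seed are illustrative assumptions):

```python
import random

# Fraction of a +/-1 random walk spent on the positive (winning) side.
random.seed(3)

def lead_fraction(n):
    s, lead = 0, 0
    for _ in range(n):
        prev = s
        s += random.choice((1, -1))
        # count a tied step as belonging to the side led just before the tie
        if s > 0 or (s == 0 and prev > 0):
            lead += 1
    return lead / n

fracs = [lead_fraction(20) for _ in range(20_000)]
extremes = sum(1 for f in fracs if f <= 0.2 or f >= 0.8) / len(fracs)
middle = sum(1 for f in fracs if 0.4 <= f <= 0.6) / len(fracs)
# the extreme fractions turn out far more probable than the middle ones
```

The U-shape of Eq. (2.2.9) shows up directly: a near-even split of the lead is the rarest outcome.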

2.2.4 Delta Function and Deterministic Representation

Deterministic representation can be considered as a special case of probability representation when the probability distribution is a delta function. The delta function is very useful in quantum mechanics (Merzbacher 1970).

$$\delta(x - x') = 0 \quad \text{when } x \neq x' \qquad (2.2.10a)$$

$$\delta(0) = \infty \qquad (2.2.10b)$$

$$\int_{-\infty}^{\infty} \delta(x)\,dx = 1 \qquad (2.2.10c)$$

$$f(x') = \int_{-\infty}^{\infty} f(x)\,\delta(x - x')\,dx \qquad (2.2.10d)$$

There are some useful properties of the delta function:

$$\delta(-x) = \delta(x) \qquad (2.2.11a)$$

$$f(x)\,\delta(x - a) = f(a)\,\delta(x - a) \qquad (2.2.11b)$$

$$\delta(ax) = \frac{1}{|a|}\,\delta(x) \qquad (2.2.11c)$$

$$\delta\big((x - a)(x - b)\big) = \frac{1}{|a - b|}\left[\delta(x - a) + \delta(x - b)\right] \qquad (2.2.11d)$$


The delta function may have the following representations in terms of a limiting process (Merzbacher 1970, Stremler 1982):

(a) A harmonic wave:

$$\delta(x - x') = \frac{1}{2\pi}\int_{-\infty}^{\infty} \exp\{i\omega(x - x')\}\,d\omega \qquad (2.2.12a)$$

where i = √(−1).

(b) A Gaussian pulse:

$$\delta(x) = \lim_{\sigma \to 0}\,\frac{1}{\sigma\sqrt{\pi}}\,\exp\!\left\{-\frac{x^2}{\sigma^2}\right\} \qquad (2.2.12b)$$

(c) A two-sided exponential:

$$\delta(x) = \lim_{\sigma \to 0}\,\frac{1}{\sigma}\,\exp\!\left\{-\frac{2|x|}{\sigma}\right\} \qquad (2.2.12c)$$

(d) A Cauchy distribution pulse:

$$\delta(x) = \lim_{\sigma \to 0}\,\frac{1}{\pi}\,\frac{\sigma}{x^2 + \sigma^2} \qquad (2.2.12d)$$

(e) A sinc function:

$$\delta(x) = \lim_{\sigma \to 0}\,\frac{\sin(x/\sigma)}{\pi x} \qquad (2.2.12e)$$

(f) A rectangular pulse function:

$$\delta(x) = \lim_{\varepsilon \to 0}\,\frac{1}{\varepsilon}\left[u\!\left(x + \frac{\varepsilon}{2}\right) - u\!\left(x - \frac{\varepsilon}{2}\right)\right] \qquad (2.2.12f)$$

where the unit step function is:

$$u(x - x') = 1 \ \text{ when } x \geq x', \qquad u(x - x') = 0 \ \text{ when } x < x' \qquad (2.2.12g)$$
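The sifting property of Eq. (2.2.10d) can be checked numerically with the Gaussian-pulse limiting representation above: for small σ the integral of f(x) against the pulse approaches f(0). A minimal sketch, with the grid, interval, and σ = 0.01 chosen purely for illustration:

```python
import math

# Gaussian delta sequence: (1/(sigma*sqrt(pi))) * exp(-x^2/sigma^2).
def gaussian_pulse(x, sigma):
    return math.exp(-(x / sigma) ** 2) / (sigma * math.sqrt(math.pi))

def sift(f, sigma, lo=-1.0, hi=1.0, steps=100_001):
    """Trapezoid-rule approximation of the sifting integral of f(x)*delta(x)."""
    h = (hi - lo) / (steps - 1)
    total = 0.0
    for i in range(steps):
        x = lo + i * h
        w = 0.5 if i in (0, steps - 1) else 1.0
        total += w * f(x) * gaussian_pulse(x, sigma) * h
    return total

approx = sift(math.cos, sigma=0.01)  # approaches cos(0) = 1 as sigma -> 0
```

Shrinking σ further (with a correspondingly finer grid) drives the error toward zero, which is exactly the limiting process the representations above describe.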


Figure 2.4 The relationship between deterministic and probabilistic representations. (a) A bifurcation tree of a deterministic system. (b) The trajectory and probability distribution representation.

In economic dynamics, both a deterministic equation and a stochastic equation can be considered as an average description of a large number of systems. If the probability has a unimodal distribution, the time-path of its average position can be described by a trajectory.

For a nonlinear deterministic system, bifurcation may occur, as in the bifurcation tree of Figure 2.4a. At a bifurcation point of a deterministic equation, the corresponding probability distribution must have a polarized distribution, as shown in Figure 2.4b.

2.3 Linear Representations in the Time and Frequency Domain

In theoretical and empirical analysis, the time domain representation is applied in correlation analysis and the frequency domain representation is used in spectral analysis. They are useful tools for time series analysis for deterministic and stochastic systems.

Theoretically, we can consider spectral analysis as a representation in a functional space whose basis functions are harmonic waves, while the basis functions of correlation analysis are delta functions. Therefore, linear representations of a time series have only two building blocks: the harmonic cycle and white noise. These two extreme models have a remarkable similarity: both of them can be described by delta functions. The image of a sine wave is a delta function in spectral space, and the image of white noise is a delta function in correlation space.

2.3.1 Time Domain Representation in Correlation Analysis

It is very useful to introduce the concepts of covariance and correlation for two functions X(t) and Y(t):

$$\mathrm{Cov}[X, Y] = E[(X - \mu_X)(Y - \mu_Y)] \qquad (2.3.1)$$

$$\mathrm{Cor}[X, Y] = \frac{\mathrm{Cov}[X, Y]}{\sigma_X\,\sigma_Y} \qquad (2.3.2)$$

If X(t) and Y(t) are independent, their covariance and correlation are zero. We can also define the autocorrelation R(τ) of a function X(t):

$$R(\tau) = \mathrm{Cor}(X_t,\ X_{t+\tau}) \qquad (2.3.3)$$

We should note that correlation analysis is useful for stationary time series. For non-stationary time series, the time trend will introduce spurious correlations.

For a Gaussian white noise ξ_i, its autocorrelation is a delta function:

$$\mathrm{Cor}(\xi_j, \xi_k) = \delta_{j,k} \qquad (2.3.4)$$

Here, δ_{j,k} = 0 when j ≠ k, and δ_{j,j} = 1 when j = k.

For a harmonic function X(t) = A sin(ωt + φ), its autocorrelation is a cosine function (Otnes and Enochson 1972):

$$\begin{aligned}
\mathrm{Cor}(X_t, X_{t+\tau}) &= \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} X^*(t)\,X(t+\tau)\,dt \\
&= \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} A^2\sin(\omega t + \phi)\,\sin(\omega t + \omega\tau + \phi)\,dt \\
&= \lim_{T\to\infty}\frac{A^2}{2T}\int_{-T/2}^{T/2} \left[\cos(\omega\tau) - \cos(2\omega t + \omega\tau + 2\phi)\right]dt \\
&= \frac{A^2}{2}\cos(\omega\tau) - \lim_{T\to\infty}\frac{A^2}{2T}\int_{-T/2}^{T/2} \cos(2\omega t + \omega\tau + 2\phi)\,dt \\
&= \frac{A^2}{2}\cos(\omega\tau) = \frac{A^2}{2}\cos\!\left(\frac{2\pi\tau}{P}\right)
\end{aligned} \qquad (2.3.5)$$

Here, the angular frequency ω = 2πf = 2π/P, f is the frequency, and P the period. The phase information φ is lost in the correlation function. The numerical examples of a Gaussian white noise N(0, 1) and a sine wave with period one are shown in Figure 2.5.

Figure 2.5 Autocorrelations. The solid line is white noise. The broken line is a sine wave with period P = 1.

Note that the autocorrelation of the sine wave decays slowly because of the finite time window T. We will discuss the role of the time window later.

If we measure the time lag by the first zero of the autocorrelation, T₀, it is near zero for white noise and near P/4 for a sine wave with period P.
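The contrast between Eqs. (2.3.4) and (2.3.5) can be sketched with sample autocorrelations of white noise and of a sine wave with period P measured in samples (the series length, P = 100, the lags, and the seed are illustrative assumptions):

```python
import math
import random

# Sample autocorrelation R(lag), Eq. (2.3.3), for a (near) zero-mean series.
def autocorr(xs, lag):
    n = len(xs) - lag
    cov = sum(xs[t] * xs[t + lag] for t in range(n)) / n
    var = sum(x * x for x in xs) / len(xs)
    return cov / var

random.seed(4)
T, P = 20_000, 100
noise = [random.gauss(0.0, 1.0) for _ in range(T)]
sine = [math.sin(2 * math.pi * t / P) for t in range(T)]

r_noise = autocorr(noise, 10)       # near 0: white noise is delta-correlated
r_quarter = autocorr(sine, P // 4)  # near cos(pi/2) = 0 at lag P/4
r_full = autocorr(sine, P)          # near cos(2*pi) = 1 at lag P
```

The sine wave's autocorrelation traces out the cosine of Eq. (2.3.5), first crossing zero at lag P/4, while the noise decorrelates immediately.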

2.3.2 The Frequency Domain Representation in Spectral Analysis

In a linear (vector) space, a vector x can be represented by a combination of n elementary vectors if those elementary vectors form a complete set of orthogonal basis vectors. Similarly, a function f(t) can be expanded in a functional space by a set of orthogonal and complete functions B_n(t) when t₁ ≤ t ≤ t₂:

$$x = \sum_{n=1}^{N} a_n\,b_n \qquad (2.3.6a)$$

$$f(t) = \sum_{n=-\infty}^{\infty} a_n\,B_n(t) \qquad (2.3.6b)$$

$$a_n = \frac{1}{\lambda_n}\int_{t_1}^{t_2} w(t)\,B_n^*(t)\,f(t)\,dt \qquad (2.3.6c)$$

Here, w(t) is the weight function.

There are many orthogonal polynomial functions in the mathematical literature, such as the Hermite polynomials with weight function e^{−t²}, the Laguerre polynomials with weight function e^{−t}, and the Legendre polynomials with a unit weight function.

Among all the orthogonal functions, the Fourier harmonic functions {S_n(t) = e^{inωt}}, or equivalently {sin(nωt), cos(nωt)}, play an important role and have a wide range of applications. There are several reasons for their popularity in the science literature. In physics, the harmonic oscillator and the plane waves in electromagnetic theory and quantum mechanics serve as building blocks in classical and modern physics. In mathematics, harmonic functions are the basis for complex analysis and the theory of generalized functions. In information theory, harmonic signals can be transmitted in the most efficient way.

The Fourier transform is defined by:

$$F[f(t)] = S(\omega) = \frac{1}{\sqrt{2\pi}}\int f(t)\,\exp\{-i\omega t\}\,dt \qquad (2.3.7a)$$

and

$$F^{-1}[S(\omega)] = f(t) = \frac{1}{\sqrt{2\pi}}\int S(\omega)\,\exp\{i\omega t\}\,d\omega \qquad (2.3.7b)$$

The Levy distribution L(x) can be better described through its Fourier transform (Montroll and Shlesinger 1984):

$$L(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \exp\{-i\omega x - a|\omega|^{\alpha}\}\,d\omega \qquad (2.3.8)$$

Here, 0 < α < 2. For the Gaussian distribution, α = 2. For the Cauchy distribution, α = 1.

2.3.3 The Power Spectrum and The Wiener-Khinchin Theorem

The average power of f(t) is:

$$Y = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} |f(t)|^2\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{1}{T}\,|S(\omega)|^2\,d\omega \qquad (2.3.9)$$

Therefore, we have the spectral density G_f(ω):

$$Y = \frac{1}{2\pi}\int_{-\infty}^{\infty} G_f(\omega)\,d\omega \qquad (2.3.10)$$

$$G_f(\omega) = \lim_{T\to\infty}\frac{1}{T}\,|S(\omega)|^2 \qquad (2.3.11)$$

For a stationary time series with finite mean and variance, the time average is equal to its ensemble average, so that

$$\lim_{T\to\infty}\frac{1}{T}\int f^*(t)\,f(t - \tau)\,dt = \langle f^*(t)\,f(t - \tau)\rangle = R(\tau) \qquad (2.3.12)$$

Then we have the Wiener-Khinchin theorem (Reichl 1998):

$$G_f(\omega) = \int R(\tau)\,\exp\{-i\omega\tau\}\,d\tau = F[R(\tau)] \qquad (2.3.13)$$


For the Gaussian white noise, its correlation is a delta function, so its power spectrum is a constant:

$$F[\delta(t)] = \frac{1}{2\pi} \qquad (2.3.14)$$

For a harmonic oscillation, its correlation function is (A²/2) cos(ω′τ) (see Equation (2.3.5)), so its power spectrum is:

$$F\!\left[\frac{A^2}{2}\cos(\omega'\tau)\right] = \frac{A^2\pi}{2}\left[\delta(\omega + \omega') + \delta(\omega - \omega')\right] \qquad (2.3.15)$$

Therefore, we have two frequencies ±ω′ for a cosine correlation.

For a correlation with a Gaussian wave packet, its power spectrum still has a Gaussian form. However, a fat bell curve will turn into a sharp one and vice versa:

$$F\!\left[e^{-\alpha t^2}\right] = \frac{1}{\sqrt{2\alpha}}\,e^{-\omega^2/4\alpha} \qquad (2.3.16)$$

From the Wiener-Khinchin theorem, we can see that correlation analysis and spectral analysis are closely related to each other, although they originated separately in the stochastic and deterministic approaches.
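A discrete sketch of Eqs. (2.3.11) and (2.3.15): the power of a harmonic signal concentrates entirely at its own frequency bin, the discrete analogue of the delta functions at ±ω′ (the series length n = 512, period P = 32, and the off-peak bin are illustrative assumptions):

```python
import cmath
import math

# Periodogram |S(omega_k)|^2 / n at DFT frequency index k,
# a discrete stand-in for Eq. (2.3.11).
def periodogram(xs, k):
    n = len(xs)
    s = sum(x * cmath.exp(-2j * math.pi * k * t / n) for t, x in enumerate(xs))
    return abs(s) ** 2 / n

n, P = 512, 32
sine = [math.sin(2 * math.pi * t / P) for t in range(n)]

peak = periodogram(sine, n // P)     # at the sine's own frequency bin: n/4
off = periodogram(sine, n // P + 5)  # away from it: essentially zero
```

For an exact-bin sine, the peak value is n/4 (here 128) and every other bin vanishes up to floating-point error, mirroring the two delta spikes of Eq. (2.3.15).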

2.4 Measures of the Deviations from the Gaussian Distribution

An equilibrium process is characterized by a unimodal distribution with finite variance. Long tails and long correlations indicate a deviation from equilibrium. Under far-from-equilibrium conditions, U-shaped and multimodal distributions are observed in natural and social phenomena (Chen 1987b, Wen, Chen, and Zhang 1996). A multimodal distribution may appear near a bifurcation point or in a transition regime. For example, option prices during the stock market crash have a bimodal distribution (Chen and Goodhart 1998).

2.4.1 Locations of a Unimodal Distribution: the Mode, Mean, and Median

In addition to the arithmetic average, we have other useful measures of the central tendency of a distribution. The median X_m is the value of the variate that divides the total frequency into two equal halves:

$$\int_{-\infty}^{X_m} f(x)\,dx = \int_{X_m}^{\infty} f(x)\,dx = \frac{1}{2} \qquad (2.4.1)$$

The position of a local maximum in a probability distribution is called a mode X o . Similarly, we can define an antimode X a for a local minimum in distribution (Kendall 1987).

For a Gaussian distribution, we have μ = X_m = X_o because of its symmetry. For a skewed unimodal distribution, the mean, median, and mode are different.
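The splitting of mean, median, and mode under skewness can be sketched with the exponential distribution, whose mode is 0, median is μ ln 2, and mean is μ (the choice of the exponential, μ = 2, the sample size, and the seed are illustrative assumptions):

```python
import math
import random
import statistics

# Exponential sample: mode 0 < median mu*ln(2) < mean mu.
random.seed(5)
mu = 2.0
draws = [random.expovariate(1 / mu) for _ in range(100_000)]

mean = statistics.fmean(draws)     # -> mu = 2.0
median = statistics.median(draws)  # -> mu * ln 2, about 1.386
# the density's mode sits at 0, giving the positive-skew ordering
# mean > median > mode described in Section 2.4.2
```

For a symmetric Gaussian sample, the same three estimates would coincide.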

2.4.2 Deviations from the Gaussian Distribution: Skewness and Kurtosis

Current statistics has several indicators of deviation from the Gaussian distribution. The deviation of a unimodal distribution from the Gaussian distribution is measured by ratios of moments.

For a Gaussian distribution, its symmetry is characterized by the fact that its odd central moments are zero. For the even moments of a Gaussian distribution, we have:

$$\mu_{2n} = \langle (x - \mu)^{2n}\rangle = \sigma^{2n}\,\frac{(2n)!}{2^n\,n!} \qquad (2.4.2)$$

For N(0, 1), we have μ₂ = 1 and μ₄ = 3.

Therefore, the Gaussian distribution can serve as a benchmark in measuring the skewness β₁ (the degree of distribution asymmetry) and the kurtosis β₂ (the thickness of distribution tails). This is done by comparing the third and fourth moments (Kendall 1987, Greene 1993):

$$\beta_1 = \frac{\mu_3}{\sigma^3} \qquad (2.4.3)$$

$$\beta_2 = \frac{\mu_4}{\sigma^4} - 3 \qquad (2.4.4)$$

For a positive β₁, μ > X_m > X_o, and the upper tail of the unimodal distribution is heavier. For a negative β₁, μ < X_m < X_o, and the lower tail is heavier.

A distribution is called mesokurtic for β₂ = 0, leptokurtic (thin) for β₂ > 0, and platykurtic (flat) for β₂ < 0. For a Gaussian distribution, β₂ is zero.
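Eqs. (2.4.3) and (2.4.4) can be sketched as sample statistics, with the Gaussian as the zero benchmark and the exponential distribution as a skewed, fat-tailed contrast (the distributions, sample sizes, and seed are illustrative assumptions):

```python
import random

# Sample skewness beta_1 = mu3/sigma^3 and kurtosis beta_2 = mu4/sigma^4 - 3.
def skew_kurt(xs):
    n = len(xs)
    mu = sum(xs) / n
    m2 = sum((x - mu) ** 2 for x in xs) / n
    m3 = sum((x - mu) ** 3 for x in xs) / n
    m4 = sum((x - mu) ** 4 for x in xs) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2 - 3

random.seed(6)
gauss = [random.gauss(0.0, 1.0) for _ in range(200_000)]
expo = [random.expovariate(1.0) for _ in range(200_000)]

b1_g, b2_g = skew_kurt(gauss)  # both near 0: mesokurtic benchmark
b1_e, b2_e = skew_kurt(expo)   # near 2 and 6: skewed and leptokurtic
```

The exponential's population values β₁ = 2 and β₂ = 6 quantify how far its shape sits from the Gaussian benchmark.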

2.4.3 Information Entropy: A Measure of Homogeneity

The information entropy is a measure of homogeneity. It starts from a consideration of a two-state distribution. Given n discrete states (n = 1, 2, . . . , N) and their probabilities p_n, the information entropy H is defined as follows (Shannon and Weaver 1949):

$$H = -\sum_{n=1}^{N} p_n\,\log(p_n) \qquad (2.4.5)$$

$$H \leq H_{\max} = \log N \quad \text{when } p_n = \frac{1}{N} \qquad (2.4.6)$$

For N = 2, we can set p₁ = p and p₂ = q = 1 − p. Therefore, we have:

$$H(p) = -p\log p - q\log q = -p\log p - (1 - p)\log(1 - p)$$

$$\frac{\partial H}{\partial p} = \log\frac{1 - p}{p} = 0 \quad \text{when } H(p) = H_{\max}$$

Therefore,

$$H_{\max} = H\!\left(p = \frac{1}{2}\right) = \log 2 \qquad (2.4.7)$$
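A minimal sketch of Eqs. (2.4.5) through (2.4.7): the two-state entropy peaks at the homogeneous case p = 1/2 and drops for any skewed split (the example probabilities are arbitrary illustrations):

```python
import math

# Shannon entropy H = -sum p log p, with the convention 0*log(0) = 0.
def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

h_max = entropy([0.5, 0.5])   # log 2: the homogeneous two-state case
h_skew = entropy([0.9, 0.1])  # smaller: a less homogeneous distribution
```

Any departure from the uniform split lowers H, which is exactly the sense in which entropy measures homogeneity.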


The information entropy for two discrete states is shown in Figure 2.6.

From the principle of maximum information entropy, we can find the corresponding distribution under a given condition (Goldman 1953). For a discrete distribution with a given mean m, the Poisson distribution has the maximum entropy.

Let us consider the continuous distribution:

$$H = -\int p(x)\,\log[p(x)]\,dx \qquad (2.4.8a)$$

$$\int p(x)\,dx = 1 \qquad (2.4.8b)$$

Under the condition of a constant variance σ², the information entropy reaches its maximum when the distribution is a Gaussian:

$$p(x) = N(0, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/2\sigma^2} \qquad (2.4.9a)$$

$$H_{\max} = \log\!\left[\sqrt{2\pi e}\,\sigma\right] \qquad (2.4.9b)$$

and

$$\langle x^2\rangle = \int x^2\,p(x)\,dx = \sigma^2 \qquad (2.4.9c)$$

We can apply the method of Lagrange multipliers in the calculus of variations:

$$F = H + \lambda_1\int p\,dx + \lambda_2\int x^2\,p\,dx$$

$$\frac{\partial F}{\partial p} = -\log p - 1 + \lambda_1 + \lambda_2 x^2 = 0 \quad \text{when } H \to H_{\max} \qquad (2.4.10)$$

$$p = e^{\lambda_1 - 1}\,e^{\lambda_2 x^2} \qquad (2.4.11)$$

Substituting this value into Eqs. (2.4.8b) and (2.4.9c) to determine λ₁ and λ₂, we get Eq. (2.4.9a).

Similarly, we can find other distributions under different conditions.

For example, under the condition of a constant mean μ (with x > 0), the distribution with maximum entropy is the exponential distribution:

$$p(x) = \frac{1}{\mu}\,e^{-x/\mu} \qquad (2.4.12a)$$

$$H_{\max} = \log(e\mu) \qquad (2.4.12b)$$

when

$$\langle x\rangle = \int_0^{\infty} x\,p(x)\,dx = \mu, \qquad x > 0 \qquad (2.4.12c)$$

Under the condition of a limited peak range, say |x| ≤ S, the maximum-entropy distribution is the uniform distribution:

$$p(x) = \frac{1}{2S} \qquad (2.4.13a)$$

$$H_{\max} = \log(2S) \qquad (2.4.13b)$$

$$\int_{-S}^{S} p(x)\,dx = 1 \qquad (2.4.13c)$$

One can prove that, under a constant information entropy H, the Gaussian distribution has the smallest variance among all one-dimensional probability distributions.
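A quick numerical comparison of Eq. (2.4.9b) with Eq. (2.4.13b) illustrates the maximum-entropy property: at equal variance, the Gaussian carries more entropy than the flat distribution on [−S, S] (the value S = 1 is an arbitrary illustration):

```python
import math

def gaussian_entropy(sigma):
    return math.log(math.sqrt(2 * math.pi * math.e) * sigma)  # Eq. (2.4.9b)

def uniform_entropy(S):
    return math.log(2 * S)                                    # Eq. (2.4.13b)

S = 1.0
sigma = S / math.sqrt(3)  # the uniform density on [-S, S] has variance S^2/3
h_gauss = gaussian_entropy(sigma)
h_flat = uniform_entropy(S)
# h_gauss exceeds h_flat at equal variance, as the maximum-entropy
# principle for fixed variance requires
```

Equivalently, at equal entropy the Gaussian would need the smaller variance, which is the dual statement made above.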


The meaning of information entropy is sometimes confusing, since information has different implications under different situations. For example, the concept of average measures the central tendency of a distribution, the variance measures the range of the central tendency, and the information entropy measures the homogeneity of a distribution. The principle of maximum entropy indicates the tendency towards a homogeneous state without any structure.

There is a close relation between thermodynamic entropy in equilibrium statistical mechanics and information entropy in information theory. However, there is difficulty in defining entropy in nonequilibrium statistical mechanics (Penrose 1979). Correlations and non-Markovian properties make the description of an open system much more complex than that of a closed or isolated system.

From the perspective of nonequilibrium physics, economies are open systems with expanding energy sources. The Gaussian distribution may appear in an isolated system with conservation of energy, or in a closed system with a stable environment. Under continuing flows of energy and negative entropy, probability distributions in open economies will significantly deviate from the Gaussian distribution. Observed deviations from the Gaussian distribution can therefore serve as a measure of how far real economies are out of equilibrium.

The following factors may create nonequilibrium conditions: a finite number of economic players (such as the dominance of large firms), a finite range of variables (such as credit ceilings and resource limitations), long-range correlations (such as credit history and social relations), and nonlinear interactions (such as overshooting and positive feedback in economic control). The statistics of skewness and kurtosis provide a quantitative measure of the degree of out-of-equilibrium. The most serious case is the Levy distribution with infinite variance. So far, the discussion in this section has been based on stable distributions that are unimodal. As far as we know, there is no theoretical foundation to restrict economic study to unimodal distributions. Empirical observations of economic instabilities need a broader scope in mathematical modeling.

2.4.4 Polarization and Unpredictable Uncertainty: Multimodal and U-Shaped Distributions

A partial deviation from the central tendency can be measured by deviations from a Gaussian distribution. A radical deviation from the central tendency cannot be well described by any unimodal distribution. Multimodal distributions are observed in far-from-equilibrium situations, such as rapid chemical reactions and the transition period in multi-staged growth (Chen 1987b). The bimodal and U-shaped distributions are discussed in critical phenomena such as ferromagnetism and the fashion model of public opinion (Haken 1977, Chen 1991). The bimodal distribution is also observed in stock price changes and in option prices (Osborne 1959, Jackwerth and Rubinstein 1996).

The interest in bimodal and multimodal distributions mainly comes from studies of time-evolution problems. The origin of the probability theory is static theory of
