Multi-Scale Simulation

Generalized limit theorem

For a set of independent and identically distributed (i.i.d.) random variables X_i, where \stackrel{d}{=} denotes equality in probability distribution, there exist coefficients b_n > 0 and a_n such that:

\sum \limits_{i=1}^{n} X_i \stackrel{d}{=} b_n X_1 + a_n

For a Gaussian variable with a finite variance, this corresponds to b_n = \sqrt{n} and a_n = (n - \sqrt{n}) \langle X_1 \rangle, where \langle X_1 \rangle is the ensemble average. The limit of the above sum corresponds to the generalized limit theorem:

X = \lim \limits_{n \rightarrow \infty } \frac{\Big( \sum \limits_{i=1}^{n} X_i \Big) - a_n}{b_n}

This limit converges to a Gaussian variable X even if the variables X_i are not Gaussian, provided they have a finite variance: \langle X_i^2 \rangle < \infty.
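
This convergence can be checked numerically by drawing i.i.d. exponential variables (non-Gaussian, but with finite variance) and rescaling their sum with the coefficients of the Gaussian case. The sketch below assumes NumPy; the exponential law, the number of terms n and the number of realisations are illustrative choices, not taken from the course.

# Numerical sketch of the Gaussian limit (alpha = 2): sums of non-Gaussian,
# finite-variance i.i.d. variables, rescaled with a_n = (n - sqrt(n))*<X_1>
# and b_n = sqrt(n), approach a Gaussian variable.
import numpy as np

rng = np.random.default_rng(0)

n = 1_000          # number of terms in each sum
n_real = 10_000    # number of independent realisations of the sum
mu = 2.0           # ensemble average <X_1> of the exponential variables

# i.i.d. exponential variables: clearly non-Gaussian, but with finite variance
x = rng.exponential(scale=mu, size=(n_real, n))

# normalisation coefficients of the Gaussian case (alpha = 2)
a_n = (n - np.sqrt(n)) * mu
b_n = np.sqrt(n)

X = (x.sum(axis=1) - a_n) / b_n

# the limit variable is Gaussian with the mean and variance of X_1
print("mean     :", X.mean(), "(expected", mu, ")")
print("variance :", X.var(), "(expected", mu**2, ")")
print("skewness :", ((X - X.mean()) ** 3).mean() / X.std() ** 3, "(expected ~0)")

The skewness of a single exponential variable is 2; after summing n terms it shrinks roughly as 2/\sqrt{n}, which is the signature of the Gaussian limit.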

A generalization of the Gaussian case was made by Lévy (1954) by relaxing the assumption of finite variance of the variables X_i (which means that only the statistical moments below a limiting order remain finite). Lévy introduced the order of divergence \alpha of the moments of the variables X_i:

\langle | X_{i} |^q \rangle < \infty \quad \text{for} \quad q < \alpha

\langle | X_{i} |^q \rangle = \infty \quad \text{for} \quad q \geqslant \alpha

with

b_n = n^{1/ \alpha} \quad (\text{for} \ \alpha > 1); \quad a_n = (n - n^{1/ \alpha}) \langle X_{1} \rangle

This order of divergence \alpha of the moments is called the Lévy index.
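
The divergence of the moments beyond the Lévy index can also be observed empirically. The sketch below assumes NumPy and uses a Pareto law of tail exponent \alpha = 1.5 (an illustrative choice): the sample moment of order q = 1 < \alpha stabilises as the sample grows, while the moment of order q = 2 \geqslant \alpha does not.

# Numerical sketch of the order of divergence: for Pareto-distributed samples
# with tail exponent alpha, the sample moment <|X|^q> stabilises with the
# sample size when q < alpha but keeps growing when q >= alpha.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5    # Levy index: moments of order q >= 1.5 diverge

for n in (10**4, 10**5, 10**6, 10**7):
    x = 1.0 + rng.pareto(alpha, size=n)   # Pareto samples, tail ~ x^(-alpha-1)
    m1 = np.mean(x ** 1.0)                # q = 1 < alpha : converges (to 3 here)
    m2 = np.mean(x ** 2.0)                # q = 2 >= alpha: grows without bound
    print(f"n = {n:>8d}   <|X|^1> = {m1:7.3f}   <|X|^2> = {m2:12.1f}")

The first-order moment settles near \alpha / (\alpha - 1) = 3, while the second-order estimate is dominated by the largest value drawn and never converges, which is the empirical counterpart of the condition above.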
