- Random walk
- Martingale
- Brownian motion

## Markov process

A Markov process describes a process whose evolution is governed by the outcomes of random experiments.

A random experiment, denoted E, is an experiment whose outcome is subject to chance. We denote by Ω the set of all possible results of this experiment; Ω is called the universe, the space of possibilities, or the state space. A result of E is an element of Ω, denoted ω.

For example, in the heads-or-tails game, the universe of the experiment "tossing a coin" is Ω = {H, T}. For the experiment "tossing two coins one after the other", the universe is Ω = {HH, HT, TH, TT}.

A random event A related to the experiment E is a subset of Ω for which, once the experiment has been performed, we can say whether it is realized or not. In the previous example, the random event "getting heads" in heads or tails is easily observed by tossing a coin. A random event is a set and therefore has the main properties of set theory.

Elementary operations on the subsets of a set:

- Intersection: the intersection of the sets A and B, denoted A ∩ B, is the set of points belonging to both A and B.
- Union: the union of the two sets A and B, denoted A ∪ B, is the set of points belonging to at least one of the two sets.
- Empty set: the empty set, denoted Ø, is the set containing no element.
- Disjoint sets: the sets A and B are said to be disjoint if A ∩ B = Ø.
- Complement: the complement of the set A ⊂ Ω in Ω, denoted A^{c} or Ω \ A, is the set of elements not belonging to A. The sets A and A^{c} are disjoint.

Set operations on events:

- Not: the realization of the event contrary to A is represented by A^{c}: the result of the experiment does not belong to A.
- And: the event "A and B are realized" is represented by A ∩ B; the result of the experiment lies in both A and B.
- Or: the event "A or B is realized" is represented by A ∪ B; the result of the experiment lies in A, in B, or in both.
- Implication: the fact that the realization of event A leads to the realization of B translates to A ⊂ B.
- Incompatibility: if A ∩ B = Ø, A and B are said to be incompatible; a result of the experiment cannot be both in A and in B.
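These operations map directly onto Python's built-in `set` type; a small sketch, using an illustrative two-coin universe and two events of my choosing:

```python
# Universe for two coin tosses (H = heads, T = tails)
omega = {"HH", "HT", "TH", "TT"}

A = {"HH", "HT"}  # event: first coin is heads
B = {"HH", "TH"}  # event: second coin is heads

print(A & B)                  # intersection A ∩ B
print(A | B)                  # union A ∪ B
print(omega - A)              # complement of A in Ω
print(A.isdisjoint({"TT"}))   # incompatibility: A ∩ {TT} = Ø ?
```

Python's `&`, `|`, and `-` operators are exactly the intersection, union, and set-difference of the list above.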

With each event, we seek to associate a measure (which we will not define in this course) between 0 and 1, representing the probability that the event occurs. For an event A, this measure is denoted P(A).

Formally, let E be a random experiment with universe Ω. We call a probability measure on Ω (or, more simply, a probability) a map P which associates with any random event A a real number P(A) such that:

(i) For any A such that P(A) exists, we have 0 ≤ P(A) ≤ 1.

(ii) P(Ø) = 0 and P(Ω) = 1.

(iii) A ∩ B = Ø implies that P(A ∪ B) = P(A) + P(B).

When the universe is finite and all outcomes are equally likely, the probability of an event can be computed as P(A) = number of favorable cases / number of possible cases. The "number of cases" is the cardinal of the event or of the universe.
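This counting rule can be checked by enumeration; a minimal sketch for the event "at least one heads" in two coin tosses (the event is my choice, not from the text):

```python
from fractions import Fraction

# Universe for two coin tosses, all outcomes equally likely
omega = ["HH", "HT", "TH", "TT"]

# Event A: "at least one heads"
A = [w for w in omega if "H" in w]

# P(A) = favorable cases / possible cases
p_A = Fraction(len(A), len(omega))
print(p_A)  # 3/4
```

`Fraction` keeps the ratio exact instead of introducing floating-point rounding.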

## Random variables and probability

A random variable is a function whose value depends on the outcome of a random experiment E of universe Ω. A random variable X is said to be discrete if it takes a finite or countable number of values. The set of outcomes ω on which X takes a fixed value x forms the event {ω : X(ω) = x}, which we denote [X = x]. The probability of this event is denoted P(X = x).

The function p_{X} : x → P(X = x) is called the law of the random variable X. If {x_{1}, x_{2}, …} is the set of possible values for X, we have:

p_{X}(x_{i}) ≥ 0 for every i, and p_{X}(x_{1}) + p_{X}(x_{2}) + … = 1.

Let S_{2} be the number of heads obtained when tossing two coins. The set of possible values for S_{2} is {0, 1, 2}. If we equip the universe Ω associated with this random experiment with the uniform probability P, we obtain the law:

P(S_{2} = 0) = 1/4, P(S_{2} = 1) = 1/2, P(S_{2} = 2) = 1/4.
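The law of S_{2} can be recovered by enumerating the four equally likely outcomes and counting heads in each; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

# All equally likely outcomes of two coin tosses
omega = list(product("HT", repeat=2))

def S2(w):
    # S_2(w) = number of heads in outcome w
    return w.count("H")

# Law of S_2: probability of each value by counting
law = {x: Fraction(sum(1 for w in omega if S2(w) == x), len(omega))
       for x in (0, 1, 2)}

# S_2 takes the values 0, 1, 2 with probabilities 1/4, 1/2, 1/4
print(law)
```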

When it exists (expectation is always defined if X takes a finite number of values, or if X takes positive values), we call expectation or mean of a discrete random variable X the quantity denoted E(X) defined by:

E(X) = x_{1} p_{X}(x_{1}) + x_{2} p_{X}(x_{2}) + …

When it exists, we call variance of a discrete random variable X the quantity denoted Var(X) defined by:

Var(X) = E((X − E(X))²) = E(X²) − E(X)².
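Both quantities can be computed directly on the law of S_{2} from the two-coin example (by enumeration its law is 1/4, 1/2, 1/4 on the values 0, 1, 2); a sketch:

```python
from fractions import Fraction

# Law of S_2, the number of heads in two tosses
law = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# E(X) = sum over x of x * P(X = x)
E = sum(x * p for x, p in law.items())

# Var(X) = E(X^2) - E(X)^2
E2 = sum(x**2 * p for x, p in law.items())
Var = E2 - E**2

print(E, Var)  # 1 1/2
```

On average one head per pair of tosses, with variance 1/2, as the formulas predict.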

The basic idea of conditioning is as follows: additional information about the experiment modifies the probability assigned to the event under study.

For example, for a roll of two dice (one red and one blue), the probability of the event "the sum is greater than or equal to 10" is equal to 1/6 without additional information. On the other hand, if we know that the result of the red die is 6, it is equal to 1/2, while it is equal to 0 if the result of the red die is 2.

Let P be a probability measure on Ω and B an event such that P(B) > 0. The conditional probability of A knowing B is the real number P(A | B) defined by:

P(A | B) = P(A ∩ B) / P(B).
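The two-dice numbers above can be checked against this definition by enumeration; a minimal sketch (helper names are illustrative):

```python
from fractions import Fraction
from itertools import product

# Universe: ordered pairs (red, blue), all 36 equally likely
omega = list(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event), len(omega))

A = [w for w in omega if w[0] + w[1] >= 10]   # sum >= 10
B6 = [w for w in omega if w[0] == 6]          # red die shows 6
B2 = [w for w in omega if w[0] == 2]          # red die shows 2

def cond(A, B):
    # P(A | B) = P(A ∩ B) / P(B)
    inter = [w for w in A if w in B]
    return P(inter) / P(B)

print(P(A), cond(A, B6), cond(A, B2))  # 1/6 1/2 0
```

Knowing the red die shows 6 leaves 3 favorable outcomes out of 6, hence 1/2; knowing it shows 2 leaves none, hence 0.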

Events A and B are said to be independent if:

P(A ∩ B) = P(A) P(B).

We can extend independence to n events. Let A_{1}, A_{2}, …, A_{n} be events. They are said to be independent (as a whole) if for all k ∈ {1, …, n} and for any set of distinct integers {i_{1}, …, i_{k}} ⊂ {1, …, n}, we have:

P(A_{i_{1}} ∩ … ∩ A_{i_{k}}) = P(A_{i_{1}}) ⋯ P(A_{i_{k}}).

Random variables can be pairwise independent without being independent as a whole:
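A classic illustration (the events are my choice, not from the text; equivalently one can take their indicator variables): for two fair coin tosses, let A = "first coin heads", B = "second coin heads", C = "both coins show the same face". Each pair is independent, yet the triple is not:

```python
from fractions import Fraction
from itertools import product

omega = list(product("HT", repeat=2))

def P(event):
    return Fraction(len(event), len(omega))

A = [w for w in omega if w[0] == "H"]   # first coin heads
B = [w for w in omega if w[1] == "H"]   # second coin heads
C = [w for w in omega if w[0] == w[1]]  # same face twice

def inter(*events):
    out = omega
    for ev in events:
        out = [w for w in out if w in ev]
    return out

# Pairwise independence holds...
assert P(inter(A, B)) == P(A) * P(B)
assert P(inter(A, C)) == P(A) * P(C)
assert P(inter(B, C)) == P(B) * P(C)

# ...but independence as a whole fails: 1/4 vs 1/8
print(P(inter(A, B, C)), P(A) * P(B) * P(C))
```

A ∩ B ∩ C is the single outcome HH, with probability 1/4 ≠ 1/8 = P(A)P(B)P(C).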