# Discrete time Markov chains

When a phenomenon is random but its future depends only on its present, it can be modeled as a discrete time Markov chain.

> Let (X_n) be a sequence of random variables with values in a finite set of J states, where X_t = j means that the system is in state j at time t. We say that (X_n) is a Markov chain if, for every n and all states i_0, …, i_{n+1}:
>
> P(X_{n+1} = i_{n+1} | X_n = i_n, …, X_0 = i_0) = P(X_{n+1} = i_{n+1} | X_n = i_n)
>
> Such a process is said to be memoryless. This transition probability is denoted p_{i_n i_{n+1}}.

Note that X_0 is not fixed by the definition; its law is called the initial law. The vector of initial probabilities is denoted π, with π_j = P(X_0 = j) for each state j of the finite set, and the π_j summing to 1.

> The vector of transition probabilities out of state i is denoted v_i = (p_{i1}, …, p_{iJ}), with the p_{ij} summing to 1 over j.

The transition probability matrix is obtained by stacking these transition vectors as rows. All its entries are therefore non-negative, and each row sums to 1.
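These two row properties are easy to check numerically. A minimal sketch, with a small 2-state matrix of my own choosing (not one from the text):

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# An illustrative 2-state transition matrix: non-negative entries, rows sum to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

# A product of stochastic matrices is again stochastic: each row of P^2 sums to 1.
P2 = mat_mul(P, P)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P2)
```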
The powers of a transition matrix (also called a stochastic matrix) are themselves stochastic matrices.

## Homogeneous discrete time Markov chains

> A Markov chain is said to be homogeneous in time if the transition probabilities are unaffected by a translation in time, that is, p_{ij} does not depend on n: the transition probabilities are stationary over time.

Let's take an example. As long as a player has money, he plays, wagering £1 each round. He wins £1 with probability p and loses his stake with probability (1 − p), with p between 0 and 1. The game ends when he has £3.

We can define four states, 0, 1, 2, 3, representing the money he has. The transition matrix is as follows:

![transition matrix of the gambler's chain](https://complex-systems-ai.com/wp-content/uploads/2016/02/markov5.png)

A discrete-time Markov chain may also be given an initial law, presented as a stochastic vector (its entries sum to 1).
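The gambler's chain can be simulated directly. The transition matrix below is reconstructed from the rules of the example (states £0 and £3 are absorbing; from £1 or £2 the player moves up with probability p, down with probability 1 − p); the simulation itself is a sketch, not part of the original text:

```python
import random

p = 0.5
P = [
    [1.0,   0.0,   0.0, 0.0],  # broke: the game is over
    [1 - p, 0.0,   p,   0.0],
    [0.0,   1 - p, 0.0, p],
    [0.0,   0.0,   0.0, 1.0],  # reached £3: the game is over
]

def play(start, rng, max_steps=1000):
    """Follow the chain from `start` until an absorbing state is reached."""
    state = start
    for _ in range(max_steps):
        if state in (0, 3):
            return state
        state = rng.choices(range(4), weights=P[state])[0]
    return state

rng = random.Random(0)
# Starting with £1 and a fair game (p = 0.5), the player reaches £3
# about one time in three.
wins = sum(play(1, rng) == 3 for _ in range(10000)) / 10000
```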
This law represents the distribution at the origin.

## Representation of Markov chains in discrete time

> The graph associated with a [Markov process](https://complex-systems-ai.com/en/markov-process/) has one vertex for each state of the finite state set, and one arc for each possible transition p_{ij}.

![graph of a Markov chain](https://complex-systems-ai.com/wp-content/uploads/2016/02/markovchain.png)

![graph of a Markov chain](https://complex-systems-ai.com/wp-content/uploads/2016/02/markov4.png)

Let Q denote the transition matrix. A sequence of states (x_1, x_2, …, x_m) defines a path of length m from x_1 to x_m in the graph associated with the homogeneous Markov chain if and only if Q(x_1, x_2) Q(x_2, x_3) … Q(x_{m−1}, x_m) > 0.

To simulate the first states of a homogeneous discrete time Markov chain (X_n) with finite state space X = {1, …, N}, described only by its initial law and its transition matrix Q, we can use the following algorithm:

![simulation algorithm](https://complex-systems-ai.com/wp-content/uploads/2018/06/markov1.png)

The probability of going from a state i to a state j in n steps is read off the n-th power of the transition matrix: the answer is Q^n(i, j).

## Reduced graphs of discrete time Markov chains

A state j is accessible from a state i if there is a strictly positive probability of reaching state j from state i in a finite number of transitions. From the point of view of [graph theory](https://complex-systems-ai.com/en/graph-theory-2/), j is reachable from i if there is a path from i to j.

> If state j is accessible from state i and, conversely, state i is accessible from state j, then we say that states i and j communicate.
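Accessibility can be checked directly on the transition graph with a breadth-first search; a minimal sketch (the 3-state matrix is illustrative, not from the text):

```python
from collections import deque

def accessible(Q, i, j):
    """True if state j is reachable from state i through a sequence of
    transitions of strictly positive probability."""
    seen, queue = {i}, deque([i])
    while queue:
        s = queue.popleft()
        if s == j:
            return True
        for t, prob in enumerate(Q[s]):
            if prob > 0 and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

def communicate(Q, i, j):
    """States i and j communicate if each is accessible from the other."""
    return accessible(Q, i, j) and accessible(Q, j, i)

# Illustrative chain: states 0 and 1 communicate; state 2 is absorbing,
# so it is reachable from 0 but nothing is reachable from it.
Q = [[0.5, 0.4, 0.1],
     [0.3, 0.7, 0.0],
     [0.0, 0.0, 1.0]]
assert communicate(Q, 0, 1)
assert accessible(Q, 0, 2) and not accessible(Q, 2, 0)
```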
This means that i and j lie on a common circuit.

A reduced graph is a partition of a Markov chain into equivalence classes such that all the states of a class communicate with one another.

> The equivalence classes are of the following kinds:
>
> - a class is said to be transient if it is possible to leave it, but in that case the process can never return to it;
> - a class is said to be recurrent, or persistent, if it cannot be left. If a [recurrent class](https://complex-systems-ai.com/en/markov-process/recurrence-and-transition-criteria/) consists of a single state, that state is said to be absorbing.

![reduced graph example](https://complex-systems-ai.com/wp-content/uploads/2016/02/markov1.png)

If the partition into equivalence classes produces only one recurrent class, the Markov chain is said to be irreducible. A Markov chain has at least one recurrent class.

## Example on discrete time Markov chains

We are interested in the development of a natural forest on a plot in a temperate region. Our model has 3 states.
State 1 is vegetation made up of grasses or other species with a low carbon balance; state 2 corresponds to shrubs, whose rapid development requires maximum sunshine and whose carbon yield is maximal; state 3 is that of larger trees able to grow in a semi-sunny environment (considered forest). Denoting these three states h, a, f (for grass, shrubs, forest), the set of possible states for a given point of the plot is S = {h, a, f}. On the plot, a large number of points laid out on a regular grid are marked on the ground, and the state of the vegetation at each of these points is recorded at fixed time intervals. This type of program is a [cellular automaton](https://complex-systems-ai.com/en/language-theory/).

By observing the evolution over one time interval, one can determine, for each state i ∈ S, the proportion of points that passed to state j ∈ S, and denote this proportion p_{ij}. If the nine proportions obtained this way change little from one time interval to the next, we can assume them constant over time and read them as the probabilities, for any point, of passing from state i to state j over one time interval. Suppose, for example, that on this plot these probabilities are as follows:

![transition probabilities of the forest model](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba31.png)

If X_0 denotes the state of a point at time t = 0 and X_1 the state of the same point at t = 1, then, for example, the probability of passing from the shrub state at t = 0 to the forest state at t = 1 is written P(X_1 = f | X_0 = a) and is equal to 0.4.

The set of states S and the transition matrix P constitute an example of a Markov chain. We can also represent this Markov chain by the following graph:

![graph of the forest Markov chain](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba32.png)

In this model we can compute the probability of any succession of states, called a trajectory of the Markov chain. For example, the probability of observing at a point of the plot the succession of states (h, h, a, f, f) is calculated as follows:

![trajectory probability computation](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba33.png)

where π_0(h) is the probability of being in state h at the initial time t = 0.

Observing the state of the various points of the plot at the initial time t_0 determines the initial proportions of each of the 3 states.
For that, one records for each point the state in which it is found and computes the proportion of points in each possible state. Each proportion can be seen as the probability that a point of the plot is in the given state at the initial instant. Thus if, for example, π_0 = (0.5, 0.25, 0.25), this means that half of the points of the plot are initially in state h, a quarter in state a and a quarter in state f. But we can also interpret this as saying that any point has a 50% chance of being in state h, 25% of being in state a and 25% of being in state f. This is why the vector of proportions of the studied population found in each of the states,

![initial distribution vector](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba34.png)

is called the initial probability law, or initial distribution. When one chooses to model with a Markov chain, the objective is often to determine the evolution of the distribution of states over time. For example, if the plot considered above is one-third covered by forest at the initial moment, will this proportion grow, tend towards 100%, tend on the contrary towards zero, or approach some limit value, a kind of ecological balance?

We will see that if we know the initial distribution we can compute the distribution at time t = 1, then at time t = 2, and so on. Let us do the computation for t = 1:

![distribution at time 1](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba35.png)

We deduce that π_1(h) is the dot product of the vector π_0 with the first column of the matrix P. Similarly, π_1(a) is the dot product of π_0 with the second column of P, and π_1(f) its dot product with the third column. In summary: π_1 = π_0 P.

![distribution recursion](https://complex-systems-ai.com/wp-content/uploads/2018/09/proba36.png)
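The relation π_1 = π_0 P iterates to give the distribution at any later time. A minimal sketch; the matrix values are illustrative, since the text's matrix lives in a figure, except for the stated value P(X_1 = f | X_0 = a) = 0.4:

```python
def step(pi, P):
    """One step of the distribution recursion: pi_{t+1} = pi_t P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# States in the order (h, a, f). Illustrative values, except P(f | a) = 0.4.
P = [[0.5, 0.4, 0.1],   # h
     [0.1, 0.5, 0.4],   # a
     [0.0, 0.1, 0.9]]   # f
pi = [0.5, 0.25, 0.25]  # initial distribution pi_0

for t in range(100):
    pi = step(pi, P)

# The distribution stays stochastic at every step and, for this matrix,
# settles towards a limit: the "ecological balance" of the text.
assert abs(sum(pi) - 1.0) < 1e-9
```

For this particular matrix the limit can be verified by hand: solving πP = π gives π = (1/27, 5/27, 21/27), so the forest proportion stabilises rather than tending to 0 or 100%.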