Applied Mathematics

p-ISSN: 2163-1409    e-ISSN: 2163-1425

2016; 6(2): 36-39

doi:10.5923/j.am.20160602.03

 

Applied Problems of Markov Processes

Roza Shakenova, Alua Shakenova

Kazakh National Research Technical University named after K. I. Satpaev, Almaty, Kazakhstan

Correspondence to: Roza Shakenova, Kazakh National Research Technical University named after K. I. Satpaev, Almaty, Kazakhstan.


Copyright © 2016 Scientific & Academic Publishing. All Rights Reserved.

This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/

Abstract

In this paper, the authors propose that a human being's state of health can be described as a function of three main factors: economics, ecology, and government policy. Using Markov processes, we obtain three models of the state of health, from the worst to the best. We hope that our theoretical models can be applied in practice.

Keywords: Limiting probability, Probability of state, Markov Processes

Cite this paper: Roza Shakenova, Alua Shakenova, Applied Problems of Markov Processes, Applied Mathematics, Vol. 6 No. 2, 2016, pp. 36-39. doi: 10.5923/j.am.20160602.03.

1. Introduction

Markov chains can be used to describe various economic and ecological problems. The main problem is to find the final (limiting) probabilities of the different states of a certain system. For a system with discrete states we may use a graph: the vertices of the graph correspond to the states of the system (see Picture #1), and the arrows between the vertices show the possible transfers of the system from one state to another. The final states of a human being's health are very important in medicine. Therefore, this paper considers the problem of determining the final probabilities of a human being's state of health. Let us first recall the basic definitions.
The distribution law of a random variable is any relation between the possible values of the random variable and their probabilities. The table below shows the distribution law of a discrete random variable:

    Values:        $x_1$   $x_2$   $\ldots$   $x_n$
    Probabilities: $p_1$   $p_2$   $\ldots$   $p_n$

where $p_1 + p_2 + \ldots + p_n = 1$.
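For example (an illustration, not from the paper): for one roll of a fair die the possible values are $x_i = 1, 2, \ldots, 6$, each with probability $p_i = 1/6$, so that $\sum_i p_i = 1$.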
The set of different states of a physical system with discrete states, in which the random process occurs, is finite or countable:

$$s_1, s_2, \ldots, s_n. \qquad (1)$$
A random process is a sequence of events of type (1). We will consider this sequence («chain») of events, confining ourselves to a finite number of states. The system can transfer from one state to another either directly or through other states [2], [3]. The states are usually described visually by a graph of states, whose vertices correspond to the states. If, for every moment of time, the probability of each future state of a system with discrete states depends only on its state in the present and not on its behavior in the past, then the random process is a Markov process. Let us consider the conditional probability that the system S transfers on the k-th step to the state $s_j$, given that it was in the state $s_i$ on the (k-1)-th step. Denote this probability by:
$$p_{ij}(k) = P\{S(k) = s_j \mid S(k-1) = s_i\}. \qquad (2)$$
The probability $p_{ii}(k)$ is the probability that the system remains in the state $s_i$ on the k-th step. The probabilities $p_{ij}(k)$ are the transition probabilities of the Markov chain on the k-th step. The transition probabilities can be written in the form of a square table (matrix) of size $n \times n$. This is a stochastic matrix: the sum of all probabilities in each row equals 1, because on every step the system must be in exactly one of its $n$ mutually exclusive states.
$$P = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{pmatrix}, \qquad \sum_{j=1}^{n} p_{ij} = 1. \qquad (3)$$
In order to find the unconditional probabilities $p_j(k)$, we also need to know the initial probability distribution, i.e., the probabilities $p_1(0), p_2(0), \ldots, p_n(0)$ at the beginning of the process ($k = 0$), whose sum equals one.
A Markov chain is called homogeneous (uniform) if the transition probabilities in (3) do not depend on the step number: $p_{ij}(k) = p_{ij}$.
To simplify matters, we consider only homogeneous Markov chains in what follows.
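To make the definitions concrete, here is a minimal Python sketch (not part of the original paper) that checks the stochastic property (3) for a hypothetical homogeneous 3-state chain; the matrix entries are invented for illustration.

    import numpy as np

    # Hypothetical transition matrix of a homogeneous 3-state Markov chain;
    # entry P[i, j] is the probability of transfer from state i to state j.
    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])

    # Stochastic property (3): each row must sum to one.
    assert np.allclose(P.sum(axis=1), 1.0)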

2. The Formula for Total Probability

Let the event $A$ occur only together with one of the events $H_1, H_2, \ldots, H_n$, which form a full group of pairwise mutually exclusive events, i.e., $H_i H_j = \varnothing$ for $i \ne j$ and $H_1 + H_2 + \ldots + H_n = \Omega$. Then the probability of $A$ can be calculated using the formula of total probability:
$$P(A) = \sum_{i=1}^{n} P(H_i)\, P(A \mid H_i). \qquad (4)$$
Here $H_1, \ldots, H_n$ are called hypotheses, and the values $P(H_i)$ are the probabilities of the hypotheses.
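For example (an illustration, not from the paper): with two hypotheses, $P(H_1) = 0.3$, $P(H_2) = 0.7$, $P(A \mid H_1) = 0.5$ and $P(A \mid H_2) = 0.2$, formula (4) gives $P(A) = 0.3 \cdot 0.5 + 0.7 \cdot 0.2 = 0.29$.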
Make the hypothesis $H_i$ that the system was in the state $s_i$ at the initial time ($k = 0$). The probability of this hypothesis is known and equals $p_i(0)$. Assuming that this hypothesis takes place, the conditional probability of the system being in the state $s_j$ on the first step equals the transition probability $p_{ij}$.
Applying the formula of total probability, we obtain the following:
$$p_j(1) = \sum_{i=1}^{n} p_i(0)\, p_{ij}, \qquad j = 1, 2, \ldots, n. \qquad (5)$$
Now find the distribution of probabilities on the second step, which depends on the distribution of probabilities on the first step and on the matrix of transition probabilities of the Markov chain. Again make the hypothesis $H_i$ that the system was in the state $s_i$ on the first step. The probability of this hypothesis is known and equals $p_i(1)$. Given this hypothesis, the conditional probability of the system being in the state $s_j$ on the second step equals $p_{ij}$.
Using the formula of total probability, we obtain:

$$p_j(2) = \sum_{i=1}^{n} p_i(1)\, p_{ij}, \qquad j = 1, 2, \ldots, n.$$
Applying this method several times, we obtain the recurrent formula:
$$p_j(k) = \sum_{i=1}^{n} p_i(k-1)\, p_{ij}, \qquad k = 1, 2, \ldots \qquad (6)$$
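A minimal Python sketch of the recurrence (6), reusing the hypothetical matrix above; the initial distribution $p(0)$ is likewise assumed for illustration.

    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])

    # Assumed initial distribution p(0); its components sum to one.
    p = np.array([1.0, 0.0, 0.0])

    # Formula (6): p_j(k) = sum_i p_i(k-1) * p_ij, i.e. a row vector times P.
    for _ in range(20):
        p = p @ P

    print(p)  # the distribution stabilizes as the step number grows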
Under some conditions, as the step number of a Markov chain increases, a stationary mode sets in: the system continues to wander over its states, but the probabilities of these states no longer depend on the step number. These probabilities are called the final (limiting) probabilities of the Markov chain. The equations for such probabilities can be written using a mnemonic rule: in the stationary mode, the total probability flux of the system remains constant, i.e., the flow into the state $s_i$ equals the flow out of the state $s_i$:
$$\sum_{j \ne i} p_j\, p_{ji} = p_i \sum_{j \ne i} p_{ij}, \qquad i = 1, 2, \ldots, n. \qquad (7)$$
This is the balance condition for the state $s_i$. To these $n$ equations we add the normalization condition $p_1 + p_2 + \ldots + p_n = 1$.
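Since the balance equations (7) plus normalization form a linear system, the limiting probabilities can be found numerically. A sketch under the same hypothetical matrix follows; the stationarity condition $p = pP$ used in the code is equivalent to (7), because the self-transition terms cancel on both sides.

    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
    n = P.shape[0]

    # Rewrite p = p P as (P^T - I) p = 0 and replace one (redundant)
    # equation with the normalization condition p_1 + ... + p_n = 1.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0

    p_limit = np.linalg.solve(A, b)
    print(p_limit)  # limiting probabilities; they sum to one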

3. Markov Chains. The State of Health

Let the system S be a human being from a certain ecological and economic environment. It can be in one of several states of health, from the worst to the best, shown in the graph of states (Picture #1).
The problem: to construct the equations and find the final probabilities of the human being's state of health.
Solution: Consider the first state on the graph. Two arrows are directed into this state; consequently, there are two terms on the inflow side of (7) for this state. One arrow is directed out of this state; consequently, there is only one term on the outflow side of (7). Hence, using the balance condition (7), we obtain the first equation:
(8)
Similarly, we write three more balance equations for the remaining states. The fifth equation is the normalization condition:

$$p_1 + p_2 + p_3 + p_4 = 1.$$

We rewrite the system of equations and solve it by substitution: from equation 2) we express one of the unknown probabilities in terms of another; we proceed likewise from equations 4), 3), and 1). Substituting the corresponding values of the transition probabilities and calculating, we obtain the final probabilities, and by equality 5) the normalization condition is satisfied. Note that the probabilities of remaining in the same state were not needed: in the balance equations (7) the self-transition terms cancel on both sides.
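Because the concrete transition probabilities and the graph (Picture #1) are not reproduced above, the following Python sketch solves the balance and normalization conditions for a hypothetical four-state chain of health states; the states and numbers are assumptions, not the paper's data.

    import numpy as np

    # Hypothetical 4-state health chain (e.g. from the best state to the
    # worst); each row of the transition matrix sums to one.
    P = np.array([[0.90, 0.10, 0.00, 0.00],
                  [0.20, 0.60, 0.20, 0.00],
                  [0.00, 0.30, 0.50, 0.20],
                  [0.00, 0.00, 0.40, 0.60]])
    n = P.shape[0]

    # Balance equations (7) with one row replaced by the normalization
    # condition p_1 + p_2 + p_3 + p_4 = 1.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0

    p_limit = np.linalg.solve(A, b)
    print(p_limit)  # final probabilities of the states of health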

4. Conclusions

In summary, we have found the final probabilities of a human being's state of health, considering the system S, a human being from a certain ecological and economic environment. By observing the initial stages of a disease, it is possible to make a prognosis about the final probabilities of a sick person's state of health. Doctors, together with researchers, could develop such a project for the recovery of human health.

Appendix

Picture #1. Graph of states of the system.

References

[1]  R.A. Howard, Dynamic Programming and Markov Processes, Soviet Radio, Moscow, 1964.
[2]  R. Shakenova, "The Limiting Probabilities in the Process of Servicing," Applied Mathematics, vol. 2, no. 6, pp. 184-186, 2012. DOI: 10.5923/j.am.20120206.01.
[3]  R.K. Shakenova, "Markov processes of making decisions with an overestimation and some economic problems," Materials of the international scientific-practical conference "Problems of Applied Physics and Mathematics," pp. 9-14, 2003.
[4]  R. Bellman, Introduction to Matrix Analysis, Nauka, Moscow, 1969.
[5]  A.N. Kolmogorov and S.V. Fomin, Elements of the Theory of Functions and Functional Analysis, Nauka, Moscow, 1972.
[6]  C.R. McConnell and S.L. Brue, Economics: Principles, Problems, and Policies, Tallinn, 1993.
[7]  H. Mine and S. Osaki, Markov Decision Processes, Nauka, Moscow, 1977.
[8]  E.S. Ventsel and L.A. Ovcharov, The Theory of Random Processes and Its Engineering Applications, High School, Moscow, 2000.
[9]  V.P. Chistyakov, A Course in Probability Theory, Nauka, Moscow, 1982.
[10]  Kai Lai Chung, Uniform Markov Chains, Mir, Moscow, 1964.