Time Markov Chain
Markov Chain Monte Carlo
A.M. Johansen, in International Encyclopedia of Education (Third Edition), 2010
Markov Chains
A discrete-time Markov chain is, roughly speaking, some collection of random variables with a temporal ordering which have the property that, conditional upon the present, the future does not depend upon the past. This concept, which can be viewed as a form of something known as the Markov property, can be formalized by saying that a collection of random variables X1, X2, … forms a Markov chain if, and only if, the joint pdf of the first n elements of the sequence may be decomposed in the following manner for any value of n:
p(x1, x2, …, xn) = p(x1) p(x2 | x1) p(x3 | x2) ⋯ p(xn | xn−1)
Although a class of continuous-time stochastic processes with a similar lack-of-memory property can be defined, these are rarely used in an MCMC context.
The basic idea behind MCMC is that, if it is possible to construct a Markov chain such that a sequence of draws from that chain has similar statistical properties (in some sense) to a collection of draws from a distribution of interest, then it is also possible to estimate expectations with respect to that distribution by using the standard Monte Carlo estimator but using the dependent collection of random variables obtained by simulating a Markov chain rather than an independent collection.
A few concepts are required to understand how a suitable chain can be constructed. The conditional probability densities p(x n |x n−1) are often termed transition kernels, as they can be thought of as the probability density associated with a movement from x n−1 to x n . If p(x n |x n−1) does not depend directly upon the value of n, then the associated Markov chain is termed time homogeneous (as its transitions have the same distribution at all times). A time homogeneous Markov chain with transition kernel k is said to have a probability density f as an invariant or stationary distribution if
f(y) = ∫ f(x) k(y|x) dx for all y
If a Markov chain satisfies a condition known as detailed balance with respect to a distribution, then it is reversible (in the sense that the statistics of the time-reversed process match those of the original process) and hence invariant with respect to that distribution. The detailed balance condition states, simply, that the probability of starting at x and moving to y is equal to the probability of starting at y and moving to x. Formally, given a distribution f and a kernel k, one requires that f(x)k(y|x) = f(y)k(x|y), and simple integration of both sides with respect to x proves invariance with respect to f under this condition.
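As a small numerical illustration (not part of the encyclopedia entry), the sketch below checks detailed balance and the resulting invariance for a hypothetical two-state kernel; the target f and kernel k are made-up values chosen so that the condition holds.

```python
# Hypothetical two-state example (states 0 and 1); the numbers are illustrative only.
f = [0.75, 0.25]                     # target distribution f(0), f(1)
k = [[0.9, 0.1],                     # transition kernel k(y|x): row x, column y
     [0.3, 0.7]]

# Detailed balance: f(x) k(y|x) == f(y) k(x|y) for every pair (x, y).
for x in range(2):
    for y in range(2):
        assert abs(f[x] * k[x][y] - f[y] * k[y][x]) < 1e-12

# Invariance follows by summing over x: sum_x f(x) k(y|x) == f(y).
for y in range(2):
    assert abs(sum(f[x] * k[x][y] for x in range(2)) - f[y]) < 1e-12

print("detailed balance and invariance hold for this kernel")
```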
The principle of most MCMC algorithms is that, if a Markov chain has an invariant distribution, f, and (in some suitable sense) forgets where it has been, then using its sample path to approximate integrals with respect to f is a reasonable thing to do. This can be formalized under technical conditions to provide an analog of the law of large numbers (often termed the ergodic theorem) and the central limit theorem. The first of these results tells us that we can expect the sample average to converge to the appropriate expectation with probability one as the number of samples becomes large enough; the second tells us that the estimator we obtain is asymptotically normal with a particular variance (which depends upon the covariance of the samples obtained, demonstrating that it is important that the Markov chain forgets where it has been reasonably fast). These conditions are not always easy to verify in practice, but they are important: it is easy to construct examples which violate these conditions and have entirely incorrect behavior.
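To make the construction concrete, here is a minimal random-walk Metropolis sketch (an illustration added here, not taken from the entry). It targets a standard normal density; the proposal scale, chain length, and seed are arbitrary choices, and the ergodic average of the squared samples estimates E[X^2] = 1.

```python
import math
import random

random.seed(0)

def log_f(x):
    # Log-density of the target (standard normal, up to an additive constant).
    return -0.5 * x * x

x = 0.0                                   # starting point of the chain
samples = []
for _ in range(50_000):
    y = x + random.gauss(0.0, 1.0)        # symmetric random-walk proposal
    # Metropolis acceptance probability min(1, f(y)/f(x)); this kernel
    # satisfies detailed balance with respect to the target f.
    accept_prob = math.exp(min(0.0, log_f(y) - log_f(x)))
    if random.random() < accept_prob:
        x = y
    samples.append(x)

# Ergodic average: estimates E[X^2] = 1 under the standard normal target.
estimate = sum(s * s for s in samples) / len(samples)
print(f"estimate of E[X^2]: {estimate:.3f}")
```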
In order to use this strategy to estimate expectations of interest, it is necessary to construct Markov chains with the correct invariant distribution. There are two common approaches to this problem.
URL: https://www.sciencedirect.com/science/article/pii/B9780080448947013476
Continuous-Time Markov Chains
Sheldon M. Ross, in Introduction to Probability Models (Tenth Edition), 2010
6.5 Limiting Probabilities
In analogy with a basic result in discrete-time Markov chains, the probability that a continuous-time Markov chain will be in state j at time t often converges to a limiting value that is independent of the initial state. That is, if we call this value Pj, then
Pj ≡ limt→∞ Pij(t)
where we are assuming that the limit exists and is independent of the initial state i.
To derive a set of equations for the Pj, consider first the set of forward equations
P′ij(t) = ∑k≠j qkj Pik(t) − vj Pij(t)   (6.17)
Now, if we let t approach ∞, then, assuming that we can interchange limit and summation, we obtain
limt→∞ P′ij(t) = limt→∞ [∑k≠j qkj Pik(t) − vj Pij(t)] = ∑k≠j qkj Pk − vj Pj
However, as Pij(t) is a bounded function (being a probability it is always between 0 and 1), it follows that if P′ij(t) converges, then it must converge to 0 (why is this?). Hence, we must have
0 = ∑k≠j qkj Pk − vj Pj
or
vj Pj = ∑k≠j qkj Pk,  all states j   (6.18)
The preceding set of equations, along with the equation
∑j Pj = 1   (6.19)
can be used to solve for the limiting probabilities.
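As a numerical sketch of how Eqs. (6.18) and (6.19) are used (the three-state rate matrix below is made up for illustration and does not come from the text), the limiting probabilities can be obtained by solving the balance equations together with the normalization condition:

```python
import numpy as np

# Hypothetical 3-state generator (infinitesimal) matrix Q: off-diagonal entries
# are the transition rates q_ij, diagonal entries are -v_i, so rows sum to zero.
# The numbers are illustrative placeholders only.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# The limiting probabilities satisfy the balance equations (6.18), i.e. P Q = 0,
# together with the normalization sum_j P_j = 1 from (6.19).
A = np.vstack([Q.T, np.ones(3)])          # balance rows plus the normalization row
b = np.concatenate([np.zeros(3), [1.0]])
P, *_ = np.linalg.lstsq(A, b, rcond=None)
print("limiting probabilities:", P)       # three nonnegative values summing to 1
```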
URL: https://www.sciencedirect.com/science/article/pii/B9780123756862000054
Continuous Time Markov Chains
Mark A. Pinsky, Samuel Karlin, in An Introduction to Stochastic Modeling (Fourth Edition), 2011
Problems
- 6.6.1
Let Yn, n = 0, 1, …, be a discrete time Markov chain with transition probabilities P = ||Pij||, and let {N(t); t ≥ 0} be an independent Poisson process of rate λ. Argue that the compound process X(t) = YN(t), t ≥ 0, is a continuous-time Markov chain.
- 6.6.2
A certain type of component has two states: 0 = OFF and 1 = OPERATING. In state 0, the process remains there a random length of time, which is exponentially distributed with parameter α, and then moves to state 1. The time in state 1 is exponentially distributed with parameter β, after which the process returns to state 0.
The system has three of these components, A, B, and C, with distinct parameters:

| Component | Operating Failure Rate | Repair Rate |
|---|---|---|
| A | βA | αA |
| B | βB | αB |
| C | βC | αC |

In order for the system to operate, component A must be operating, and at least one of components B and C must be operating. In the long run, what fraction of time does the system operate? Assume that the component stochastic processes are independent of one another. (A numerical sketch for this problem follows the problem list below.)
- 6.6.3
Let X1(t), X2(t), …, XN(t) be independent two-state Markov chains having the same infinitesimal matrix
Determine the infinitesimal matrix for the Markov chain Z(t) = X1(t) + ⋯ + XN(t).
- 6.6.4
A system consists of two units, both of which may operate simultaneously, and a single repair facility. The probability that an operating system will fail in a short time interval of length Δt is μΔt + o(Δt). Repair times are exponentially distributed, but the parameter depends on whether the failure was regular or severe. The fraction of regular failures is p, and the corresponding exponential parameter is α. The fraction of severe failures is q = 1 − p, and the exponential parameter is β < α.
Model the system as a continuous time Markov chain by taking as states the pairs (x, y), where x = 0, 1, 2 is the number of units operating and y = 0, 1, 2 is the number of units undergoing repair for a severe failure. The possible states are (2,0), (1, 0), (1, 1), (0, 0), (0, 1), and (0, 2). Specify the infinitesimal matrix A. Assume that the units enter the repair shop on a first come, first served basis.
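For Problem 6.6.2, the following sketch illustrates the usual long-run argument: each component operates a fraction α/(α + β) of the time, and by independence the system availability is the product of component A's availability and the probability that at least one of B and C is operating. The numerical rates are arbitrary placeholders, not values from the book.

```python
def operating_fraction(alpha, beta):
    # Long-run fraction of time a single component spends OPERATING:
    # mean OFF (repair) time is 1/alpha, mean OPERATING time is 1/beta.
    return (1.0 / beta) / (1.0 / alpha + 1.0 / beta)   # = alpha / (alpha + beta)

# Made-up repair (alpha) and failure (beta) rates for components A, B, C.
rates = {"A": (2.0, 1.0), "B": (1.0, 1.0), "C": (3.0, 2.0)}
p = {name: operating_fraction(a, b) for name, (a, b) in rates.items()}

# The system operates iff A operates AND at least one of B, C operates
# (the component processes are assumed independent).
availability = p["A"] * (1.0 - (1.0 - p["B"]) * (1.0 - p["C"]))
print({k: round(v, 4) for k, v in p.items()}, "system:", round(availability, 4))
```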
URL: https://www.sciencedirect.com/science/article/pii/B978012381416600006X
Continuous-Time Markov Chains
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
5.4 First Passage Time
Consider a CTMC {X(t), t ≥ 0} with state space {1, 2, …}. The first passage time Tk into state k is defined as follows:
Tk = min{t ≥ 0 : X(t) = k}
Let mik be defined as follows:
mik = E[Tk | X(0) = i]
That is, mik is the mean first passage time to state k given that the process started in state i. It can be shown that if vi is the total rate of transition out of state i and vij is the rate of transition from state i to state j, then in a manner similar to the discrete-time case,
vi mik = 1 + ∑j≠k vij mjk,  i ≠ k
The intuitive meaning of this equation can be understood from recognizing the fact that 1/vi is the mean holding time (or mean waiting time) in state i. Thus, given that the process started in state i, it would spend a mean time of 1/vi in that state and then move into state j with probability pij = vij/vi. Then from state j it takes a mean time of mjk to reach state k. Thus, the equation can be rearranged as follows:
mik = 1/vi + ∑j≠k pij mjk,  i ≠ k
This form of the equation is similar to that for the mean first passage time for the discrete-time Markov chain (DTMC) that is discussed in Chapter 4.
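A minimal numerical sketch of this computation (the four-state rate matrix below is made up and is not the chain of Figure 5.5): collect the equations mik = 1/vi + ∑j≠k pij mjk for i ≠ k into a linear system and solve it.

```python
import numpy as np

# Hypothetical 4-state CTMC given by its transition rates v[i][j] from state i
# to state j (0-based indices); these numbers are illustrative only and are
# NOT the rates of Figure 5.5.
v = np.array([[0.0, 2.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 1.0],
              [0.0, 2.0, 0.0, 2.0],
              [1.0, 0.0, 1.0, 0.0]])

def mean_first_passage(v, k):
    """Solve m_ik = 1/v_i + sum_{j != k} (v_ij / v_i) * m_jk for all i != k."""
    n = v.shape[0]
    v_i = v.sum(axis=1)                      # total rate of transition out of each state
    others = [i for i in range(n) if i != k]
    A = np.eye(len(others))
    b = np.zeros(len(others))
    for row, i in enumerate(others):
        b[row] = 1.0 / v_i[i]
        for col, j in enumerate(others):
            A[row, col] -= v[i, j] / v_i[i]  # subtract p_ij = v_ij / v_i
    return dict(zip(others, np.linalg.solve(A, b)))

# Mean first passage times into state index 3 (the "state 4" of a 1-based labeling).
print(mean_first_passage(v, k=3))
```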
Example 5.4
Consider the Markov chain whose state-transition-rate diagram is given in Figure 5.5. Find m14, the mean first passage time from state 1 to state 4.
Solution
Because the transition rates are specified in the figure, we first obtain the rates vi, which are as follows:
Thus,
The solution to the system of equations is
Thus, it takes an average of 1.3333 units of time to go from state 1 to state 4.
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000050
Formal verification of robotic cell injection systems
Adnan Rashid, ... Iram Tariq Bhatti, in Control Systems Design of Bio-Robotics and Bio-mechatronics with Advanced Applications, 2020
6 Discussions
The probabilistic model checking-based formalization of robotic cell injection systems using PRISM, presented earlier, is based on a DTMC and thus involves discretizing the continuous dynamics (the modeling differential equations) of these systems. Moreover, by assigning probabilities to the transitions of the state-based PRISM model, it incorporates the various disturbances and measurement noises associated with the underlying system. However, the proposed framework only covers the development of the formal model; it does not yet include verification of properties against this model, even though such verification could be performed automatically. Moreover, it only supports the formalization of the 2-DOF cell injection system and cannot be used to reason about 3-DOF and 4-DOF robotic injection systems.
In comparison to the model checking-based analysis, the higher-order-logic theorem proving-based approach allows us to model the dynamics of the cell injection systems involving differentials and derivatives (Eqs. 2, 9, 11) in their true form, whereas in the model checking-based analysis (Sardar and Hasan, 2017) they are discretized and modeled using a state-transition system. Moreover, all the verified theorems are universally quantified and can thus be specialized to the required values based on the requirement of the analysis of the cell injection systems. However, due to the undecidable nature of higher-order logic, the verification results involve manual intervention and human guidance. Moreover, it only provides the formalization of the 2-DOF cell injection system and cannot be used to reason about 3-DOF and 4-DOF robotic injection systems. Table 3 presents a comparison of various analysis techniques, summarizing their strengths and weaknesses, for analyzing the robotic cell injection systems. This comparison is performed based on various parameters such as expressiveness, accuracy, and automation. For example, in model checking, we cannot truly model the differential equations, and their discretization results in an abstracted model, which makes it less expressive. Moreover, higher-order-logic theorem proving enables the verification in an interactive manner due to the undecidable nature of the underlying logic.
Table 3. Comparison of analysis techniques for robotic cell injection systems.

| | Paper-and-pencil proof | Simulation | Computer algebra system | Model checking | Theorem proving |
|---|---|---|---|---|---|
| Expressiveness | ✓ | ✓ | ✓ | | ✓ |
| Accuracy | ✓ (?) | | | ✓ | ✓ |
| Automation | | ✓ | ✓ | ✓ | |
URL: https://www.sciencedirect.com/science/article/pii/B9780128174630000058
Reaction Kinetics and the Development of Catalytic Processes
Morteza Sohrabi, Jamshidi Amir Masoud, in Studies in Surface Science and Catalysis, 1999
3.1. Modelling the residence time distribution in the reactor
A model for the RTD in CISR was developed based on models first proposed by Van de Vusse [9].
As the collision of the droplets in the impingement zone is random, a suitable mathematical technique to handle such a process could be Markov chain models [10,11].
For a discrete-time Markov chain, the probability of an event at time t+1 (t = 0, 1, 2, …) given only the outcome at time t is equal to the probability of the event at time t+1 given the entire history of the system. In other words, the probability of the event at t+1 does not depend upon the state history prior to time t. Thus, the value of the process at the given time t determines the conditional probabilities for future values of the process. These values are called the states of the process, and the conditional probabilities are thought of as transition probabilities between the states i and j, pij. These values may be displayed in a matrix (P = [pij]) called the one-step transition matrix.
The matrix P has N rows and N columns, where N is the number of possible states of the system.
The rows of matrix P consist of the probabilities of all possible transitions from a given state and so sum to 1.
∑j pij = 1   (1)
This matrix completely describes the Markov process. Further details of Markov models may be found elsewhere [8, 10, 11].
By considering the patterns of liquid flow within the vessel, the reaction compartment was divided into eight regions with equal volumes (Fig. 2). Each region represents a state in the Markov process. A recycle stream R was also assumed due to counter current flows.
A typical RTD curve calculated for the reactor is shown in Fig. 3.
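The sketch below illustrates the idea on a deliberately small made-up compartment model (it is not the eight-region matrix with recycle identified in the paper): with an absorbing outlet state, the RTD is the distribution of the step at which a fluid element entering the feed compartment first reaches the outlet.

```python
import numpy as np

# Illustrative one-step transition matrix for a small compartment model:
# states 0-2 are internal mixing regions and state 3 is the reactor outlet
# (absorbing).  The entries are placeholders, not the matrix from the paper.
P = np.array([[0.6, 0.3, 0.0, 0.1],
              [0.1, 0.5, 0.3, 0.1],
              [0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

assert np.allclose(P.sum(axis=1), 1.0)        # each row of P sums to 1, Eq. (1)

# RTD: probability that a fluid element entering at state 0 first reaches
# the outlet at step n (a discrete exit-age distribution).
state = np.array([1.0, 0.0, 0.0, 0.0])        # feed enters compartment 0
absorbed_so_far = 0.0
rtd = []
for n in range(1, 41):
    state = state @ P
    rtd.append(state[3] - absorbed_so_far)    # mass newly absorbed at step n
    absorbed_so_far = state[3]

print([round(x, 4) for x in rtd[:10]])
```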
URL: https://www.sciencedirect.com/science/article/pii/S0167299199801773
Controlled Markov Processes
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
13.2.3 Markov Reward Processes
The Markov reward process (MRP) is an extension of the basic Markov process that associates each state of a Markov process with a reward. Specifically, let {Xn, n = 0, 1, …} be a discrete-time Markov chain with a finite state space and transition probability matrix P. Assume that when the process enters state i, it receives a reward rij when it makes a transition to state j, where rij can be positive or negative. Let the reward matrix R be defined as follows:
R = [rij]
That is, R is the matrix of rewards. We define such a process to be a discrete-time MRP.
Let vi(n) denote the expected total earnings in the next n transitions, given that the process is currently in state i. Assume that the process makes a transition to state j with probability pij; it receives an immediate reward of rij, where j ranges over the states of the chain. To compute vi(n), condition on the first transition out of state i. Then we have
vi(n) = ∑j pij rij + ∑j pij vj(n−1)
The interpretation of the above equation is as follows. The first sum denotes the expected immediate reward that accrues from making a transition from state i to any state. When this transition takes place, the number of remaining transitions out of the n transitions is n − 1. Thus, the second sum represents the expected total reward in these n − 1 transitions given that the process is now in state j, weighted by the probability pij, over all possible j to which a transition from state i can be made.
If we define the parameter qi by
qi = ∑j pij rij
then qi is basically the expected reward in the next transition out of state i. Thus, we obtain
vi(n) = qi + ∑j pij vj(n−1)
If we define the column vector v(n) whose ith component is vi(n) and the column vector q whose ith component is qi, then we can rewrite that equation in the following matrix form:
v(n) = q + P v(n−1)
which is equivalent to the following:
Finally, if we denote the z-transform of v(n) by V(z), then taking the z-transform on both sides of the above equation, we obtain
V(z) = [I − zP]⁻¹ {v(0) + (z/(1 − z)) q}
where I is the identity matrix. From the nature of the problem, we can determine v(0) and thus obtain the solution. Note that vi(0) is the terminal cost incurred when the process ends up at state i.
Recall that in Chapter 4, it was stated that the inverse transform can be expressed in the form
where the constant term C has the characteristic that all the n rows are identical, and the elements of the rows are the limiting-state probabilities of the system whose transition probability matrix is P. Thus, if is the sequence whose z-transform is , we have
From this, we obtain the solution
If we define , we obtain the solution
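The recursion vi(n) = qi + ∑j pij vj(n−1) can also be iterated directly. Below is a minimal numerical sketch with a made-up two-state chain and reward matrix (the numbers are illustrative placeholders, not an example from the book); for large n the totals grow roughly linearly, with slope equal to the long-run gain.

```python
import numpy as np

# Hypothetical two-state Markov reward process (values are illustrative only).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])                 # transition probability matrix
R = np.array([[ 6.0,  3.0],
              [-1.0,  2.0]])               # reward r_ij earned on an i -> j transition

q = (P * R).sum(axis=1)                    # q_i = sum_j p_ij r_ij, expected one-step reward
v = np.zeros(2)                            # v(0): terminal values, taken as zero here

# v(n) = q + P v(n-1): expected total earnings over the next n transitions.
for n in range(1, 21):
    v = q + P @ v

print("expected earnings over 20 transitions:", v)
```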
URL: https://www.sciencedirect.com/science/article/pii/B978012407795900013X
Continuous-Time Markov Chains
Sheldon M. Ross, in Introduction to Probability Models (Twelfth Edition), 2019
6.2 Continuous-Time Markov Chains
Suppose we have a continuous-time stochastic process {X(t), t ≥ 0} taking on values in the set of nonnegative integers. In analogy with the definition of a discrete-time Markov chain, given in Chapter 4, we say that the process {X(t), t ≥ 0} is a continuous-time Markov chain if for all s, t ≥ 0 and nonnegative integers i, j, x(u), 0 ≤ u < s,
P{X(t + s) = j | X(s) = i, X(u) = x(u), 0 ≤ u < s} = P{X(t + s) = j | X(s) = i}
In other words, a continuous-time Markov chain is a stochastic process having the Markovian property that the conditional distribution of the future X(t + s), given the present X(s) and the past X(u), 0 ≤ u < s, depends only on the present and is independent of the past. If, in addition,
P{X(t + s) = j | X(s) = i}
is independent of s, then the continuous-time Markov chain is said to have stationary or homogeneous transition probabilities.
All Markov chains considered in this text will be assumed to have stationary transition probabilities.
Suppose that a continuous-time Markov chain enters state i at some time, say, time 0, and suppose that the process does not leave state i (that is, a transition does not occur) during the next ten minutes. What is the probability that the process will not leave state i during the following five minutes? Since the process is in state i at time 10 it follows, by the Markovian property, that the probability that it remains in that state during the interval [10, 15] is just the (unconditional) probability that it stays in state i for at least five minutes. That is, if we let Ti denote the amount of time that the process stays in state i before making a transition into a different state, then
P{Ti > 15 | Ti > 10} = P{Ti > 5}
or, in general, by the same reasoning,
P{Ti > s + t | Ti > s} = P{Ti > t}
for all s, t ≥ 0. Hence, the random variable Ti is memoryless and must thus (see Section 5.2.2) be exponentially distributed.
In fact, the preceding gives us another way of defining a continuous-time Markov chain. Namely, it is a stochastic process having the properties that each time it enters state i
- (i)
the amount of time it spends in that state before making a transition into a different state is exponentially distributed with mean, say, 1/vi, and
- (ii)
when the process leaves state i, it next enters state j with some probability, say, Pij. Of course, the Pij must satisfy
Pii = 0, all i
∑j Pij = 1, all i
In other words, a continuous-time Markov chain is a stochastic process that moves from state to state in accordance with a (discrete-time) Markov chain, but is such that the amount of time it spends in each state, before proceeding to the next state, is exponentially distributed. In addition, the amount of time the process spends in state i, and the next state visited, must be independent random variables. For if the next state visited were dependent on Ti, then information as to how long the process has already been in state i would be relevant to the prediction of the next state, and this contradicts the Markovian assumption.
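This two-ingredient description (exponential holding times with rates vi, plus embedded jump probabilities Pij) translates directly into a simulation. The sketch below uses a hypothetical three-state chain with made-up rates, not an example from the text, and estimates the long-run fraction of time spent in each state.

```python
import random

random.seed(1)

# Hypothetical three-state chain: holding-time rates v_i and embedded jump
# probabilities P_ij (with P_ii = 0 and each row summing to 1); values are
# illustrative placeholders only.
v = [1.0, 2.0, 0.5]
P = [[0.0, 0.6, 0.4],
     [0.5, 0.0, 0.5],
     [1.0, 0.0, 0.0]]

def simulate(t_end, state=0):
    """Return the total time spent in each state up to t_end."""
    t, occupancy = 0.0, [0.0, 0.0, 0.0]
    while t < t_end:
        hold = random.expovariate(v[state])      # exponential holding time in `state`
        occupancy[state] += min(hold, t_end - t)
        t += hold
        state = random.choices(range(3), weights=P[state])[0]   # next state visited
    return occupancy

occ = simulate(100_000.0)
total = sum(occ)
print([round(o / total, 3) for o in occ])        # long-run fractions of time in each state
```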
Example 6.1 A Shoe Shine Shop
Consider a shoe shine establishment consisting of two chairs (chair 1 and chair 2). A customer upon arrival goes initially to chair 1 where his shoes are cleaned and polish is applied. After this is done the customer moves on to chair 2 where the polish is buffed. The service times at the two chairs are assumed to be independent random variables that are exponentially distributed with respective rates μ1 and μ2. Suppose that potential customers arrive in accordance with a Poisson process having rate λ, and that a potential customer will enter the system only if both chairs are empty.
The preceding model can be analyzed as a continuous-time Markov chain, but first we must decide upon an appropriate state space. Since a potential customer will enter the system only if there are no other customers present, it follows that there will always either be 0 or 1 customers in the system. However, if there is 1 customer in the system, then we would also need to know which chair he was presently in. Hence, an appropriate state space might consist of the three states 0, 1, and 2, where the states have the following interpretation:
state 0: the system is empty,
state 1: a customer is in chair 1,
state 2: a customer is in chair 2.
We leave it as an exercise for you to verify that
v0 = λ, v1 = μ1, v2 = μ2
P01 = P12 = P20 = 1
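As a hedged numerical check of this example (the rates λ, μ1, μ2 below are arbitrary placeholders), note that the chain cycles 0 → 1 → 2 → 0, so the long-run fraction of time spent in each state is proportional to the mean holding time there:

```python
# Arbitrary illustrative rates for the shoe shine shop (not from the book).
lam, mu1, mu2 = 1.0, 4.0, 3.0

# The chain visits 0 -> 1 -> 2 -> 0 in order, so the long-run fraction of time
# in each state is proportional to the mean holding time there (1/rate).
holds = [1.0 / lam, 1.0 / mu1, 1.0 / mu2]
total = sum(holds)
fractions = [h / total for h in holds]
print("long-run fractions of time in states 0, 1, 2:",
      [round(f, 3) for f in fractions])
```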
URL: https://www.sciencedirect.com/science/article/pii/B9780128143469000111
Introduction to Markov Processes
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
3.2 Structure of Markov Processes
A jump process is a stochastic process that makes transitions between discrete states at times that can be fixed or random. In such a process, the system enters a state, spends an amount of time called the holding time (or sojourn time), and then jumps to another state where it spends another holding time, and so on. If the jump times are T1 < T2 < ⋯, then the sample path of the process is constant between Tk and Tk+1. If the jump times are discrete, the jump process is called a jump chain.
There are two types of jump processes: pure (or nonexplosive) and explosive. In an explosive jump process, the process makes infinitely many jumps within a finite time interval. In a pure jump process, there are finitely many jumps in any finite interval. Figure 3.2 illustrates a realization of a pure jump process.
If the holding times of a continuous-time jump process are exponentially distributed, the process is called a Markov jump process. A Markov jump process is a continuous-time Markov chain if the holding time depends only on the current state. If the holding times of a discrete-time jump process are geometrically distributed, the process is called a Markov jump chain. However, not all discrete-time Markov chains are Markov jump chains. For many discrete-time Markov chains, transitions occur in equally spaced intervals, such as every day, every week, and every year. Such Markov chains are not Markov jump chains.
Unfortunately, not every physical system can be modeled by a jump process. Such systems can be modeled by processes that move continuously between all possible states that lie in some interval of the real line. Thus, such processes have continuous space and continuous time. One example of a continuous-time continuous-space process is the Brownian motion, which was first described in 1828 by the botanist Robert Brown, who observed that pollen particles suspended in a fluid moved in an irregular random manner. In his mathematical theory of speculation, Bachelier (1900) used the Brownian motion to model the movement of stock prices. Arguing that the Brownian motion is caused by the bombardment of particles by the molecules of the fluid, Einstein (1905) obtained the equation for Brownian motion. Finally, Wiener (1923) established the mathematical foundation of the Brownian motion as a stochastic process. Consequently, the Brownian motion is also called the Wiener process and is discussed in great detail in Chapter 9. The Brownian motion has been successfully used to describe thermal noise in electric circuits, limiting behavior of queueing networks under heavy traffic, population dynamics in biological systems, and in modeling various economic processes.
A related process is the diffusion process. Diffusion is the process by which particles are transported from one part of a system to another as a result of random molecular motion. The direction of the motion of particles is from a region of higher concentration to a region of lower concentration of the particle. The laws of diffusion were first formulated by Fick, and Fick's first law of diffusion states that the diffusion flux (or amount of substance per unit area per unit time, or the rate of mass transfer per unit area) between two points of different concentrations in a fluid is proportional to the concentration gradient between these points. The constant of proportionality is called the diffusion coefficient and is measured in units of area per unit time. Fick's second law, which is a consequence of his first law and the principle of conservation of mass, states that the rate of change of the concentration of a solute diffusing in a solvent is equal to the negative of the divergence of the diffusion flux. In 1905 Einstein, and independently in 1906, Smoluchowski demonstrated theoretically that the phenomenon of diffusion is the result of Brownian motion.
There is a subtle difference between Brownian motion and diffusion process. Brownian motion is the random motion of molecules, and the direction of motion of these molecules is random. Diffusion is the movement of particles from areas of high concentration to areas of low concentration. Thus, while Brownian motion is completely random, diffusion is not exactly as random as Brownian motion. For example, diffusion does not occur in a homogeneous medium where there is no concentration gradient. Thus, Brownian motion may be considered a probabilistic model of diffusion in a homogeneous medium.
Consider a physical system with state x(t). The behavior of the system when an input w(t) is presented to it is governed by a differential equation of the following form that gives the rate of change of the state:
dx(t)/dt = a(x(t), t) + b(x(t), t) w(t)   (3.1)
where the functions a and b depend on the system properties. Equation (3.1) assumes that the system properties and the input are perfectly known and deterministic. However, when the input is a random function, the state function will be a stochastic process. Under this condition, it is a common practice to assume that the input is a white noise process. Also, instead of dealing with a differential equation, we deal with increments in the system state. Thus, the evolution of the state is given by the following stochastic differential equation:
dX(t) = a(X(t), t) dt + b(X(t), t) dW(t)   (3.2)
For a diffusion process, the function a is called the drift coefficient, the function b is called the diffusion coefficient, and W(t) is the Brownian motion. Thus, a stochastic differential equation can be regarded as a mathematical description of the motion of a particle in a moving fluid. The Markov property of the diffusion process is discussed in Chapter 10. The solution to the stochastic differential equation is obtained via the following stochastic integral equation:
X(t) = X(0) + ∫0^t a(X(u), u) du + ∫0^t b(X(u), u) dW(u)   (3.3)
Different types of diffusion processes are discussed in Chapter 10, and they differ in the way the drift and diffusion coefficients are defined.
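A minimal Euler-Maruyama sketch of Eq. (3.2) is shown below; the drift and diffusion coefficients (an Ornstein-Uhlenbeck-type choice), the step size, and the horizon are illustrative assumptions, not values from the text.

```python
import random

random.seed(2)

# Euler-Maruyama discretization of dX = a(X) dt + b(X) dW for an illustrative
# choice of coefficients; all numbers here are placeholders.
def a(x):
    return -0.5 * x          # drift coefficient: pulls the state back toward 0

def b(x):
    return 1.0               # constant diffusion coefficient

dt = 0.01
x = 0.0
path = [x]
for _ in range(1000):
    dW = random.gauss(0.0, dt ** 0.5)       # Brownian increment ~ N(0, dt)
    x = x + a(x) * dt + b(x) * dW           # one Euler-Maruyama step of Eq. (3.2)
    path.append(x)

print("final state after 1000 steps:", round(x, 3))
```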
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000037
Markov Renewal Processes
Oliver C. Ibe, in Markov Processes for Stochastic Modeling (Second Edition), 2013
6.7 Markov Regenerative Process
Markov regenerative processes (MRGPs) constitute a more general class of stochastic processes than traditional Markov processes. Markovian dependency, the first-order dependency, is the simplest and most important dependency in stochastic processes. The past history of a Markov chain is summarized in the current state and the behavior of the system thereafter only depends on the current state. The sojourn time of a homogeneous CTMC is exponentially distributed. However, nonexponentially distributed transitions are common in real-life systems. SMPs have generally distributed sojourn times but lack the ability to capture local behaviors during the intervals between successive regenerative points. MRGPs are discrete-state continuous-time stochastic processes with embedded regenerative time points at which the process enjoys the Markov property. MRGPs provide a natural generalization of SMPs with local behavior accounted for. Thus, the SMP, the discrete-time Markov chain, and the CTMC are special cases of the MRGP.
A stochastic process {Z(t), t ≥ 0} is called an MRGP if there exists a Markov renewal sequence {(Yn, Tn), n ≥ 0} of random variables such that all the conditional finite-dimensional distributions of {Z(Tn + t), t ≥ 0} given {Z(u), 0 ≤ u ≤ Tn; Yn = i} are the same as those of {Z(t), t ≥ 0} given Y0 = i. The above definition implies that
P[Z(Tn + t) = j | Z(u), 0 ≤ u ≤ Tn; Yn = i] = P[Z(t) = j | Y0 = i]   (6.64)
The MRGP does not have the Markov property in general, but there is a sequence of time points at which the Markov property holds. From the above definition, it is obvious that every SMP is an MRGP. The difference between an SMP and an MRGP is that in an SMP every state transition is a regeneration point, which is not necessarily true for the MRGP. The requirement of regeneration at every state transition makes the SMP of limited interest for transient analysis of systems that involve deterministic parameters, as in a communication system. An example of the sample path of an MRGP is shown in Figure 6.8, which illustrates that not every arrival epoch is a regeneration point.
As discussed earlier, MRGPs are a generalization of many stochastic processes including Markov processes. The difference between an SMP and an MRGP can be seen by comparing the sequence of Markov regeneration epochs with the sequence of state transition instants. In an SMP, every state transition is a regeneration point, which means that the two sequences coincide. In an MRGP, not every transition instant is a regeneration point, so the regeneration epochs form a subsequence of the transition instants. The fact that every transition is a regeneration point in the SMP limits its use in applications that involve deterministic parameters.
The time instants at which transitions take place are called regeneration points. The behavior of the process can be determined by the latest regeneration point, which can specify the elapsed time and the latest state visited. A regeneration point is independent of the elapsed time up to the jump instant.
For an SMP, no state change occurs between successive regeneration points, so the sample paths are piecewise constant, and the nth regeneration point is the instant at which the nth jump occurs. But for an MRGP, the stochastic process between successive regeneration points can be any continuous-time stochastic process, including a CTMC, an SMP, or another MRGP. This means that the sample paths are no longer piecewise constant, because local behaviors exist between consecutive Markov regeneration points. The jumps do not necessarily have to occur at the regeneration points. More information on MRGPs can be found in Kulkarni (2010).
URL: https://www.sciencedirect.com/science/article/pii/B9780124077959000062
Source: https://www.sciencedirect.com/topics/mathematics/time-markov-chain