DP Mathematics: Applications and Interpretation Questionbank
AHL 4.19—Transition matrices – Markov chains
Description
[N/A]
Directly related questions
- EXN.1.AHL.TZ0.5a: Find the transition matrix for the maze. (A transition-matrix sketch is given after this list.)
- EXN.1.AHL.TZ0.5b: A scientist sets up the robot and then leaves it moving around the maze for a long period of time. Find the probability that the robot is in room when the scientist returns.
- 21M.1.AHL.TZ2.13a: Determine the transition matrix for this graph.
- 21M.1.AHL.TZ2.13b: If the mouse was left to wander indefinitely, use your graphic display calculator to estimate the percentage of time that the mouse would spend at point . (A long-run matrix-power sketch is given after this list.)
- 21M.1.AHL.TZ2.13c: Comment on your answer to part (b), referring to at least one limitation of the model.
- 21M.2.AHL.TZ1.5a: It is sunny today. Find the probability that it will be sunny in three days’ time.
- 21M.2.AHL.TZ1.5b: Find the eigenvalues and eigenvectors of .
- 21M.2.AHL.TZ1.5d: Hence find the long-term percentage of sunny days in Vokram.
- 21M.2.AHL.TZ1.5c.i: Write down the matrix .
- 21M.2.AHL.TZ1.5c.ii: Write down the matrix .
- 21N.1.AHL.TZ0.9a: Complete the following transition diagram to represent this information.
- 21N.1.AHL.TZ0.9b: Katie works for days in a year. Find the probability that Katie cycles to work on her final working day of the year.
- 22M.1.AHL.TZ1.11b: Using your answer to (a), or otherwise, find the long-term probability of the switch being in state . Give your answer in the form , where .
- SPM.2.AHL.TZ0.6e: Hence write down the number of customers that company X can expect to have in the long term.
- SPM.2.AHL.TZ0.6a: Write down a transition matrix T representing the movements between the two companies in a particular year.
- SPM.2.AHL.TZ0.6b: Find the eigenvalues and corresponding eigenvectors of T.
- SPM.2.AHL.TZ0.6c: Hence write down matrices P and D such that T = PDP⁻¹.
- SPM.2.AHL.TZ0.6d: Find an expression for the number of customers company X has after years, where . (A diagonalisation sketch is given after this list.)
- 22M.2.AHL.TZ2.5a.ii: What does represent in this context?
- EXM.1.AHL.TZ0.17a: Write down the transition matrix for this Markov chain.
- EXM.3.AHL.TZ0.4a.ii: Explain why, for any transition state diagram, the sum of the out degrees of the directed edges from a vertex (state) must add up to +1.
- EXM.1.AHL.TZ0.18a: Show that 1 is always an eigenvalue for M and find the other eigenvalue in terms of and .
- EXM.3.AHL.TZ0.4g: Explain how your answer to part (f) fits with your answer to part (c).
- EXM.3.AHL.TZ0.4b: Write down the transition matrix M, for this Markov chain problem.
- EXM.3.AHL.TZ0.4c.ii: Explain which part of the transition state diagram confirms this.
- 22M.2.AHL.TZ2.5a.i: Write down the value of .
- 22M.2.AHL.TZ2.5d.i: when .
- EXM.3.AHL.TZ0.4e: Find .
- EXM.1.AHL.TZ0.17b: Given that she went out for lunch on a particular Sunday, find the probability that she went out for lunch on the following Tuesday.
- EXM.1.AHL.TZ0.18b: Find the steady state probability vector for M in terms of and . (A steady-state sketch is given after this list.)
- EXM.3.AHL.TZ0.4f: Hence, deduce the form of .
- EXM.3.AHL.TZ0.4c.i: Find the steady state probability vector for this Markov chain problem.
- EXM.1.AHL.TZ0.17c: Find the steady state probability vector for this Markov chain.
- EXM.3.AHL.TZ0.4d: Explain why having a steady state probability vector means that the matrix M must have an eigenvalue of 1.
- EXM.3.AHL.TZ0.4h: Find the minimum number of tosses of the coin that Abi will have to make to be at least 95% certain of having finished the game by reaching state C. (An absorbing-state sketch is given after this list.)
- EXM.3.AHL.TZ0.4a.i: Draw a transition state diagram for this Markov chain problem.
- 22M.2.AHL.TZ2.5d.ii: in the long term.
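Several parts, such as EXN.1.AHL.TZ0.5a, 21M.1.AHL.TZ2.13a and 21M.2.AHL.TZ1.5a, amount to writing down a transition matrix and then applying it for a fixed number of steps. The following is a minimal sketch of that calculation; the two-state weather chain and its probabilities are invented for illustration and are not the values used in any of the papers.

```python
import numpy as np

# Hypothetical two-state weather chain (states: sunny, rainy).
# Column j holds the probabilities of moving FROM state j, so each
# column sums to 1 and the state is stored as a column probability vector.
T = np.array([[0.8, 0.3],   # P(sunny tomorrow | sunny), P(sunny tomorrow | rainy)
              [0.2, 0.7]])  # P(rainy tomorrow | sunny), P(rainy tomorrow | rainy)

s0 = np.array([1.0, 0.0])   # it is sunny today

# Distribution in three days' time: s3 = T^3 s0.
s3 = np.linalg.matrix_power(T, 3) @ s0
print(s3[0])                # P(sunny in three days' time)
```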
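The long-run parts (EXN.1.AHL.TZ0.5b, 21M.1.AHL.TZ2.13b, SPM.2.AHL.TZ0.6e, 22M.2.AHL.TZ2.5d.ii) are usually answered on a GDC by raising the transition matrix to a large power. A sketch of that idea, reusing the same hypothetical matrix as above:

```python
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# For a regular chain, T^n settles down as n grows: every column approaches
# the steady-state vector, so the long-term probabilities no longer depend
# on the starting state.
print(np.linalg.matrix_power(T, 50))
```

For comment parts such as 21M.1.AHL.TZ2.13c, the usual limitation is that this model assumes the transition probabilities stay constant and that each move depends only on the current state.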
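The SPM.2.AHL.TZ0.6 and 21M.2.AHL.TZ1.5 sequences follow the standard route: find the eigenvalues and eigenvectors, write down P and D with T = PDP⁻¹, and hence use Tⁿ = PDⁿP⁻¹ for the behaviour after n years and in the long term. A sketch with the same hypothetical matrix (not the figures from either paper):

```python
import numpy as np

T = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Diagonalisation: the columns of P are eigenvectors, D holds the
# eigenvalues on its diagonal, so T = P D P^-1 and T^n = P D^n P^-1.
eigenvalues, P = np.linalg.eig(T)
D = np.diag(eigenvalues)

n = 10
Tn = P @ np.linalg.matrix_power(D, n) @ np.linalg.inv(P)
print(np.allclose(Tn, np.linalg.matrix_power(T, n)))  # True
```

One eigenvalue is 1 and the other satisfies |λ| < 1, so Dⁿ tends to diag(1, 0); that is what makes the "hence write down the long-term ..." steps work.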
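The steady-state parts (EXM.1.AHL.TZ0.17c, EXM.1.AHL.TZ0.18a, EXM.1.AHL.TZ0.18b, EXM.3.AHL.TZ0.4c.i, EXM.3.AHL.TZ0.4d, 22M.1.AHL.TZ1.11b) all rest on the fact that a transition matrix has 1 as an eigenvalue, and that the steady-state vector is the corresponding eigenvector scaled so its entries sum to 1. A sketch of the 2×2 case, using a and b as generic entries (a labelling assumption, not necessarily the letters used in the paper):

```latex
M = \begin{pmatrix} a & 1-b \\ 1-a & b \end{pmatrix}, \qquad 0 \le a,\ b \le 1.
% Each column sums to 1, so (1 \;\; 1)M = (1 \;\; 1); hence 1 is an
% eigenvalue of M^{T} and therefore of M. The trace then gives the other:
\lambda_1 = 1, \qquad \lambda_2 = a + b - 1.
% The steady state solves M\mathbf{s} = \mathbf{s} with entries summing to 1:
\mathbf{s} = \frac{1}{(1-a) + (1-b)} \begin{pmatrix} 1-b \\ 1-a \end{pmatrix}.
% Conversely, M\mathbf{s} = \mathbf{s} = 1\cdot\mathbf{s} is exactly the
% statement that 1 is an eigenvalue of M (EXM.3.AHL.TZ0.4d).
```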
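EXM.3.AHL.TZ0.4h asks for the least number of coin tosses giving at least a 95% chance of having reached the absorbing state C. The three-state game below is an assumption made up for illustration (the actual rules are set out in the paper); the method is simply to apply the transition matrix repeatedly and watch the probability of being in C.

```python
import numpy as np

# Hypothetical game: from A or B a head moves one state towards C,
# a tail returns to A; C is absorbing. States ordered (A, B, C),
# entry (i, j) = P(next state is i | current state is j).
M = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 1.0]])

state = np.array([1.0, 0.0, 0.0])   # the game starts in state A
tosses = 0
while state[2] < 0.95:              # P(already finished in C)
    state = M @ state
    tosses += 1
print(tosses)                       # minimum number of tosses needed
```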