| Term / Concept | Definition / short explanation |
|---|---|
| Transition matrix (T) | Square matrix of probabilities T_ij = P(next state = i \| current state = j). Columns sum to 1 when using column state vectors. |
| State vector (s_n) | Column vector of probabilities for each state at step n: s_n = T^n s_0. |
| Steady state / stationary vector | Vector s satisfying T s = s with entries summing to 1; i.e., an eigenvector of T for eigenvalue 1, normalised so its entries sum to 1. |
## What is a transition matrix?
- Definition: For a discrete-time Markov chain with finite states, a transition matrix T gives the probabilities of moving between states at each step.
- Column convention used here: s_{n+1} = T s_n (so columns of T sum to 1). If you use row vectors, transpose the formulas accordingly.
- Evolution over n steps: s_n = T^n s_0; use your GDC to compute T^n rather than multiplying by hand.
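A minimal numerical illustration of the column convention (the 2-state matrix here is invented for this sketch, not taken from the examples below):

```python
import numpy as np

# Column convention: T[i, j] = P(next = i | current = j); columns sum to 1.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
s0 = np.array([1.0, 0.0])    # certainly in state 1 at step 0

s1 = T @ s0                  # one step: s_{n+1} = T s_n
print(s1)                    # [0.9, 0.1] -- the first column of T
```

One step from a certain starting state simply reads off the corresponding column of T, which is a quick sanity check that the matrix was entered in the right orientation.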
## Real-World Connection
Transition matrices model many processes: customer flows between product categories, disease compartment flows, or habitat occupancy in ecology. The same linear algebraic tools apply.
## How to compute T^n and state probabilities (GDC instructions)
Use your GDC rather than manual matrix multiplication — always use the calculator for powers of T. Below are stepwise GDC instructions (TI / Casio / Sharp menus vary, but the operations are the same).
- Enter T into the matrix editor (label it Matrix A or M1). Make sure each column sums to 1 (column convention).
- Enter the initial state s_0 as a column vector (Matrix B or a vector list): e.g., s_0 = [1, 0, 0]^T if the system starts in state 1.
- Compute T^n: use the matrix power function (e.g., A^n), or repeated multiplication on calculators without a direct power key. On many calculators: compute Matrix A^n, then multiply by s_0.
- Multiply: s_n = T^n × s_0. Read off the resulting vector — its entries are probabilities (they should sum to 1 within rounding error).
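The steps above can be mirrored in a few lines of code as a check on the GDC answer; this sketch uses numpy and a hypothetical 2-state matrix (not one of the worked examples):

```python
import numpy as np

# Hypothetical 2-state chain, column convention:
# T[i, j] = P(next state = i | current state = j), so each column sums to 1.
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
s0 = np.array([1.0, 0.0])                # "Matrix B": start in state 1

n = 5
sn = np.linalg.matrix_power(T, n) @ s0   # T^n × s0
print(sn, sn.sum())                      # probabilities; sum should be 1
```

`matrix_power` plays the role of the calculator's A^n key; the final sum is the normalising check the notes ask you to show.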
## Examiner Tip
- Always show the GDC command you used (e.g., "Matrix A^10 × s_0" or "eigenvector(M) → v").
- Label matrices clearly in your script/answer (A = T, v = s_0), and show the normalising check sum(v) = 1.
## Steady state: eigenvector method and normalisation
- Definition: steady state s satisfies T s = s (so s is an eigenvector with eigenvalue 1).
- Compute with GDC (recommended):
- Use your calculator’s eigen solver: compute eigenvectors of T and identify eigenvector(s) for eigenvalue 1.
- Normalise the eigenvector v: divide by the sum of its entries so they sum to 1 (s = v / sum(v)).
- Alternative solve: Solve linear system (I – T) s = 0 with constraint sum(s) = 1. Many calculators can solve linear systems; include the normalising equation as an extra row (replace one redundant equation with sum = 1).
- Interpretation: the steady state gives the long-run distribution if the chain is regular (some power of T has all entries positive) — regardless of the starting state, s_n → s as n → ∞.
## IA Spotlight
For an IA you can gather transition data (e.g., student study locations over weeks), build T from observed frequencies, compute steady state via GDC, and discuss whether the chain is regular and what that implies about long-run behaviour.
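A sketch of how such a T could be estimated from an observed sequence; the state labels and data here are invented for illustration:

```python
import numpy as np

# Hypothetical weekly observations: 0 = library, 1 = home, 2 = cafe.
observed = [0, 0, 1, 1, 2, 0, 1, 2, 2, 0, 0, 1]

counts = np.zeros((3, 3))
for prev, nxt in zip(observed, observed[1:]):
    counts[nxt, prev] += 1          # column convention: column = current state

# Divide each column by its total so columns sum to 1.
T = counts / counts.sum(axis=0, keepdims=True)
print(T)
```

Each column of the estimated T then sums to 1, ready for the steady-state analysis above; with real data you should also check that every state was actually observed as a "current" state, or the corresponding column total is zero.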
## Diagonalisation (T = P D P^-1) and solving via linear algebra
If T can be diagonalised as T = P D P^-1 (where D is the diagonal matrix of eigenvalues), then T^n = P D^n P^-1. This explains long-run behaviour: components along eigenvalues of modulus less than 1 decay as n grows, so the eigenvalue-1 component dominates the long-term limit. Use your GDC for the eigen-decomposition rather than computing P or P^-1 manually.
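The decay of the smaller eigenvalues can be seen numerically; a sketch with an invented 2-state T, checking that P D^n P^-1 matches direct powering:

```python
import numpy as np

T = np.array([[0.9, 0.2],      # hypothetical chain; eigenvalues are 1 and 0.7
              [0.1, 0.8]])

vals, P = np.linalg.eig(T)                          # T = P D P^-1
n = 10
Tn_diag = P @ np.diag(vals**n) @ np.linalg.inv(P)   # T^n = P D^n P^-1
Tn_pow = np.linalg.matrix_power(T, n)

print(np.allclose(Tn_diag, Tn_pow))                 # the two routes agree
# As n grows, 0.7**n -> 0, so only the eigenvalue-1 component survives.
```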
## TOK Perspective
When using Markov models to predict real systems, which assumptions (stationarity, independence between steps) are being made, and how do they affect the trustworthiness of predictions?
## Worked examples (AI HL style) — GDC-based solutions
Below are three short problems in AI HL exam style. Each solution shows explicit GDC steps (general instructions — menus differ by calculator).
Q1 — Probabilities after two steps
Transition matrix (states A,B,C):
T = [ [0.2, 0.1, 0.4], [0.5, 0.6, 0.3], [0.3, 0.3, 0.3] ] (rows shown; each column sums to 1).
If the start is s_0 = [1, 0, 0]^T (state A), find s_2.
GDC steps:
- Enter T into Matrix A (3×3) exactly as above (the layout must match the column convention).
- Enter s_0 as Matrix B (3×1): [1; 0; 0].
- Compute A^2, then multiply: A^2 × B — the result is s_2. Read off the entries (probabilities).
Interpretation: entries show probability of being in A,B,C after 2 steps.
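As a cross-check of the GDC result, a numpy sketch of Q1 (same T, rows as written above):

```python
import numpy as np

T = np.array([[0.2, 0.1, 0.4],   # Q1's matrix; each column sums to 1
              [0.5, 0.6, 0.3],
              [0.3, 0.3, 0.3]])
s0 = np.array([1.0, 0.0, 0.0])   # start in state A

s2 = np.linalg.matrix_power(T, 2) @ s0
print(s2)                        # s2 = [0.21, 0.49, 0.30]
```

The entries sum to 1, matching the normalising check the mark scheme expects.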
Q2 — Steady state (eigenvector method)
Use the same T as Q1. Find steady state s (long-run distribution).
GDC steps:
- Compute eigenvalues/eigenvectors of Matrix A (often menu: Matrix → Eigen → Eigenvectors).
- Identify the eigenvector v corresponding to eigenvalue 1.
- Normalise: s = v / sum(v). Show sum(s) = 1.
Note: if eigenvalue 1 is repeated or the chain is not regular, explain the implications in your write-up.
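The same steady-state computation in numpy, as a check on the GDC answer (for this T the exact values are 19/90, 44/90 and 27/90):

```python
import numpy as np

T = np.array([[0.2, 0.1, 0.4],   # Q1's matrix; each column sums to 1
              [0.5, 0.6, 0.3],
              [0.3, 0.3, 0.3]])

vals, vecs = np.linalg.eig(T)
k = np.argmin(np.abs(vals - 1.0))    # locate eigenvalue 1
v = np.real(vecs[:, k])
s = v / v.sum()                      # normalise so entries sum to 1

print(s)                             # approx [0.2111, 0.4889, 0.3000]
print(np.allclose(T @ s, s))         # steady-state check: T s = s
```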
Q3 — Absorbing state / long-run behaviour
Suppose state C in Q1 becomes absorbing (probability 1 to stay). Modify T accordingly and find probability of eventual absorption in C starting from A.
GDC approach:
- Change the third column to [0, 0, 1]^T to make C absorbing (or rebuild T).
- Compute T^n for large n (e.g., n = 50 or 100): Matrix A^100 × s_0. The third entry of the result approximates the eventual absorption probability.
- Confirm via the eigenvector method, or solve for the absorption probabilities exactly using the fundamental matrix (optional advanced step).
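The same long-run calculation sketched in numpy, with Q1's matrix modified so that column C is absorbing:

```python
import numpy as np

# Q1's T with the third COLUMN replaced by [0, 0, 1] so C is absorbing.
T = np.array([[0.2, 0.1, 0.0],
              [0.5, 0.6, 0.0],
              [0.3, 0.3, 1.0]])
s0 = np.array([1.0, 0.0, 0.0])       # start in state A

s_long = np.linalg.matrix_power(T, 100) @ s0
print(s_long)    # third entry is essentially 1: absorption in C is certain
```

Here every transient state can reach C, so absorption from A happens with probability 1; the fundamental-matrix method on the transient block gives the same conclusion exactly.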
## Paper tip
- When asked for an "eventual probability", show A^n × s_0 for large n and state the n used. Write "T^100 × s_0 (GDC) → approx." so markers know you used technology.
- If chain has absorbing states, name them and explain why the chain converges to a distribution concentrated on absorbing classes.
## EE Focus
Extended essay idea: compare Markov modelling across domains (biology: species movement; CS: PageRank; economics: consumer states). Use GDC eigen analyses and discuss model assumptions and data collection issues.
## CAS Link
A CAS project could track individual habits (study, exercise) across weeks, create transition matrices from observed transitions and present findings on habit stability and interventions to change long-term behaviour.
## Final checklist (what to show in exam answers)
- Label your matrices and vectors (e.g., T, s_0), and show the GDC command used (Matrix A^n × s_0).
- Show normalising sum(s) = 1 when presenting state vectors or steady state.
- If you use eigenvectors, show eigenvalue = 1 and the normalised eigenvector (s = v / sum(v)).
- Comment on chain regularity, absorbing states or periodicity if relevant to convergence.
## Final real-world note
Markov chains are powerful but depend on the quality of transition data and validity of the time-homogeneity assumption; always check whether transition probabilities vary over time before claiming long-run results.