AHL 4.19 – Markov Chains & Transition Matrices

Key terms:

  • Transition matrix (T): Square matrix of probabilities Tᵢⱼ = P(next state = i | current = j). Columns sum to 1 when using column state vectors.
  • State vector (sₙ): Column vector of probabilities for each state at step n: sₙ = Tⁿ s₀.
  • Steady state / stationary vector: Vector s satisfying T s = s with entries summing to 1; it is the eigenvector of T with eigenvalue 1, normalised.

πŸ“Œ What is a transition matrix?

  • Definition: For a discrete-time Markov chain with finitely many states, the transition matrix T gives the probabilities of moving between states at each step.
  • Column convention used here: sₙ₊₁ = T sₙ (so the columns of T sum to 1). If you use row vectors, transpose the formulas accordingly.
  • Evolution over n steps: sₙ = Tⁿ s₀. Use your GDC's matrix power function to compute Tⁿ rather than multiplying by hand.
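The evolution sₙ = Tⁿ s₀ can be sketched in a few lines of plain Python (an illustrative sketch using a made-up two-state matrix; the GDC remains the expected tool in an exam):

```python
# Evolve a state vector under s_{n+1} = T s_n (column-vector convention:
# T is stored as a list of rows, and each COLUMN of T sums to 1).
def step(T, s):
    """One step of the chain: returns the product T s."""
    return [sum(T[i][j] * s[j] for j in range(len(s))) for i in range(len(T))]

def evolve(T, s0, n):
    """Returns s_n = T^n s0 by repeated multiplication."""
    s = list(s0)
    for _ in range(n):
        s = step(T, s)
    return s

# Hypothetical two-state example (each column sums to 1):
T = [[0.9, 0.2],
     [0.1, 0.8]]
print(evolve(T, [1.0, 0.0], 3))  # s_3, starting from state 1
```

Each entry of the result is a probability, so the output should still sum to 1 (up to rounding).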

🌍 Real-World Connection

Transition matrices model many processes: customer flows between product categories, disease compartment flows, or habitat occupancy in ecology. The same linear algebraic tools apply.

πŸ“Œ How to compute Tn and state probabilities (GDC instructions)

Use your GDC for powers of T rather than multiplying matrices by hand. Below are stepwise GDC instructions (TI / Casio / Sharp menus vary, but the operations are the same).

  1. Enter T into the matrix editor (label it Matrix A or M1). Make sure each column sums to 1 (column convention).
  2. Enter the initial state s₀ as a column vector (Matrix B or a vector list): e.g., s₀ = [1, 0, 0]ᵀ if the system starts in state 1.
  3. Compute Tⁿ: use the matrix power function (e.g., Aⁿ) or repeated multiplication on systems without a direct power key. On many calculators: compute Matrix Aⁿ, then multiply by s₀.
  4. Multiply: sₙ = Tⁿ × s₀. Read the resulting vector: the entries are probabilities and should sum to 1 within rounding error.
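The check in step 1 (each column sums to 1) is easy to get wrong when typing a matrix in. A small sanity check, sketched in plain Python (an illustrative helper, not a GDC command):

```python
def is_valid_transition_matrix(T, tol=1e-9):
    """Check T is square, entries are probabilities, and each column sums to 1."""
    n = len(T)
    if any(len(row) != n for row in T):
        return False
    if any(not (0.0 <= T[i][j] <= 1.0) for i in range(n) for j in range(n)):
        return False
    # Column j sums over the rows i (column-vector convention).
    return all(abs(sum(T[i][j] for i in range(n)) - 1.0) < tol for j in range(n))

T_good = [[0.2, 0.1, 0.4],
          [0.5, 0.6, 0.3],
          [0.3, 0.3, 0.3]]
T_bad = [[0.2, 0.1, 0.4],
         [0.5, 0.6, 0.3],
         [0.3, 0.3, 0.2]]   # third column sums to 0.9, not 1
print(is_valid_transition_matrix(T_good), is_valid_transition_matrix(T_bad))  # True False
```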

🧠 Examiner Tip

  • Always show the GDC command you used (e.g., "Matrix A¹⁰ × vector s₀" or "eigenvector(M) → v").
  • Label matrices clearly in your script/answer (A = T, v = s₀), and show the normalising check sum(v) = 1.

πŸ“Œ Steady state: eigenvector method and normalisation

  • Definition: steady state s satisfies T s = s (so s is an eigenvector with eigenvalue 1).
  • Compute with GDC (recommended):
    1. Use your calculator's eigen solver: compute the eigenvectors of T and identify the eigenvector(s) for eigenvalue 1.
    2. Normalise the eigenvector v: divide by the sum of its entries so they sum to 1 (s = v / sum(v)).
  • Alternative solve: solve the linear system (I − T) s = 0 together with the constraint sum(s) = 1. Many calculators can solve linear systems; include the normalising equation as an extra row (replace one redundant equation with sum = 1).
  • Interpretation: the steady state gives the long-run distribution if the chain is regular (ergodic); that is, regardless of the starting state, the chain tends to s as n → ∞.
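A small worked instance of the alternative solve, using a hypothetical two-state matrix (not one of this section's examples):

```latex
% Hypothetical 2-state chain with columns summing to 1.
\[
T = \begin{pmatrix} 0.9 & 0.2 \\ 0.1 & 0.8 \end{pmatrix},
\qquad
(I - T)\,\mathbf{s} =
\begin{pmatrix} 0.1 & -0.2 \\ -0.1 & 0.2 \end{pmatrix}
\begin{pmatrix} s_1 \\ s_2 \end{pmatrix} = \mathbf{0}
\quad\Rightarrow\quad 0.1\,s_1 = 0.2\,s_2 .
\]
The two rows are redundant (they sum to zero), so replace one with the normalising equation $s_1 + s_2 = 1$:
\[
s_1 = 2 s_2,\qquad s_1 + s_2 = 1
\quad\Rightarrow\quad
\mathbf{s} = \begin{pmatrix} 2/3 \\ 1/3 \end{pmatrix}.
\]
```

Check: T s = s holds for s = (2/3, 1/3)ᵀ, and the entries sum to 1.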

πŸ“ IA Spotlight

For an IA you can gather transition data (e.g., student study locations over weeks), build T from observed frequencies, compute steady state via GDC, and discuss whether the chain is regular and what that implies about long-run behaviour.

πŸ“Œ PDP-1 concept, diagonalisation note & solving via linear algebra

If T can be diagonalised as T = PDP⁻¹ (where D is the diagonal matrix of eigenvalues), then Tⁿ = PDⁿP⁻¹. This helps analyse long-run behaviour: eigenvalues of modulus less than 1 decay as n grows, so the eigenvalue 1 dominates the long-term limit. Use your GDC for the eigen-decomposition rather than computing P or P⁻¹ manually.
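In symbols (a generic sketch, assuming T is diagonalisable with a single eigenvalue 1 and all other eigenvalues of modulus less than 1):

```latex
\[
T = P D P^{-1},
\qquad
D = \operatorname{diag}(1,\lambda_2,\dots,\lambda_k),
\qquad
|\lambda_i| < 1 \ \text{for } i \ge 2,
\]
\[
T^n = P D^n P^{-1}
    = P \operatorname{diag}(1,\lambda_2^{\,n},\dots,\lambda_k^{\,n}) P^{-1}
\;\longrightarrow\;
P \operatorname{diag}(1,0,\dots,0) P^{-1}
\quad\text{as } n \to \infty,
\]
so $s_n = T^n s_0$ converges to a multiple of the eigenvector for eigenvalue 1, i.e. the steady state.
```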

πŸ” TOK Perspective

When using Markov models to predict real systems, which assumptions (stationarity, independence between steps) are being made, and how do they affect the trustworthiness of predictions?

πŸ“Œ Worked examples (AIHL IB style) β€” GDC based solutions

Below are three short problems similar to AIHL exam style. Each solution shows explicit GDC steps (general instructions; menus differ by calculator).

Q1 β€” One step probability

Transition matrix (states A, B, C):
T = [ [0.2, 0.1, 0.4], [0.5, 0.6, 0.3], [0.3, 0.3, 0.3] ] (rows shown; each column sums to 1).
If the chain starts at s₀ = [1, 0, 0]ᵀ (state A), find s₂.

GDC steps:

  1. Enter T into Matrix A (3×3) exactly as above (rows as shown; columns must sum to 1).
  2. Enter s₀ as Matrix B (3×1): [1; 0; 0].
  3. Compute A², then multiply: A² × B gives s₂. Read the entries (probabilities).

Interpretation: the entries give the probabilities of being in A, B, C after 2 steps.
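The same computation sketched in plain Python (mirroring the GDC steps above; not an exam requirement):

```python
# Q1: compute s2 = T^2 s0 for the 3-state chain
# (T stored as rows; each column sums to 1).
T = [[0.2, 0.1, 0.4],
     [0.5, 0.6, 0.3],
     [0.3, 0.3, 0.3]]
s = [1.0, 0.0, 0.0]          # start in state A
for _ in range(2):           # two steps: s -> T s -> T^2 s
    s = [sum(T[i][j] * s[j] for j in range(3)) for i in range(3)]
print([round(x, 2) for x in s])  # [0.21, 0.49, 0.3]
```

So after two steps the chain is in A, B, C with probabilities 0.21, 0.49, 0.30, which sum to 1.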

Q2 β€” Steady state (eigenvector method)

Use the same T as Q1. Find steady state s (long-run distribution).

GDC steps:

  1. Compute the eigenvalues/eigenvectors of Matrix A (often menu: Matrix → Eigen → Eigenvectors).
  2. Identify the eigenvector v corresponding to eigenvalue 1.
  3. Normalise: s = v / sum(v). Show sum(s) ≈ 1.

Note: if eigenvalue 1 is repeated or the chain is not regular, explain the implications in your written answer.
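Without an eigen solver, the steady state of a regular chain can also be approximated by iterating sₙ₊₁ = T sₙ until the vector stops changing (a plain-Python sketch of the same idea as computing Aⁿ × s₀ for large n on the GDC):

```python
# Q2: approximate the steady state of the same T by repeated multiplication.
T = [[0.2, 0.1, 0.4],
     [0.5, 0.6, 0.3],
     [0.3, 0.3, 0.3]]
s = [1.0, 0.0, 0.0]
for _ in range(50):          # regular chain: s_n converges to the steady state
    s = [sum(T[i][j] * s[j] for j in range(3)) for i in range(3)]
print([round(x, 4) for x in s])  # [0.2111, 0.4889, 0.3]
```

Solving T s = s with sum(s) = 1 exactly gives s = (19/90, 22/45, 3/10) ≈ (0.2111, 0.4889, 0.3), matching the iteration.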

Q3 β€” Absorbing state / long-run behaviour

Suppose state C in Q1 becomes absorbing (probability 1 of remaining in C). Modify T accordingly and find the probability of eventual absorption in C starting from A.

GDC approach:

  1. Change the third column to [0, 0, 1]ᵀ to make C absorbing (or rebuild T).
  2. Compute Tⁿ for large n (e.g., n = 50 or 100) using Matrix A¹⁰⁰ × s₀. The third entry of the result approximates the eventual absorption probability.
  3. Confirm via the eigenvector method, or solve the absorption probabilities exactly using the fundamental matrix (optional advanced step).
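The large-n approach in plain Python. Here both A and B can reach C, and C is the only absorbing state, so absorption in C is eventually certain and the third entry tends to 1:

```python
# Q3: make C absorbing (third column becomes [0, 0, 1]) and iterate.
T = [[0.2, 0.1, 0.0],
     [0.5, 0.6, 0.0],
     [0.3, 0.3, 1.0]]
s = [1.0, 0.0, 0.0]          # start in state A
for _ in range(100):         # s approximates T^100 s0
    s = [sum(T[i][j] * s[j] for j in range(3)) for i in range(3)]
print(round(s[2], 6))  # 1.0
```

The transient probability mass shrinks by at least a factor 0.7 per step here, so after 100 steps it is negligible.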

πŸ“ Paper tip

  • When asked for an "eventual probability", show Aⁿ × s₀ for large n and state the n used. Write "T¹⁰⁰ × s₀ (GDC) → approx." so markers know you used technology.
  • If chain has absorbing states, name them and explain why the chain converges to a distribution concentrated on absorbing classes.

🌐 EE Focus

Extended essay idea: compare Markov modelling across domains (biology: species movement; CS: PageRank; economics: consumer states). Use GDC eigen analyses and discuss model assumptions and data collection issues.

❀️ CAS Link

A CAS project could track individual habits (study, exercise) across weeks, create transition matrices from observed transitions and present findings on habit stability and interventions to change long-term behaviour.

πŸ“Œ Final checklist (what to show in exam answers)

  • Label your matrices and vectors (e.g., T, s₀), and show the GDC command used (Matrix Aⁿ × s₀).
  • Show normalising sum(s) = 1 when presenting state vectors or steady state.
  • If you use eigenvectors, show eigenvalue = 1 and the normalised eigenvector (s = v / sum(v)).
  • Comment on chain regularity, absorbing states or periodicity if relevant to convergence.

🌍 Final real-world note

Markov chains are powerful but depend on the quality of transition data and validity of the time-homogeneity assumption; always check whether transition probabilities vary over time before claiming long-run results.