caterscam.github.io

  • AHL 4.14 — Linear combinations of random variables (expectation & variance)

    This section explains how expectations and variances behave under linear transformations and linear combinations.
    You will learn formulas for E(aX + b), Var(aX + b), and for sums of independent variables, plus why
    the sample mean and the sample variance (with n−1) are unbiased estimators. Emphasis is on interpretation and use.

    Term / concept: Definition / short explanation
    Expectation / Mean (E): The long-run average value of a random variable. Notation: E(X) = μ.
    Variance (Var): Expected squared deviation from the mean: Var(X) = E[(X − μ)²] = σ².
    Linear transform: Y = aX + b. Scale by a, shift by b. Affects mean and variance in specific ways (below).
    Sample mean & variance: x̄ = (1/n) Σ xi; s²ₙ₋₁ = (1/(n−1)) Σ (xi − x̄)² — unbiased estimates of μ and σ².

    📌 1. Expectation under linear transformation

    If Y = aX + b (a and b constants) then:

    E(aX + b) = a E(X) + b.

    • Why: expectation is linear — sum and constants can be pulled out of E(·).
    • Interpretation: scaling X by a multiplies the mean by a; shifting by b adds b to the mean.
    • Example:
      If E(X) = 10 and Y = 3X − 4 then E(Y) = 3×10 − 4 = 26.
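
    Linearity can also be checked with a quick simulation (a sketch; the two-point distribution with E(X) = 10 is an assumption chosen for illustration):

```python
import random

random.seed(1)

# Hypothetical X with E(X) = 10: takes the values 8 or 12 with equal chance.
samples = [random.choice([8, 12]) for _ in range(100_000)]

mean_x = sum(samples) / len(samples)                      # close to E(X) = 10
mean_y = sum(3 * x - 4 for x in samples) / len(samples)   # close to E(3X - 4) = 26
```

    The sample mean of 3X − 4 is exactly 3 times the sample mean of X minus 4, mirroring the population result.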

    📐 IA spotlight

    When creating an IA where you transform data, explicitly compute E before and after transformation to show understanding.
    If you scale raw measurements (e.g., convert metres to centimetres), show how the mean scales accordingly.

    📌 2. Variance under linear transformation

    For Y = aX + b:

    Var(aX + b) = a² Var(X).

    • Why b disappears: adding a constant shifts every observation by the same amount — it does not change spread.
    • Effect of a: multiplying by a scales spread by |a|, and variance (which squares deviations) scales by a².
    • Example:
      If Var(X) = 4 and Y = −2X + 5 then Var(Y) = (−2)² × 4 = 4 × 4 = 16.
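
    A similar simulation (with a hypothetical two-point X having Var(X) = 4) confirms that the shift by +5 leaves the variance unchanged:

```python
import random

random.seed(2)

def var(data):
    """Population variance: mean squared deviation from the mean."""
    m = sum(data) / len(data)
    return sum((d - m) ** 2 for d in data) / len(data)

# Hypothetical X with Var(X) = 4: takes the values -2 or 2 with equal chance.
xs = [random.choice([-2, 2]) for _ in range(100_000)]
ys = [-2 * x + 5 for x in xs]   # Y = -2X + 5

# var(ys) comes out as (-2)^2 * var(xs) = 16; the +5 shift has no effect.
```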

    🧠 Examiner tip

    • When asked for Var(aX + b), show the substitution and explicitly state Var(aX + b) = a²Var(X). If given numbers, compute both steps (square a, multiply by the given variance).
    • Always comment in words: “variance increases (or decreases) by a factor of a².”

    📌 3. Expectation and variance of linear combinations (several variables)

    Let X1, X2, …, Xn be random variables and ai constants. For the linear combination
    S = Σ ai Xi:

    • Expectation (always): E(S) = Σ ai E(Xi).
    • Variance (if the Xi are independent): Var(S) = Σ ai² Var(Xi).
    • If not independent, covariances appear: Var(S) = Σ ai² Var(Xi) + 2 Σ(i<j) ai aj Cov(Xi, Xj).
    • Practical point: Many exam problems assume independence so the simpler sum-of-variances formula applies.

    Example:

    Let X and Y be independent with Var(X)=9, Var(Y)=4. For Z = 2X − 3Y:
    Var(Z) = 2²×9 + (−3)²×4 = 4×9 + 9×4 = 36 + 36 = 72.
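
    A numerical check of this example (the normal distributions for X and Y are assumptions; only their variances matter for the formula being checked):

```python
import random

random.seed(3)
N = 200_000

def var(data):
    m = sum(data) / len(data)
    return sum((d - m) ** 2 for d in data) / len(data)

# Independent X and Y with Var(X) = 9, Var(Y) = 4.
xs = [random.gauss(0, 3) for _ in range(N)]
ys = [random.gauss(0, 2) for _ in range(N)]
zs = [2 * x - 3 * y for x, y in zip(xs, ys)]

# Under independence, Var(2X - 3Y) = 4*9 + 9*4 = 72.
```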

    🌍 Real-world connection

    Portfolio variance in finance is computed by combining variances of asset returns and their covariances.
    Independence is rare — covariances matter. This shows why the “sum of variances” formula must be used carefully in applications.

    📌 4. Sample mean and unbiasedness

    Given a random sample X1, …, Xn from a population with mean μ and variance σ²:

    • Sample mean: x̄ = (1/n) Σ Xi. It is an unbiased estimator of μ:
      E(x̄) = μ.
    • Variance of the sample mean (if the Xi are independent):
      Var(x̄) = σ² / n. So averaging reduces variance by a factor of n.

    Example:

    Population σ² = 16, n = 25 → Var(x̄) = 16 / 25 = 0.64. The standard error = √0.64 = 0.8.
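
    The σ²/n rule from this example can be verified by repeated sampling (a sketch; the normal population is an assumption, only σ² = 16 and n = 25 come from the example):

```python
import random

random.seed(4)

sigma2, n, trials = 16.0, 25, 20_000

# Draw many samples of size n and record each sample mean.
means = []
for _ in range(trials):
    sample = [random.gauss(0, sigma2 ** 0.5) for _ in range(n)]
    means.append(sum(sample) / n)

grand = sum(means) / trials
var_of_mean = sum((m - grand) ** 2 for m in means) / trials
# var_of_mean is close to sigma2/n = 16/25 = 0.64 (standard error about 0.8)
```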

    📌 5. Unbiased sample variance (s²ₙ₋₁)

    The sample variance with denominator (n−1) is:

    s²ₙ₋₁ = (1/(n−1)) Σ (xi − x̄)²

    • Unbiasedness: E[s²ₙ₋₁] = σ². Using n in the denominator would systematically underestimate σ².
    • Why n−1? Because we used the sample mean (an estimate) when computing deviations — one degree of freedom is lost.
    • Classroom check: for grouped data replace Σ (xi − x̄)² by Σ fi(xi − x̄)² and use n = Σ fi.
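
    The n versus n−1 comparison can be seen directly by simulation (a sketch; the normal population with σ² = 4 and the sample size n = 5 are illustrative assumptions):

```python
import random

random.seed(5)

sigma2, n, trials = 4.0, 5, 50_000

biased, unbiased = [], []
for _ in range(trials):
    xs = [random.gauss(0, 2) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    biased.append(ss / n)          # divide by n: underestimates sigma^2
    unbiased.append(ss / (n - 1))  # divide by n - 1: unbiased

mean_biased = sum(biased) / trials      # close to sigma^2 * (n-1)/n = 3.2
mean_unbiased = sum(unbiased) / trials  # close to sigma^2 = 4
```

    The n-denominator version averages about σ²(n−1)/n, which is exactly the bias the n−1 correction removes.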

    🔍 TOK perspective

    Discuss whether unbiasedness is always the best property to prioritise. In practice, a biased estimator might have smaller mean squared error — what should scientists value more: unbiasedness or lower overall error?

    🌐 EE focus

    An EE might explore properties of estimators (biased vs unbiased) and compare MSE (mean squared error) for different estimators in simulations to justify estimator choice.

    📌 Final checklist & common exam tasks

    • When given a linear transform, write both E(aX + b) and Var(aX + b) and compute numerically.
    • When summing independent variables use Var(sum) = sum Var — show independence assumption.
    • For sample statistics: show the formulas for x̄ and s²ₙ₋₁, state unbiasedness and compute the sample standard error when asked.
    • If asked to interpret, always give a one-line plain-English sentence (e.g., “scaling by 3 triples the mean, variance multiplies by 9”).

    🧠 Paper tip

    • Write the formula then substitute numbers. Examiners give method marks for clear symbolic steps even if arithmetic slips.
    • If a question mentions independence, explicitly include “assuming independence” when using Var(sum) = Σ Var.
    • Label units and give short interpretations of numerical answers.

    ❤️ CAS idea

    Run a school survey; compute sample mean and sample variance for different classes, show how averaging reduces variance and explain the practical meaning.

  • AHL 4.12 — designing data collection, categorisation, reliability & validity

    This topic explains how to design valid data collection methods (surveys, questionnaires, sampling),
    how to choose sensible categories when converting numerical data into a χ² table, and how to
    assess reliability and validity of measures.

    Term / concept: Definition / short explanation
    Survey / questionnaire design: Structured tool to collect data. Good design uses clear, unbiased questions, consistent answer choices, and pilot testing.
    Categorisation for χ²: Grouping continuous data into classes so that expected frequencies ≥ 5 and categories are meaningful and justified.
    Reliability: Consistency of a measure (repeatability). A reliable test gives similar results under similar conditions.
    Validity: The test measures what it is intended to measure (content, criterion-related validity). A valid test may not be reliable if poorly administered.

    📌 1. Designing valid data collection methods (surveys & questionnaires)

    When designing a survey, follow these core principles (each explained below with practical notes):

    • Clear purpose: define the research question and which variables are needed.
    • Question clarity: use simple language, avoid double-barrelled questions, define timeframes and units.
    • Answer choices: provide mutually exclusive, exhaustive response options; use Likert scales consistently.
    • Sampling plan: decide who will be sampled and how (random, stratified, convenience) and justify your choice.
    • Pilot testing: trial the questionnaire to find ambiguous items and estimate completion time.
    • Ethics and consent: obtain informed consent, anonymise data, and consider sensitive questions carefully.

    📐 IA Spotlight

    For an Internal Assessment you can base your investigation on a well-designed survey: state your sampling frame, pilot the questionnaire, present the final instrument in an appendix, and discuss how design choices (question wording, response categories) influence validity and reliability.

    📌 2. Sampling methods and selecting relevant variables

    • Simple random sampling: each member has equal chance — minimises selection bias but may be impractical for large populations.
    • Stratified sampling: divide population into subgroups (strata) and sample proportionally — ensures key groups are represented and reduces sampling error for subgroup estimates.
    • Cluster sampling: sample entire clusters (e.g., classes) when individual sampling is costly — efficient but increases design effect and standard errors.
    • Convenience sampling: easy but biased — use only when limitations are acknowledged and results are not generalised beyond the sample.
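
    Proportional stratified sampling can be sketched in a few lines (the strata names and sizes below are hypothetical):

```python
import random

random.seed(0)

# Hypothetical sampling frame: two year groups of different sizes.
population = {"Year 12": list(range(120)), "Year 13": list(range(80))}
total = sum(len(members) for members in population.values())   # 200
sample_size = 20

# Proportional allocation: each stratum contributes in proportion to its size.
sample = {}
for stratum, members in population.items():
    k = round(sample_size * len(members) / total)
    sample[stratum] = random.sample(members, k)
```

    Here the larger stratum contributes 12 of the 20 sampled members and the smaller one 8, matching their population shares.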

    Explicit guidance on variable selection:

    • Choose variables that directly address the research question — avoid collecting more variables than necessary which increases respondent burden and noise.
    • Prefer objective measures where possible (e.g., logged minutes, measured height) rather than subjective recall which may be biased.
    • If using derived variables (e.g., indices, ratios), define calculation steps clearly and test for sensitivity to outliers.

    🔍 TOK Perspective

    Consider the role of question framing and sampling in shaping “what we know”. How do choice of sample and wording influence the reliability of the knowledge claims derived from data?

    📌 3. Categorising numerical data for χ² analysis — principled choices

    When converting continuous numerical data into categories for a χ² goodness-of-fit or contingency table:

    • Meaningful boundaries: choose class limits with practical meaning (e.g., age groups 0–17, 18–34, 35–54, 55+), not arbitrary tiny splits.
    • Ensure expected counts ≥ 5: combine adjacent classes where necessary to keep expected frequencies above the commonly used threshold (reduces χ² bias).
    • K (number of classes): prefer fewer classes with sufficient counts rather than many sparse classes; justify K based on sample size and research aim.
    • Document decisions: always explain why classes were chosen and give the expected counts used in the χ2 calculation (transparency builds trust in conclusions).

    Example scenarios:

    • Public health: grouping blood pressure readings into ‘normal’, ‘elevated’, ‘stage 1’, ‘stage 2’ — categories chosen based on clinical thresholds, ensuring enough observations per category.
    • Education research: grouping exam scores into performance bands (fail / pass / merit / distinction) with bands wide enough to avoid very small expected counts.

    🌍 Real-World Connection

    Governments categorise income into brackets for policy analysis; choices of bracket width affect analyses of inequality. Analysts must justify bracket selection and show expected counts before reporting χ2 results.

    📌 4. Reliability and validity — definitions, tests and interpretation

    These concepts are distinct and both essential. Below are explicit methods and how to interpret them.

    Reliability

    • Test–retest: apply the same instrument to the same group at two times (with conditions stable). High correlation between scores suggests temporal reliability. Interpret carefully: true change vs measurement error must be considered.
    • Parallel forms: two different versions of the test administered to same subjects; if scores correlate highly, forms are consistent.
    • Internal consistency (conceptual): for multi-item scales (Likert), check whether items measuring the same construct agree (Cronbach’s alpha is a statistic used outside SL; mention conceptually).
    • Practical note: high reliability does not guarantee validity — a consistently wrong ruler gives reliable but invalid length measurements.

    Validity

    • Content validity: does the instrument cover the construct fully? For example, a “physical activity” questionnaire should cover intensity, duration and frequency, not just frequency.
    • Criterion-related validity: does the measure agree with an established standard? e.g., a new fitness test compared to VO2 max measurements.
    • Face validity: does the instrument seem to measure what it should? This is weaker but useful for survey acceptance by respondents.
    • Practical checks: cross-validate findings where possible (compare with external data sources); discuss threats to validity such as social desirability bias or misreporting.

    ❤️ CAS Ideas

    Run a data literacy workshop for younger students: design a short questionnaire, collect data, show how wording affects responses and discuss ethical consent and anonymity.

    📌 5. Choosing degrees of freedom and justifying categorisation for χ² tests

    • Degrees of freedom when estimating parameters: if you estimate k parameters from data (for example, mean and variance when fitting a normal), reduce df accordingly when using the χ² goodness-of-fit test (commonly df = number of categories − 1 − number of estimated parameters).
    • Document the estimation: if you estimate parameters from the same data you test, explain method and adjust df; this affects critical values and p-values.
    • Practical justification: always provide a short paragraph explaining why categories were chosen, how many degrees of freedom were used, and why expected frequencies meet the required thresholds.

    📝 Paper tip — categorisation & χ²

    • Always show the raw observed counts, the formula or table used to compute expected counts, the χ² sum and df, then state p and the conclusion in context.
    • When you combine classes to get expected ≥ 5, write exactly which classes were combined and why — examiners look for this justification.
    • If small expected counts remain, briefly state the limitation and how it affects the reliability of the χ² result.

    📌 6. Final recommendations & best practice (explicit checklist)

    • Design checklist: define aims → choose variables → select sampling method → draft questions → pilot → finalise instrument → collect data ethically.
    • Categorisation checklist: pick meaningful boundaries → ensure expected ≥ 5 → document combining of classes → adjust df if parameters are estimated.
    • Reliability & validity checklist: test–retest or parallel forms where possible; check content coverage and compare with external measures if available.
    • Reporting: include your instrument (appendix), sampling frame, response rate, treatment of missing data, and a short paragraph on limitations and possible biases.
  • SL 4.11 — hypothesis testing, Chi Square tests and t-tests

    Term / concept: Definition / short explanation
    Null hypothesis (H0): The default claim to be tested (e.g., “no association”, “population mean = μ0”). We assume H0 unless the data give strong evidence otherwise.
    Alternative hypothesis (H1): The claim we suspect may be true instead of H0 (e.g., “μ ≠ μ0”, “association exists”). Can be one- or two-sided.
    Significance level (α): The threshold probability for rejecting H0 (common values 0.05, 0.01, 0.10). If p ≤ α, reject H0.
    p-value: Probability (under H0) of obtaining data at least as extreme as observed. Small p supports H1.
    χ² statistic: Σ (Observed − Expected)²/Expected across cells; compares observed counts to those expected under H0.
    Degrees of freedom (df): For χ² goodness-of-fit, df = k − 1 (k categories). For a contingency table, df = (rows − 1)(cols − 1).

    📌 1. Formulating hypotheses (H0 and H1)

    Follow these rules when writing H0 and H1:

    • State H0 as an equality or “no effect” claim (e.g., H0: p = 0.5, H0: no association).
    • State H1 as the alternative (e.g., H1: p ≠ 0.5, H1: association exists).
    • Decide one-tailed vs two-tailed before seeing the data (affects p-value interpretation).

    🔍 TOK Perspective

    Consider how the wording of hypotheses shapes evidence. Does rejecting H0 demonstrate the alternative is true, or only that H0 is unlikely under the data observed?

    📌 2. Significance levels and p-values (interpretation)

    • Decision rule: choose α before testing; if p ≤ α → reject H0; if p > α → fail to reject H0.
    • p-value meaning: not the probability H0 is true; rather, how surprising the data are if H0 were true.
    • Reporting: give numeric p-value and conclusion in context (e.g., “There is evidence at the 5% level that …”).

    🌍 Real-World Connection

    Medical trials report p-values when testing new treatments. Policymakers must interpret small p with caution — consider effect size and sample design, not p alone.

    📌 3. χ² goodness-of-fit test (categorical data)

    Purpose: compare observed counts to expected counts under a specified probability model.

    1. State H0 (e.g., “data follow the claimed distribution”) and H1 (“do not follow”).
    2. Compute expected counts: Expected = n × p for each category, where p is the category's probability under H0.
    3. Calculate χ² = Σ (O − E)²/E across categories.
    4. Degrees of freedom df = k − 1 (k = number of categories). For parameters estimated from data, df reduces accordingly.
    5. Find the p-value from the χ² distribution with df (use technology in the exam). Compare to α.

    Worked example — goodness-of-fit

    A six-sided die is rolled 120 times; observed counts for faces 1–6 are: 18, 20, 19, 24, 20, 19. Test H0: die is fair (p = 1/6 each) at α = 0.05.

    Expected per face E = 120 × 1/6 = 20. χ² = Σ (O − 20)²/20 = ((−2)² + 0² + (−1)² + 4² + 0² + (−1)²)/20 = (4 + 0 + 1 + 16 + 0 + 1)/20 = 22/20 = 1.1.

    df = 6 − 1 = 5. p ≈ 0.95 (use GDC). Since p > 0.05, fail to reject H0; no evidence the die is unfair.
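
    The χ² statistic for this die example can be checked in a few lines (the p-value still comes from the GDC or software):

```python
# Observed die counts and the chi-square computation from the example above.
observed = [18, 20, 19, 24, 20, 19]
n = sum(observed)              # 120 rolls
expected = [n / 6] * 6         # 20 per face under H0: the die is fair

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1         # 5
# chi2 comes out as 1.1; the p-value then comes from chi-square(df = 5)
```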

    📌 4. χ² test for independence (contingency tables)

    Purpose: test whether two categorical variables are independent.

    1. Form contingency table of observed counts Oij (rows × columns).
    2. Compute expected counts Eij = (row total × column total) / grand total.
    3. Compute χ² = Σ over all cells of (Oij − Eij)²/Eij.
    4. Degrees of freedom df = (r − 1)(c − 1). Use technology to get the p-value and conclusion.
    5. Check expected counts: best practice is expected ≥ 5; if several expected counts are below 5, interpret χ² with caution (consider Fisher’s exact test for small tables).

    📐 IA Spotlight

    • Use contingency tables when investigating relationships in survey data (e.g., gender vs. preference). Show how expected counts are computed and discuss limitations when expected counts are small.

    Worked example — independence (2×2)

    Surveyed 100 students for (A) studies online (Yes/No) and (B) prefers recorded lectures (Yes/No). Observed:

    Table: rows = Online study Yes (30), No (70); columns = Prefers recorded Yes (40), No (60).

    Expected for cell (Yes, Yes): E = (row total 30 × column total 40) / 100 = 12. Compute χ² across the 4 cells (use GDC). df = (2−1)(2−1) = 1. Compare the p-value to α.

    Yates continuity correction: sometimes applied for 2×2 tables with small counts to reduce χ² bias. In exams, technology will usually handle this; mention the continuity correction if counts are small.
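
    The expected-count and χ² calculations for a 2×2 table can be sketched as follows. Only the marginal totals (rows 30/70, columns 40/60) come from the example above; the individual observed cell counts are hypothetical values chosen to match those totals:

```python
# 2x2 table: marginals (rows 30/70, columns 40/60) are from the example;
# the individual cell counts are hypothetical, chosen to match the totals.
observed = [[18, 12],
            [22, 48]]

row_totals = [sum(row) for row in observed]         # [30, 70]
col_totals = [sum(col) for col in zip(*observed)]   # [40, 60]
grand = sum(row_totals)                             # 100

# Expected count for each cell: row total * column total / grand total.
expected = [[r * c / grand for c in col_totals] for r in row_totals]
# expected[0][0] = 30 * 40 / 100 = 12, as in the worked example.

chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))
df = (2 - 1) * (2 - 1)
```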

    📌 5. The t-test (comparing two means) — SL perspective

    SL conditions: two independent (unpaired) samples; population variances unknown and assumed equal → use pooled two-sample t-test. Technology computes t and p.

    • Hypotheses: example H0: μ1 = μ2; H1: μ1 ≠ μ2 (two-sided) or >/< for one-sided.
    • Assumptions: both populations approx normal (especially important for small samples), independent samples, equal variances (pooled t-test).
    • Test statistic (pooled): technology computes t with df = n1 + n2 − 2; report p and conclude.

    🧠 Examiner Tip

    • Always state H0 and H1 clearly (equation / inequality).
    • Show method: write which test was used (e.g., “pooled two-sample t-test”) and justify it (independence, approx normal, equal variances).
    • Include numeric result and context: show t, df (if asked), p-value and interpret in plain language (conclusion about means in context).

    📌 6. Use of technology and practical advice

    • In examinations use a GDC or software to compute χ², t and p-values — display key intermediate values (observed & expected counts, t-statistic) for clarity.
    • Always check assumptions: expected counts in χ², normality and equal variances for the t-test. If assumptions fail, mention the limitations.
    • For small expected counts in χ² (expected < 5), note that results may be unreliable and consider alternative tests (Fisher’s exact for 2×2).

    ❤️ CAS Link

    Run a small community survey (e.g., about transport choices) and use χ² tests to check associations. Present results and discuss the limitations of small expected counts.

    Worked example — two-sample t-test (illustrative)

    Sample A: n1 = 12, mean = 50, s = 5. Sample B: n2 = 14, mean = 46, s = 6. Test H0: μ1 = μ2 against H1: μ1 ≠ μ2 at α = 0.05.

    Use technology (two-sample t-test, pooled): t ≈ 1.83, df = 24, p ≈ 0.080 (two-sided) → p > 0.05 so fail to reject H0; no strong evidence the means differ (quote the exact p and interpret in context).
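
    As a check, the pooled statistic can be recomputed directly from the summary values:

```python
import math

# Summary statistics from the example above.
n1, m1, s1 = 12, 50.0, 5.0
n2, m2, s2 = 14, 46.0, 6.0

# Pooled variance, standard error and t-statistic.
sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (m1 - m2) / se
df = n1 + n2 - 2
```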

    🌐 EE Focus

    Explore statistical testing choices in an EE: comparing χ² vs Fisher for small counts, or studying the robustness of t-tests to non-normality with simulations.

    📌 Quick summary & checklist

    • Write H0 and H1 clearly, state α.
    • Choose the correct test: χ² goodness-of-fit (categorical vs model), χ² independence (contingency), t-test for two means (SL conditions).
    • Check assumptions (expected counts, normality, equal variances). Use technology for calculations and give a contextual interpretation of p.
    • When small expected counts appear, mention Yates/Fisher as appropriate and highlight limitations.

    📝 Paper tips — hypothesis tests

    • Label everything: show the O and E tables, state df, give χ², p and the conclusion in context.
    • When using technology: still present the formula or intermediate E-values to earn method marks.
    • Always interpret: end with a one-line sentence linking conclusion to the real-world context of the question.
    • At the end of the working: conclude by saying “We do/do not have enough evidence to reject the null hypothesis” (very important to remember).

    📌 SL 4.11 — Hypothesis Testing, Chi-Square Tests & t-Tests

    Multiple Choice Questions

    MCQ 1
    A hypothesis test is carried out at the 5% significance level. The p-value obtained is 0.032.
    Which conclusion is correct?

    • A. Accept H0 because the p-value is small
    • B. Reject H0 because the p-value is less than 0.05
    • C. Accept H1 because the p-value is greater than 0.05
    • D. Do not reject H0 because the result is inconclusive
    Answer & Explanation

    Correct answer: B

    In hypothesis testing, the decision rule using the p-value is:

    If p-value < α, reject H0.

    Here, the p-value is 0.032 and the significance level is α = 0.05.
    Since 0.032 < 0.05, the result is considered statistically significant.

    This means the observed data are unlikely under the assumption that H0 is true,
    so we state “we have enough evidence to reject the null hypothesis”.


    MCQ 2
    Which of the following situations is most appropriate for a chi-square test for independence?

    • A. Comparing two population means with unknown variance
    • B. Testing whether a die is fair
    • C. Testing whether two categorical variables are related
    • D. Estimating a confidence interval for a mean
    Answer & Explanation

    Correct answer: C

    A chi-square test for independence is used when:

    • Both variables are categorical
    • Data are presented in a contingency table
    • We want to see whether the variables are associated or independent

    Options A and D involve means, which require t-tests.
    Option B refers to a chi-square goodness-of-fit test, not a test for independence.


    MCQ 3
    In a one-sample t-test, which assumption is required?

    • A. The population standard deviation must be known
    • B. The population must be normally distributed
    • C. The sample size must be greater than 30
    • D. The data must be categorical
    Answer & Explanation

    Correct answer: B

    A one-sample t-test is used when the population standard deviation is unknown.

    The key assumption is that the population distribution is normal, especially for small sample sizes.
    If the sample size is large, the test is more robust, but normality is still the formal assumption in IB.


    Short Answer Questions

    Short Question 1
    Explain what is meant by a Type I error in hypothesis testing.

    Model Answer

    A Type I error occurs when the null hypothesis H0 is rejected even though it is actually true.

    In other words, it is a false positive result, where the test suggests evidence for an effect or difference
    that does not truly exist.

    The probability of making a Type I error is equal to the chosen significance level α.
    For example, if α = 0.05, there is a 5% chance of rejecting a true null hypothesis.
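
    The claim that the Type I error rate equals α can be illustrated by simulation (a sketch: testing H0: p = 0.5 for a genuinely fair coin with a normal-approximation rejection region):

```python
import random

random.seed(42)

# Test H0: p = 0.5 with n = 100 flips of a genuinely fair coin.
# Reject when heads <= 40 or heads >= 60 (two-sided 5% region from the
# normal approximation: |phat - 0.5| > 1.96 * sqrt(0.25/100), about 0.098).
n, trials = 100, 20_000
rejections = 0
for _ in range(trials):
    heads = sum(random.random() < 0.5 for _ in range(n))
    if heads <= 40 or heads >= 60:
        rejections += 1

type_i_rate = rejections / trials
# H0 is true throughout, so every rejection is a Type I error; the rate
# lands near alpha (around 0.05-0.06 with this approximate region).
```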


    Short Question 2
    State two conditions required for a chi-square test to be valid.

    Model Answer

    First, all expected frequencies in the contingency table should be sufficiently large,
    typically at least 5, to ensure the chi-square approximation is valid.

    Second, the observations must be independent, meaning that each individual or outcome
    contributes to only one cell of the table.

    If these conditions are not met, the conclusions of the test may not be reliable.


    Long Answer Questions

    Long Question 1 — One-Sample t-Test

    A manufacturer claims that the mean lifetime of a certain type of battery is 120 hours.
    A random sample of 10 batteries has a mean lifetime of 114 hours with a sample standard deviation of 8 hours.

    (a) State the null and alternative hypotheses.
    (b) Explain why a t-test is appropriate.
    (c) Determine the test statistic.
    (d) State the conclusion at the 5% significance level.

    Full Worked Solution

    (a) Hypotheses

    H0: μ = 120
    H1: μ < 120

    The alternative hypothesis reflects suspicion that the true mean lifetime is less than the advertised value.

    (b) Choice of test

    The population standard deviation is unknown and the sample size is small.
    Therefore, a one-sample t-test is appropriate.

    (c) Test statistic

    t = (x̄ − μ0) / (s / √n)

    t = (114 − 120) / (8 / √10) ≈ −2.37

    (d) Conclusion

    Using the GDC, the p-value corresponding to t ≈ −2.37 with 9 degrees of freedom is less than 0.05.

    Since p-value < 0.05, we reject H0.
    There is sufficient evidence at the 5% level to suggest that the mean battery lifetime is less than 120 hours.
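
    The test statistic in part (c) can be verified directly (the p-value is still read from the GDC):

```python
import math

# Battery example: n = 10, sample mean 114, claimed mean 120, s = 8.
n, xbar, mu0, s = 10, 114.0, 120.0, 8.0

t = (xbar - mu0) / (s / math.sqrt(n))   # about -2.37
df = n - 1                              # 9
```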


    Long Question 2 — Chi-Square Test for Independence

    A school records whether students prefer online or in-person learning, classified by gender.
    The results are shown in a contingency table.

    (a) State the null and alternative hypotheses.
    (b) Explain how expected frequencies are calculated.
    (c) Describe how the test statistic is obtained.
    (d) Interpret a decision to reject H0.

    Full Worked Solution

    (a) Hypotheses

    H0: Gender and learning preference are independent.
    H1: Gender and learning preference are not independent.

    (b) Expected frequencies

    Expected frequency = (row total × column total) / grand total.

    This represents the frequency we would expect if the variables were truly independent.

    (c) Test statistic

    The chi-square statistic is calculated using:

    χ² = Σ ( (Observed − Expected)² / Expected )

    Each cell’s contribution is summed to obtain the final test statistic.

    (d) Interpretation

    If H0 is rejected, this indicates there is a statistically significant association
    between gender and learning preference.

    This does not imply causation, only that the variables are related in the population.

  • Reactivity 1.4 – Entropy and spontaneity (HL)

    R1.4.3 – ΔG and spontaneity

    • When carrying out calculations, we know that the value of T is always positive
    • Using the Gibbs formula, we can also find the temperature at which a reaction becomes spontaneous (this crossover temperature only exists when ΔH and ΔS have the same sign)

    ΔG = ΔH – TΔS system

    We know that a negative ΔG gives a spontaneous reaction

    Therefore the reaction is spontaneous when ΔH − TΔS system < 0

    which then gives ΔH < TΔS system

    Using this we can then calculate the temperature at which a particular reaction will be spontaneous

    ΔH°, ΔS° and T determine ΔG and spontaneity as follows:
    • ΔH° positive (endothermic), ΔS° positive, T low → ΔG positive (≈ ΔH°): not spontaneous
    • ΔH° positive (endothermic), ΔS° positive, T high → ΔG negative (≈ −TΔS°): spontaneous
    • ΔH° positive (endothermic), ΔS° negative, T low → ΔG positive (≈ ΔH°): not spontaneous
    • ΔH° positive (endothermic), ΔS° negative, T high → ΔG positive (≈ −TΔS°): not spontaneous
    • ΔH° negative (exothermic), ΔS° positive, T low → ΔG negative (≈ ΔH°): spontaneous
    • ΔH° negative (exothermic), ΔS° positive, T high → ΔG negative (≈ −TΔS°): spontaneous
    • ΔH° negative (exothermic), ΔS° negative, T low → ΔG negative (≈ ΔH°): spontaneous
    • ΔH° negative (exothermic), ΔS° negative, T high → ΔG positive (≈ −TΔS°): not spontaneous
    • This table outlines the different possibilities for reactions and how we can make reasonable guesses as to whether or not they can occur spontaneously
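
    The crossover temperature implicit in the “low T / high T” cases can be computed from ΔG = ΔH − TΔS. The numbers below (roughly those for the decomposition of CaCO₃) are illustrative assumptions, not values from the notes:

```python
# Gibbs energy change: dG = dH - T*dS, crossover where dG = 0, i.e. T = dH/dS.
# Example values roughly match CaCO3 -> CaO + CO2 (an assumption for
# illustration): dH about +178 kJ/mol, dS about +161 J/(K*mol).
dH = 178_000.0    # J/mol (endothermic)
dS = 161.0        # J/(K*mol) (entropy increases)

def dG(T):
    """Gibbs energy change in J/mol at absolute temperature T in kelvin."""
    return dH - T * dS

T_crossover = dH / dS    # about 1106 K; spontaneous only above this T
```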
  • Reactivity 1.4 – Entropy and spontaneity (HL)

    1.4.2 – Gibbs energy

    • Exothermic reactions cause an increase in the entropy of the surroundings
    • Thus, the entropy change of the surroundings is proportional to −ΔH (a more exothermic reaction gives a greater increase in the entropy of the surroundings)
    • The impact of temperature on entropy is also considered, based on the dispersal of molecules/atoms
    • Therefore, we can also make the statement that the entropy change of the surroundings is inversely proportional to the absolute temperature (T)
    • The following expression represents these two statements :

    ΔS surroundings = (-ΔH)/T

    here, the entropy refers to the surroundings and the enthalpy refers to the system

    • We know that the expression given above gives us the change in entropy of the surroundings. We also know that the total entropy change will be the change in entropy of the surroundings plus the change in entropy of the system
    • This is represented by the following :

    ΔS TOTAL = ΔS surroundings + ΔS system

    This can then be re-written as the following :
    ΔS TOTAL = [(-ΔH)/T ] + ΔS system

    • To know the true feasibility of a reaction, we must find an expression that combines both the enthalpy and entropy
    • This is known as ‘Gibbs energy’ which is the measure of the quality of energy available for a reaction
    • It is given by the symbol ‘G’
    • The following expression represents how the formula for change in Gibbs energy can be derived :

    ΔS TOTAL = [(-ΔH)/T ] + ΔS system

    We can multiply both sides with T to get :

    TΔS TOTAL = [(-ΔH)] + TΔS system

    We then multiply both sides by -1 to get :

    -TΔS TOTAL = ΔH − TΔS system

    We then write the left hand side as one expression, the change in Gibbs energy ΔG = −TΔS TOTAL, which gives

    ΔG = ΔH − TΔS system

    • When the change in Gibbs energy is negative, the reaction is spontaneous
    • If we know the signs for the change in entropy and enthalpy, we can also guess whether the reaction will be spontaneous or not
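
    The derivation can be checked numerically: ΔG = ΔH − TΔS system should equal −TΔS TOTAL. The values below (typical textbook figures for the Haber process) are illustrative assumptions:

```python
# Check: dG = dH - T*dS_sys equals -T * dS_total.
# Values are typical textbook figures for N2 + 3H2 -> 2NH3 (an assumption).
dH = -92_000.0    # J/mol (exothermic)
dS_sys = -199.0   # J/(K*mol)
T = 298.0         # K

dS_surr = -dH / T                # entropy gained by the surroundings
dS_total = dS_surr + dS_sys      # positive, so the reaction is spontaneous
dG = dH - T * dS_sys             # negative, agreeing with -T * dS_total
```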

  • S2.3 The Metallic Model

    S2.3 The Metallic Model :

    S2.3.1 and S2.3.2 The Metallic Bond :

    ⭐️ Metallic bond is the electrostatic attraction between a lattice of cations and delocalised electrons

    • Metals are found on the left side of the periodic table
    • They have low ionization energies, enabling them to react with other atoms by donating their electrons and forming positive ions
    • In elemental states, electrons are held loosely and are referred to as ‘delocalised’
    • Metals contain a regular lattice arrangement of cations surrounded by a ‘sea’ of delocalised electrons
    • Physical properties
      • Lustrous – delocalised electrons in crystal structure reflect light
      • Good conductors of electricity (solid/liquid state) as electrons are free to move
      • Good conductors of heat – delocalised electrons and packed structure of ions enable efficient thermal transfer
      • Malleable and ductile – movement of delocalised electrons is non-directional so bond remains in place even under pressure
      • High melting points
    • Chemical properties
      • Form cations
      • Usually form ionic compounds
      • Oxides are basic
    • Non directional nature of metallic bonding allows for enhancement of properties through addition of different elements – results in the formation of alloys
    • Steel is an example of an alloy of iron that contains small amounts of carbon

    🔍 TOK Connect : The ability to explain natural phenomena such as metallic properties through the application of theory (eg. atomic theory) is an important feature of science.

    • Strength of a metallic bond depends on three factors :
      • Number of delocalised electrons
      • Charge on the cation
      • Radius of the cation
    • Increased delocalised electron density and smaller cations result in strong bonds (Mg has stronger metallic bonding than Na)
    • Strength of metallic bonding tends to decrease down a group as the size of cations increase
    • Strength of metallic bonding tends to increase across a period due to increased charge density and smaller atomic radii
    • Charge density = charge/volume
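    The charge-density comparison can be sketched numerically. This is a minimal illustration, modelling each cation as a sphere; the ionic radii used (Na⁺ ≈ 102 pm, Mg²⁺ ≈ 72 pm) are approximate values assumed for the example:

    ```python
    import math

    def charge_density(charge, radius_pm):
        """Charge density = charge / volume, modelling the ion as a sphere of radius r (pm)."""
        volume = (4 / 3) * math.pi * radius_pm ** 3
        return charge / volume

    na = charge_density(1, 102)  # Na+, approximate ionic radius 102 pm
    mg = charge_density(2, 72)   # Mg2+, approximate ionic radius 72 pm
    print(mg / na)  # Mg2+ has roughly 5-6 times the charge density of Na+
    ```

    The much higher charge density of Mg²⁺, together with its two delocalised electrons, is consistent with Mg having stronger metallic bonding than Na.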

    S2.3.3 Transition Metals [HL]

    ⭐️ Transition metals are elements that have incomplete d-subshells or can give rise to cations with incomplete d-subshells

    • Transition metals have high melting points
      • Close proximity in energy of the 3d and 4s subshells allows transition metals to delocalise a larger number of electrons
      • All transition metals can delocalize electrons and have small ionic radii – increases the strength of metallic bonding
      • Makes it harder to predict trends
      • Sc to V – increase in mp due to increased number of delocalised electrons
      • Beyond V – general decrease, as the large amount of energy required to create +5 and +6 ions would not be paid back by the energy released by metallic bonding
    • Transition metals have high electrical conductivity
      • Large number of delocalised electrons allows for easy flow of current
      • All metals can conduct electricity due to mobile electrons
      • First-row transition metals (except for copper) are poorer conductors of electricity than metals like aluminium
      • s and p electrons are delocalised more readily than d electrons – this is why most transition metals are not better electrical conductors than main-group metals
      • Copper is an exception with extremely high electrical conductivity – this is usually explained in terms of its delocalised 4s electron

  • Reactivity 1.4 – Entropy and spontaneity (HL)

    1.4.1 – Entropy

    📌 What is entropy?

    • Entropy is described as a measure of disorder in the universe
    • It is given by the symbol ‘S’
    • Ordered states have low entropy, while disordered states have high entropy
    • Entropy increases as energy becomes more dispersed
    • Using this idea, we know that solids are the most ‘orderly’ and have the least energy dispersal while gases are the most ‘disorderly’ and have the greatest energy dispersal
    • We can also predict that during state changes there is a change in entropy in the universe (eg. solid to liquid would increase entropy while gas to liquid would decrease it and so on)
    • The absolute entropy of a substance can be calculated under standard conditions (298 K) and these values can be found in Section 13 of the data booklet
    • The overall entropy change of a reaction can also be calculated using the formula below

    Formula for calculating entropy change :
    ΔS° = (∑S°products) – (∑S°reactants)
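    The formula can be applied with a short helper. A minimal sketch for N2(g) + 3 H2(g) → 2 NH3(g); the S° values used here (N2 ≈ 191.6, H2 ≈ 130.7, NH3 ≈ 192.8 J K⁻¹ mol⁻¹) are approximate data-booklet-style values assumed for illustration:

    ```python
    def entropy_change(products, reactants):
        """ΔS° = ΣS°(products) − ΣS°(reactants); each side is a list of (coefficient, S°) pairs."""
        total = lambda side: sum(n * s for n, s in side)
        return total(products) - total(reactants)

    # N2(g) + 3 H2(g) -> 2 NH3(g), approximate S° values in J K^-1 mol^-1
    ds = entropy_change(products=[(2, 192.8)], reactants=[(1, 191.6), (3, 130.7)])
    print(round(ds, 1))  # negative, as expected: 4 mol of gas become 2 mol
    ```

    The negative ΔS° matches the qualitative prediction: the number of moles of gas decreases, so the system becomes more ordered.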

  • Reactivity 3.1 – Proton transfer reactions

    3.1.10 & 3.1.11 – Acid & base dissociation constants (HL)

    📌 Strength of acids and bases

    • Since the strength of acids and bases is based on the extent of dissociation, there are acid and base dissociation constants that can be used to measure the relative strength of an acid or base
    • For weak acids/bases the dissociation is represented as a reversible reaction and the value of the constant will determine the relative strength.

    Generic equation for acid dissociation : HA(aq) + H2O(l) ⇌ H3O+(aq) + A−(aq)
    FORMULA FOR ACID CONSTANT : Ka = [H3O+][A−] / [HA]

    We do not include [H2O] because water is present in large excess, so its concentration is effectively constant

    • Ka is a fixed value for each specific acid at each specific temperature
    • The higher the value of Ka, the stronger the acid is
    • Using this same logic, we can derive the following formula for a generic base ionisation

    Generic equation for base ionisation : B(aq) + H2O(l) ⇌ BH+(aq) + OH−(aq)
    FORMULA FOR BASE CONSTANT : Kb = [BH+][OH−] / [B]

    We do not include [H2O] because water is present in large excess, so its concentration is effectively constant

    • The greater the value of Kb, the stronger the base is
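    Ka can be used to estimate the pH of a weak acid solution. A minimal sketch using the usual approximation that dissociation is small, so [HA] at equilibrium ≈ the initial concentration; the Ka value for ethanoic acid (≈ 1.8 × 10⁻⁵) is an illustrative assumption:

    ```python
    import math

    def weak_acid_ph(ka, conc):
        """Estimate pH of a weak acid, assuming [HA] at equilibrium ≈ initial concentration."""
        h = math.sqrt(ka * conc)  # from Ka ≈ [H3O+]^2 / c when dissociation is small
        return -math.log10(h)

    # Illustrative: 0.10 mol dm^-3 ethanoic acid, Ka ≈ 1.8e-5
    print(round(weak_acid_ph(1.8e-5, 0.10), 2))  # pH ≈ 2.87
    ```

    The approximation is reasonable here because the degree of dissociation (about 1%) is small; for stronger weak acids the full quadratic in [H3O+] should be solved instead.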

  • Reactivity 3.1 – Proton transfer reactions

    3.1.9 – The pOH scale (HL)

    📌 pOH as a measurement

    • The pOH scale is a measurement of the concentration of hydroxide ions in an aqueous solution
    • Similar to the pH scale, the pOH can be calculated using the following formula

    pOH = -log10[OH−], where [OH−] is the concentration of hydroxide ions

    • Since the two scales are inter-related, we also know that pH + pOH = 14 (at 298 K)
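    The two formulas above can be combined in a short sketch (function names are illustrative; pH + pOH = 14 assumes 298 K):

    ```python
    import math

    def poh(oh_conc):
        """pOH = −log10 of the hydroxide-ion concentration (mol dm^-3)."""
        return -math.log10(oh_conc)

    def ph_from_poh(poh_value):
        """pH + pOH = 14 at 298 K, so pH = 14 − pOH."""
        return 14 - poh_value

    p = poh(1.0e-3)           # [OH-] = 1.0e-3 mol dm^-3 gives pOH ≈ 3.0
    print(p, ph_from_poh(p))  # pH ≈ 11.0
    ```

    This mirrors how the pH scale works for [H+]: each factor of 10 in [OH−] shifts pOH by one unit in the opposite direction.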

  • Reactivity 3.1 – Proton transfer reactions

    3.1.8 – pH curves (SL)

    📌 Titrations

    ⭐️