🧠 Experimental Research Methods 

📌 Key terms

| Term | Definition |
| --- | --- |
| Experiment | A research method in which the independent variable (IV) is manipulated to observe its effect on the dependent variable (DV) while other variables are controlled. |
| Independent Variable (IV) | The variable manipulated by the researcher to test its effect. |
| Dependent Variable (DV) | The variable measured in response to changes in the IV. |
| Control Group | A group that does not receive the experimental treatment, used as a baseline for comparison. |
| Random Assignment | Allocating participants to different groups (e.g., experimental and control) by chance to minimize bias. |
| Operationalization | Defining variables in a measurable way (e.g., “memory” as “number of words recalled”). |
| Confounding Variable | Any variable other than the IV that may affect the DV, potentially compromising validity. |
| Internal Validity | The extent to which the experiment demonstrates a cause-and-effect relationship between the IV and DV. |
| External Validity | The degree to which results can be generalized beyond the experiment. |
| Reliability | The consistency of a study or measure across time and conditions. |

📌 Notes

Experimental methods are the cornerstone of quantitative psychology. They are designed to establish causation — determining whether changes in one variable directly cause changes in another.

Key Types of Experiments:

  1. Laboratory Experiments
    • Conducted in controlled environments.
    • High internal validity, low ecological validity.
    • Example: Loftus & Palmer (1974) — the effect of leading questions on memory.
  2. Field Experiments
    • Conducted in real-world settings.
    • Higher ecological validity, less control.
    • Example: Piliavin et al. (1969) — helping behaviour on the New York subway.
  3. Natural/Quasi-Experiments
    • The IV is not manipulated by the researcher but occurs naturally.
    • Example: Charlton et al. (2002) — effect of the introduction of television on aggression in children.

Core Process:

  • Hypothesis formation → Operationalization of variables → Random assignment → Manipulation of IV → Measurement of DV → Statistical analysis → Conclusion.
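The sequence above, from random assignment through measurement, can be sketched as a minimal simulation. Everything here is illustrative — the participant labels, the baseline recall of 12 words, and the assumed 3-word effect of the IV are invented, not taken from any real study:

```python
import random
import statistics

random.seed(1)  # fixed seed so this sketch is reproducible

# Hypothetical participant pool
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle by chance, then split into two equal groups
random.shuffle(participants)
experimental, control = participants[:10], participants[10:]

def measure_recall(treated):
    """Simulated DV measurement (words recalled); purely illustrative."""
    base = random.gauss(12, 2)           # baseline recall, mean 12, SD 2
    return base + (3 if treated else 0)  # assumed effect of manipulating the IV

exp_scores = [measure_recall(treated=True) for _ in experimental]
ctl_scores = [measure_recall(treated=False) for _ in control]

# Compare group means before moving on to inferential statistics
print(statistics.mean(exp_scores) - statistics.mean(ctl_scores))
```

Random assignment is what licenses the causal conclusion: because group membership is decided by chance, pre-existing differences between participants are (on average) spread across both groups rather than confounded with the IV.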

Strengths:

  • Establishes cause-and-effect relationships.
  • Replicable and objective.
  • Allows control of confounding variables.

Limitations:

  • Artificial environments can reduce ecological validity.
  • Demand characteristics and experimenter bias may influence results.
  • Ethical considerations when manipulating variables affecting participants.

    šŸ”Tok link

    • Experimental psychology raises epistemological questions aboutĀ how knowledge is created.
    • Can human behaviour truly be isolated and tested under controlled lab conditions?
    • Does quantification of behaviour reduce complex experiences to oversimplified variables?
      TOK prompt:Ā ā€œTo what extent does control in experiments enhance or limit our understanding of human experience?ā€

     šŸŒ Real-World Connection

    • Experimental methods underpin psychological fields like clinical, educational, and organizational psychology.
    • Drug trials in clinical psychology depend on random assignment and control groups.
    • Educational psychology experiments (e.g., testing learning strategies) directly influence teaching practices.

    ā¤ļø CAS Link

    • Designing and conducting mini psychological experiments (e.g., testing memory or attention under different conditions) can connect theory to CAS experiences in creativity and service, such as designing workshops on focus and learning strategies.

    🧠 IA Guidance

    IB Internal Assessments typically follow the experimental model.

    • Choose one IV and one DV based on a replicable study.
    • Control extraneous variables and ensure ethical practice.
    • Use descriptive and inferential statistics (mean, SD, t-test).
    • Present data visually and justify the statistical test.
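As an illustration of the descriptive and inferential step, the sketch below computes means, standard deviations, and a Welch's t statistic for two hypothetical sets of recall scores using only the Python standard library. All numbers are invented; in a real IA you would also report the p-value, which a statistics package (e.g. `scipy.stats.ttest_ind` in Python) gives you directly:

```python
import math
import statistics

# Hypothetical word-recall scores for two conditions (illustrative only)
experimental = [14, 16, 12, 15, 17, 13, 16, 14, 15, 18]
control      = [11, 12, 10, 13, 11, 12, 14, 10, 12, 11]

def describe(scores):
    """Descriptive statistics: mean and sample standard deviation."""
    return statistics.mean(scores), statistics.stdev(scores)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

m_exp, sd_exp = describe(experimental)
m_ctl, sd_ctl = describe(control)
t = welch_t(experimental, control)

print(f"experimental: M={m_exp:.2f}, SD={sd_exp:.2f}")
print(f"control:      M={m_ctl:.2f}, SD={sd_ctl:.2f}")
print(f"Welch's t = {t:.2f}")
```

An independent-samples t-test fits a between-groups design (different participants in each condition); a repeated-measures design would instead use a paired t-test — part of justifying the statistical test, as the bullet above requires.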

    🧠 Examiner Tips

    • Clearly identify the IV, DV, and control variables in written responses.
    • Use precise language like “manipulated” and “measured.”
    • When evaluating, always discuss validity, reliability, and ethics.
    • Don’t confuse “correlation” with “causation.”