4. Experiment: Definition
• A strategy that investigates cause-and-effect relationships.
• Tries to prove or disprove a cause-and-effect hypothesis:
– "A causes B", "A increases B's occurrence", "A eliminates B".
• Should include many instances, as opposed to a case study.
• Should include only a few study parameters, as opposed to a case study.
6. Experiment: Characteristics
• Precise observations and measurements.
• Pre-test and post-test observations.
• Proving or disproving a hypothesis.
• Identification of causal factors, i.e. one-directional links.
• Explanation and prediction.
• Repetition.
[Diagram: Cause (independent variable) → Effect (dependent variable)]
7. Conducting an experiment
• Find the hypothesis to be tested.
– Must be testable and disprovable.
• Find the dependent and independent variable(s).
– Dependent variable is altered by altering the independent variable.
– A (independent) causes B (dependent).
• Find the control mechanisms.
– Mechanisms that help you control all contaminating variables.
• Observe and measure.
– Often quantitative data is collected through structured observations.
– Remember before and after observations (or control groups).
• Be careful about internal and external validity.
• Document everything so that others can repeat the experiment.
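The steps above can be sketched in a few lines. A minimal sketch in Python, assuming a hypothetical treatment (a new tool that should reduce task time) and simulated measurements; all subjects, numbers, and effect sizes are made up for illustration:

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

# Hypothesis (illustrative): "the new tool (A) reduces task time (B)".
# Independent variable: tool used; dependent variable: task time.
subjects = list(range(20))
random.shuffle(subjects)                        # random assignment
treatment, control = subjects[:10], subjects[10:]

def observe(treated):
    """Stand-in for real pre-test/post-test measurements (simulated)."""
    pre = 100 + random.gauss(0, 5)              # pre-test task time (s)
    gain = random.gauss(15 if treated else 2, 3)
    return pre, pre - gain                      # (pre-test, post-test)

change_t = [pre - post for pre, post in (observe(True) for _ in treatment)]
change_c = [pre - post for pre, post in (observe(False) for _ in control)]

print(f"mean improvement, treatment group: {statistics.mean(change_t):.1f} s")
print(f"mean improvement, control group:   {statistics.mean(change_c):.1f} s")
```

Note how the sketch follows the checklist: a testable hypothesis, explicit variables, random assignment as the control mechanism, and before/after observations.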
8. Controlling unwanted factors
• Eliminate the factor from your experiment.
– E.g. via exclusion criteria. "Exclude students with programming skills".
• Hold the factor constant if you cannot eliminate it.
– E.g. via inclusion criteria. "Include only seniors 60-65 years old".
• Use large random selection.
– E.g. in opinion surveys. Let the statistical distribution take care of it.
• Use control groups.
– Similar groups whose only difference is the change in the independent variable.
• Blind experiments.
– Controls for researcher and subject bias.
[Diagram: Cause → Effect, with an unwanted factor influencing the relationship]
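To see why large random selection works, the following Python sketch simulates a population with a made-up unwanted factor (years of programming experience) and shows both the exclusion-criteria strategy and how random sampling balances the factor across two groups:

```python
import random
import statistics

random.seed(7)  # fixed seed so the sketch is repeatable

# Simulated population with an unwanted factor: years of programming
# experience (values are made up for illustration).
population = [random.uniform(0, 10) for _ in range(1000)]

# Elimination via exclusion criteria: drop experienced programmers.
novices = [x for x in population if x < 1]

# Large random selection + random assignment: the statistical
# distribution of the factor evens out between the groups.
sample = random.sample(population, 200)
group_a, group_b = sample[:100], sample[100:]

print(f"mean experience: A={statistics.mean(group_a):.2f}, "
      f"B={statistics.mean(group_b):.2f}")
```

With 100 subjects per group, the two group means of the unwanted factor end up close to each other, so the factor no longer systematically favors either group.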
9. Validity threats
• Internal validity: Show that results are attributable only to changes in the independent variable. Threats:
– Differences between experimental and control group.
– History, i.e. "what has happened in between".
– Maturation, due to age, practice, boredom etc.
– Instrumentation, i.e. faulty measurement equipment.
– Experimental mortality, i.e. changes in observed groups' composition.
– Reactivity and experimenter effects, e.g. "behaving correctly".
• External validity: Show that your results are generalizable. Threats:
– Using only special types of participants, e.g. students.
– Using samples that are not representative of the population.
– Too few participants.
– Non-representative test cases.
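The "too few participants" threat can be checked up front with a power calculation. A sketch using the standard normal-approximation formula for comparing two group means (a textbook formula, not specific to these slides; the defaults 0.05 and 0.80 are the conventional significance level and power):

```python
import math
from statistics import NormalDist

def sample_size_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a mean
    difference delta when measurements have standard deviation sigma
    (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_beta = NormalDist().inv_cdf(power)            # power quantile
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# E.g. to detect a 5-unit difference when sigma is 10:
print(sample_size_per_group(sigma=10, delta=5))
```

Smaller effects require sharply larger samples (the required n grows with the square of sigma/delta), which is one reason underpowered experiments are so common.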
10. Types of experiment
• "Pure" lab experiment
– High control over parameters.
– Unrealistic settings.
• Quasi-experiments, or field experiments
– Realistic settings.
– Contaminating factors flow freely, making it difficult to draw conclusions.
• Uncontrolled trial
– A "fake" experiment: no control group.
– Forget it if you don't have pre-test measurements.
– Better to use case study instead!
11. Advantages and disadvantages
• Advantages:
– Well-established method.
– The only way to show cause-effect relationships.
– Avoids the costs associated with field work.
• Disadvantages:
– May create artificial situations that don't exist in the real IT world.
– Often impossible to control all the parameters.
– Difficult to recruit representative samples.
– Bias can invalidate results.
– Achieving randomization and statistical validity is costly.