II. Definitions

  1. Bias
    1. Error introduced systematically by faults in experimental methodology
  2. Experimental Error
    1. Error due to bias (systematic; see Types of Bias below)
    2. Error due to chance (random)
      1. Alpha Error (False Positive)
        1. Study detects a difference, but the observed difference is actually due to chance
        2. P Value represents the probability of observing a difference at least this large when no true difference exists (p < 0.05 is the conventional threshold)
      2. Beta Error (False Negative)
        1. There is a true difference between the experimental and control groups
        2. However, the study detects no difference between the groups
        3. Typically results from too small a sample size
        4. Statistical Power (1 - Beta) measures a study's ability to detect a Statistically Significant difference (see the simulation sketch at the end of this section)
  3. Validity
    1. Internal Validity
      1. Sound experimental methodology minimizes error due to chance and bias, so observed differences reflect the intervention itself
    2. External Validity
      1. Experimental results are generalizable to real-world scenarios
      2. Results translate into Clinically Significant improved outcomes
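
The interplay of Alpha Error, Beta Error, sample size, and Statistical Power (item 2 above) can be made concrete with a short simulation. The sketch below is not from the source; the normal outcome model, the 0.5 SD effect size, and the sample sizes are illustrative assumptions. It runs many simulated studies in which a real difference exists and reports how often a two-sample t-test detects it.

```python
# Minimal simulation sketch of Alpha/Beta Error and Statistical Power.
# Assumptions (illustrative, not from the source): normally distributed
# outcomes, a true effect of 0.5 SD, and the conventional alpha of 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ALPHA = 0.05        # false-positive threshold (the "p < 0.05" goal above)
TRUE_EFFECT = 0.5   # real difference between experimental and control means
N_TRIALS = 2000     # number of simulated studies per sample size

def simulated_power(n_per_group: int) -> float:
    """Fraction of simulated studies that detect the true effect (Power = 1 - Beta)."""
    detections = 0
    for _ in range(N_TRIALS):
        control = rng.normal(0.0, 1.0, n_per_group)
        experimental = rng.normal(TRUE_EFFECT, 1.0, n_per_group)
        p_value = stats.ttest_ind(control, experimental).pvalue
        if p_value < ALPHA:
            detections += 1
    return detections / N_TRIALS

# Small samples miss the real difference more often (Beta Error, item 2.2 above)
for n in (10, 30, 100):
    print(f"n = {n:3d} per group -> estimated power = {simulated_power(n):.2f}")
```

Under these assumptions the estimated power rises from roughly 0.2 at n = 10 per group to above 0.9 at n = 100, which is why undersized studies are prone to Beta Error.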

III. Types: Bias

  1. Selection Bias
    1. http://en.wikipedia.org/wiki/Selection_bias
    2. Patient Selection Bias
    3. Subjects chosen for the study are not randomly selected, or are otherwise improperly selected (see the simulation sketch after this list)
  2. Measurement Bias
    1. https://learner.org/courses/learningmath/data/session1/part_c/index.html
    2. Inconsistent or incorrect measurement
  3. Assessment Bias (ascertainment bias, diagnostic bias, detection bias)
    1. Subjects' (or assessors') opinions about the system influence the measured outcomes (often due to inadequate blinding)
  4. Allocation Bias
    1. Intervention and control groups differ in ways beyond the intervention itself
    2. Usually results from inadequate randomization
  5. Hawthorne Effect
    1. People perform better when they know they are being observed
  6. Checklist Effect
    1. Checklists aid decision making and may skew results
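
Selection Bias (item 1 above) can also be made concrete with a simulation. The sketch below is not from the source; the severity model, enrollment rule, and sample sizes are illustrative assumptions. It compares a randomly drawn sample against one in which sicker patients are more likely to enroll (e.g., recruitment limited to a referral clinic): the non-random sample systematically overestimates the population outcome.

```python
# Minimal sketch of Selection Bias (illustrative numbers, not from the source):
# when enrollment correlates with the outcome, the study estimate drifts
# systematically away from the population truth.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Population: outcome rises with disease severity
severity = rng.normal(0.0, 1.0, N)
outcome = 10.0 + 2.0 * severity + rng.normal(0.0, 1.0, N)

# Correct design: subjects drawn at random from the population
random_sample = rng.choice(outcome, size=500, replace=False)

# Biased design: sicker patients are more likely to enroll
# (e.g., recruiting only at a referral clinic)
enroll_prob = 1.0 / (1.0 + np.exp(-2.0 * severity))  # higher severity -> more likely
enrolled = rng.random(N) < enroll_prob
biased_sample = rng.choice(outcome[enrolled], size=500, replace=False)

print(f"population mean outcome: {outcome.mean():.2f}")        # the truth
print(f"random-sample estimate:  {random_sample.mean():.2f}")  # close to truth
print(f"biased-sample estimate:  {biased_sample.mean():.2f}")  # systematically high
```

Randomization of subject selection is exactly what removes the gap between the biased and unbiased estimates here; the same logic applied to group assignment addresses Allocation Bias.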
