II. Definitions
- Bias
- Error introduced systematically by faults in experimental methodology
- Experimental Error
- Error due to bias (systematic, defined above)
- Error due to chance (random)
- Alpha Error (False Positive)
- Study detects a difference that is in fact due to chance (no true difference exists)
- P Value represents the probability of obtaining results at least this extreme by chance alone (p < 0.05 is the conventional threshold)
- Beta Error (False Negative)
- There was a true difference between experimental and control groups
- However, the study detects no difference between experimental and control groups
- Typically results from too small a sample size
- Statistical Power (1 - beta) measures a study's ability to detect a Statistically Significant difference when one truly exists (see the simulation sketch after this list)
- Validity
- Internal Validity
- Reliable experimental methodology reduces error due to chance and bias
- External Validity
- Experimental results are generalizable to real-world scenarios
- Results translate into Clinically Significant improvements in outcomes
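A minimal Monte Carlo sketch of alpha error, beta error, and power, assuming Python with SciPy installed; the group sizes, effect size (0.5 SD), and trial count are illustrative choices, not values from the course. With no true difference, the rejection rate approximates the alpha error rate; with a true difference, it approximates power (1 - beta):

```python
import random
from scipy.stats import ttest_ind

def rejection_rate(n, true_diff, trials=2000, alpha=0.05):
    """Fraction of simulated trials whose t-test yields p < alpha."""
    rejections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(true_diff, 1.0) for _ in range(n)]
        _, p = ttest_ind(control, treated)
        if p < alpha:
            rejections += 1
    return rejections / trials

# No true difference: every rejection is an alpha (false positive) error,
# so the rate lands near 0.05 by construction.
print("alpha error rate:", rejection_rate(n=30, true_diff=0.0))

# True difference of 0.5 SD: the rejection rate estimates power (1 - beta).
# The smaller sample misses the real effect more often (beta error).
print("power at n=20: ", rejection_rate(n=20, true_diff=0.5))
print("power at n=100:", rejection_rate(n=100, true_diff=0.5))
```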
III. Types: Bias
- Selection Bias
- http://en.wikipedia.org/wiki/Selection_bias
- Patient Selection Bias
- Subjects chosen for the study are not randomly selected, or are otherwise selected in a way that misrepresents the target population
- Measurement Bias
- https://learner.org/courses/learningmath/data/session1/part_c/index.html
- Inconsistent or incorrect measurement
- Assessment Bias (ascertainment bias, diagnostic bias, detection bias)
- Subjects or assessors are influenced by their opinions about the system under study (often due to inadequate blinding)
- Allocation Bias
- Intervention and control groups differ in ways beyond the intervention itself
- Typically results from inadequate randomization (see the sketch after this list)
- Hawthorne Effect
- People perform better when they know they are being observed
- Checklist Effect
- Checklists aid decision making and may skew results
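A hypothetical simulation of allocation bias, assuming plain Python; the severity model and sample size are invented for illustration. When sicker patients are steered toward the control arm instead of being randomized, an intervention with no true effect appears beneficial:

```python
import random

def estimated_effect(n=10000, biased=False):
    """Mean outcome difference (treated - control) for a drug with no effect."""
    treated_outcomes, control_outcomes = [], []
    for _ in range(n):
        severity = random.gauss(0.0, 1.0)  # baseline illness severity
        if biased:
            # Allocation bias: sicker patients are steered to the control
            # arm instead of being randomized.
            treated = severity < 0.0
        else:
            treated = random.random() < 0.5  # proper randomization
        # Outcome worsens with severity; the intervention adds nothing.
        outcome = -severity + random.gauss(0.0, 1.0)
        (treated_outcomes if treated else control_outcomes).append(outcome)
    return (sum(treated_outcomes) / len(treated_outcomes)
            - sum(control_outcomes) / len(control_outcomes))

print("randomized estimate:", estimated_effect(biased=False))  # near 0.0
print("biased estimate:    ", estimated_effect(biased=True))   # spuriously positive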
IV. References
- Hersh (2014) Evaluation of Clinical Information Systems, AMIA’s CIBRC Online Course