Bias refers to the “deviation of results or inferences from the truth, or processes leading to such deviations.” Just as a thorough differential diagnosis promotes accurate, effective disease identification and management, understanding potential biases in research helps clinicians assess whether findings are relevant and accurate for their patients and practice settings. For example, findings on congregate-living illnesses in college students might be applied to nursing home residents. But without considering differences in the populations’ characteristics (underlying health problems or lifestyle activities), data collection techniques (survey forms vs phone apps), or survival characteristics in the study population, the findings could be incorrectly or harmfully applied to non-comparable groups.
Of note, this lexicon entry does not address cognitive or unconscious bias; these topics will be covered in separate entries.
- Literature Review Bias: errors made when reviewing the existing information on the study question. “Foreign-language exclusion bias” arises from omitting studies published in other languages, and “one-sided reference bias” means selecting only references that support the hypothesis.
- Study Design Bias: errors arising from inappropriate design of the study. “Selection biases” (errors in the strategies used to identify participants) may exclude important populations, and “centripetal bias” may distort and/or limit the generalizability of the findings as patients with certain conditions gravitate toward certain specialists or treatment centers. In “sample size bias,” a small sample size fails to detect a difference in outcomes, or a large sample size finds detectable but clinically insignificant differences.
- Study Execution Bias: errors occurring due to how the study or experiment is performed. For example, “compliance bias” results when subjects fail to follow each step of the protocol.
- Data Collection Bias: shortcomings in measurement techniques that create inaccuracies in the information collected from subjects. “Instrument biases” like “forced choice bias” arise from limited questionnaire response options, which can drive selection of inaccurate answers. “Subject biases” include “attention bias” (also known as the Hawthorne Effect) in which subjects change their behavior in response to observation.
- Analysis Bias: errors arising from data analysis approaches. “Confounding biases” result when extraneous factors amplify or reduce the measured effects of the factor under investigation. “Analysis strategy biases” introduce errors by mishandling outlying or missing data (inappropriately ignoring or including these data points) or inconsistently selecting units of measure (number vs percent of events).
- Interpretation Bias: inference and speculation that lead to conclusions not supported by the data. For example, “correlation bias” mistakes correlation for causation, and “generalization bias” applies the results to populations not included in the study.
- Publication Bias: editorial trends toward publication of preferred findings. These biases include “positive results bias,” in which studies with positive results are more likely to be published; “hot topics bias,” which prioritizes publication of high-visibility or trending topics; and “favored author bias,” in which researchers cite themselves or other authors with a similar viewpoint, thereby skewing the published evidence.
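The analysis-stage error of mishandling missing data can be made concrete with a short sketch. The Python snippet below is a hypothetical illustration with invented values: if sicker patients are the ones lost to follow-up, a "complete-case" analysis that simply drops their missing outcomes will understate the true average outcome.

```python
import statistics

# Hypothetical example: outcomes recorded in a study, where None marks
# patients lost to follow-up. Assume (for illustration) that the sickest
# patients dropped out, so their unmeasured outcomes would have been high.
observed = [2.1, 1.8, 2.4, None, 2.0, None, 1.9]  # recorded outcomes
unmeasured = [5.2, 4.8]                           # what was never measured

# Complete-case analysis: silently drop the missing values.
complete_case = statistics.mean(v for v in observed if v is not None)

# The mean if those outcomes had actually been captured.
full_truth = statistics.mean(
    [v for v in observed if v is not None] + unmeasured
)

print(f"complete-case mean: {complete_case:.2f}")  # 2.04
print(f"true mean:          {full_truth:.2f}")     # 2.89
```

Because the data are missing for a reason related to the outcome (not at random), ignoring them biases the estimate downward; the same mechanism, in reverse, can inflate an estimate.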
Further Reading
- Choi CK & Pak AWP (2014). Bias, overview. In Wiley StatsRef: Statistics Reference Online. Wiley Online Library. https://onlinelibrary.wiley.com/doi/book/10.1002/9781118445112. Accessed 9.6.25.
- Popovic A & Huecker MR (2023). Study bias. In StatPearls [Internet]. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/sites/books/NBK574513/. Accessed 9.6.25.