Bias

Bias. Bias is when research results are determined by what the experimenter does rather than by the evidence. Science asks a question and gathers evidence to select the answer that is most likely true. When the answer is not chosen by us, the result is the unexpected knowledge that created the modern age. In contrast, when we pick the answer, wittingly or unwittingly, that is bias. Bias stops us learning new things as a cage stops a bird from flying. Bias isn’t just an error in science; it is the opposite of science. Biased science isn’t science at all.

Bias and the brain. The human brain can convince itself of anything, e.g. in 2017, over 500 people attended the Flat Earth International Conference to deny that the earth is round. This is not being stupid but choosing what one wants to believe, which everyone has the right to do. Flat Earthers accept that Mars is a round planet but not the earth, so the answer to Elon Musk’s tweet “Why is there no Flat Mars Society?” is because we live on earth. The brain hasn’t changed much in the last ten thousand years, and for most of that time society wasn’t scientific, so large brains didn’t evolve to think scientifically. The intellect began as the servant, not the master, and remained so until we stopped using it to reason backwards from conclusions and started reasoning forwards from evidence. Society became scientific when it gave its people permission to think freely, see Thinking, Fast and Slow.

Research bias. Research bias, like noise in an incoming signal, makes it hard to recognize what is actually there. It can be found at any step of the research journey:

  • Theory bias. Theory directs research to a desired conclusion, e.g. flat earth theory.
  • Method bias. Method cherry-picks data to get a desired result, e.g. racial profiling.
  • Subject bias. Subjects act to confirm what is expected, e.g. the Hawthorne effect.
  • Analysis bias. Analysis fishes for patterns in the results to fit a conclusion, e.g. alchemy.

To submit to bias is to let the research voyage of discovery be hijacked for other purposes.

Theory bias. Theory bias is framing the research question to reflect a bias, as in the question “Why do you hate your father?” This is why science doesn’t try to “prove” a theory but to falsify it. To be unbiased, one must not own the theory one is testing. Ownership bias occurs when people attach extra value to things simply because they own them, e.g. a person who would only pay say $2 for a mug might ask $4 for it if they owned it and someone else wanted it. It is like the sunk cost bias, where people hang onto a failing stock because they are invested in it. In science, theories are not “can’t lose” possessions but stand or fall by the evidence found.

Method bias. Method bias is when bias controls the research method rather than the theory, e.g. in racial profiling, police who only stop black drivers to check for misdemeanors confirm their original bias by finding faults. The law requires random breath testing to avoid this. Confirmation bias is when we choose information that confirms a point of view and ignore that which contradicts it. It gives the social media echo chamber effect, where people accept and pass on false rumors that confirm their prejudices. In science, one cannot cherry-pick evidence to suit a bias but must gather a valid sample before deciding.
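
To make the sampling point concrete, here is a minimal sketch with made-up numbers (the two-group population, offence rate and sample sizes are invented for illustration, not taken from any real data). Both groups offend at the same true rate, yet a method that only ever stops one group can never show that, while a random sample can:

    # Toy simulation (made-up numbers) of cherry-picked vs random traffic stops.
    import random

    random.seed(1)
    TRUE_RATE = 0.05                        # same offence rate in both groups
    drivers = [(random.choice("AB"), random.random() < TRUE_RATE)
               for _ in range(100_000)]     # (group, offending?) pairs

    biased_stops = [d for d in drivers if d[0] == "A"][:1000]   # stop group A only
    random_stops = random.sample(drivers, 1000)                 # stop at random

    def offence_rate(stops, group):
        hits = [off for grp, off in stops if grp == group]
        return sum(hits) / len(hits) if hits else float("nan")

    for name, stops in [("Biased method", biased_stops), ("Random method", random_stops)]:
        print(name, "A:", round(offence_rate(stops, "A"), 3),
              "B:", round(offence_rate(stops, "B"), 3))

The biased method still finds offenders, so it feels as if the prior belief is being confirmed, but it produces no data at all on the other group; only the random sample can show that both groups offend at about the same rate.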

Subject bias. Subject bias occurs when subjects are influenced by:

  • Social effects, e.g. if male researchers get different results than female researchers.
  • Psychological effects, e.g. if pleasant researchers get different results than unpleasant ones.
  • Situational effects, e.g. when “good subjects” try to please the researcher.
  • Expectancy effects, e.g. when expecting an effect produces it.

Science reduces subject bias by:

  • Double-blind research designs that keep subjects and researchers “blind” to what is expected (see the sketch after this list).
  • Standardizing the researcher-subject interaction.
  • Encouraging subject honesty, e.g. by asking them to be honest.
  • Making the research non-threatening, e.g. by letting subjects be anonymous.
  • Having more than one researcher.
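
As a rough sketch of the double-blind idea in the first item (the subject IDs, condition names and code labels below are invented for illustration, not taken from any standard protocol or package): the conditions are hidden behind opaque codes, and the code-to-condition key is kept sealed until after the data are collected, so neither the subjects nor the researcher running the sessions knows who is in which condition:

    # Toy double-blind allocation (hypothetical study, invented labels).
    import random

    random.seed(42)
    subjects = [f"S{i:02d}" for i in range(1, 21)]   # 20 anonymous subject IDs
    codes = ["X", "Y"]                               # opaque labels everyone sees
    conditions = ["treatment", "placebo"]            # real meaning, kept hidden

    # A third party builds the sealed key; the experimenter does not see it
    # until the data are collected and the analysis is fixed.
    random.shuffle(conditions)
    key = dict(zip(codes, conditions))

    # Balanced random assignment of subjects to the opaque codes only.
    random.shuffle(subjects)
    allocation = {s: codes[i % 2] for i, s in enumerate(subjects)}

    # ... run the study, recording results against codes, not conditions ...

    for code in codes:                               # unblinding happens last
        print(code, "->", key[code])

Because the codes are meaningless during the study, neither subjects trying to please the researcher nor researcher expectations can steer the results toward what is expected.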

Analysis bias. Analysis bias occurs when people see patterns in information that are out of touch with reality, e.g. paranoia. As science often advances when people see patterns no one else sees, geniuses are prone to this bias, e.g. Newton, who founded modern science, spent the last years of his life analyzing Bible patterns to decode divine prophecies, including the end of days. Other examples include John Nash, Kurt Gödel and Alexander Grothendieck, one of the greatest mathematicians of the 20th century. Given big data, research must avoid “fishing” for results, so researchers make the required significance level stricter in proportion to the number of tests done to avoid this bias.
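
As a back-of-the-envelope illustration of why the threshold must tighten (this is the standard Bonferroni-style argument, not a calculation from the text): with 20 independent tests at the usual 0.05 level, a spurious “pattern” is more likely than not to appear by chance alone, so the per-test level is divided by the number of tests:

    # Why fishing through many tests demands a stricter significance level.
    alpha = 0.05     # conventional per-test false-positive rate
    n_tests = 20     # number of patterns "fished" for in the data

    # Chance of at least one spurious "discovery" when no real effect exists:
    p_any_false = 1 - (1 - alpha) ** n_tests
    print(f"P(at least one false positive) = {p_any_false:.2f}")    # about 0.64

    # Bonferroni correction: divide the threshold by the number of tests, so the
    # family-wise false-positive rate stays near the original alpha.
    print(f"Corrected per-test threshold = {alpha / n_tests:.4f}")  # 0.0025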

Replication. Replication means that independent others can repeat the research. One might think that with all these defenses science would be immune to bias, but it is not, as researchers can “tinker” with results or even fabricate them entirely. The last line of defense is replication, e.g. in 1974, a researcher apparently transplanted skin from black mice to white mice, a “breakthrough” that others could not replicate. A later investigation revealed that under pressure to get results, he had painted the white mice with a permanent marker pen, see The Patchwork Mouse. Most researchers are honest, but even so only about a third of studies are replicable, for various reasons including incomplete reporting of the method. Replicating research is important, so give a full description of the method used, including copies of any tools like questionnaires, task scripts, or other instructions. Describe the research method well enough for other researchers to repeat it.
