Thinking, Fast and Slow: Human Cognitive Biases and Their Role in AI

Updated: Jul 6, 2021

One might assume that since computers operate on tenets of logic and calculation, machine learning and AI will not be affected by cognitive biases. However, humans remain an essential part of the process: they create and monitor these programs, and their inherent biases and heuristics can inadvertently seep into the algorithms.

Decision Making and You: What are Cognitive Biases?

Besides an overall limited processing capacity, human cognition shows systematic distortions in information processing, manifested as various cognitive biases. These are heuristics, or mental ‘shortcuts’, that shape decision making and reasoning and can lead to errors in interpretation and information processing. When the brain receives information, it slightly alters it while filtering and interpreting it. These shortcuts originally served efficiency: they stem from evolutionary pressures that favoured quick decisions essential for survival, even when based on limited knowledge. Although biased reasoning can produce acceptable outcomes, it can cause people to deviate from the tenets of logic, probability, and calculation, leading to suboptimal or irrational decisions and perceptions.

Several cognitive biases have been identified that significantly limit the functioning of intelligent machines when inadvertently programmed into AI.

  • Availability Heuristic is a common cognitive bias prominent in AI decisions. It refers to the tendency of human beings to judge likelihood by how easily examples come to mind, over-weighting recent or memorable information rather than all the relevant evidence.

  • Ambiguity Aversion is a tendency to prefer known risks over unknown risks. Because missing information is uncomfortable, one strives for additional information. Data mining tasks typically contain various attributes about which the data analyst may possess no or limited knowledge. Hence, the data analyst may prefer rules that do not contain ambiguous conditions and may predict that rules perceived as ambiguous have worse outcomes.

  • Negativity Bias refers to the tendency to assign a higher value to negative information or evidence than to positive or neutral information of equal intensity. Negative information thus acts as an attention magnet, appearing more important than it is. Conditions with negative valence get more attention than those with positive valence, even when the conditions are irrelevant.

  • Primacy Effect refers to the disproportionate influence of initial information on the final assessment and outcome. Once an initial assessment is made and its plausibility ascertained, subsequent evaluations reflect this initial disposition and colour the interpretation of novel information. It leads the analyst to favour rules presented first in the rule model, even if the ordering of these rules does not correspond to their relative importance or quality.

  • Disjunction Fallacy is the tendency to judge the probability of an event as higher than the probability of the union of that event with another event. This is inconsistent with the disjunction rule, which states that the probability of the union of events is at least as great as the probability of either individual event. Data mining often presents the data analyst with rules containing attributes at multiple levels of granularity. The analyst then tends to prefer rules with more specific attributes, even when they have less evidence backing them and hence lower statistical validity.

  • Information Bias refers to the tendency to seek more information, whether relevant or irrelevant, to improve perceived validity. The information bias causes analysts to set up learning algorithms with a larger rule list and longer rules with attributes that might contain little informational value.
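The disjunction rule that the fallacy above violates can be checked on data directly. Here is a minimal sketch in Python; the records and attribute names are invented for illustration, not drawn from any real data set:

```python
# Toy illustration of the disjunction rule: for any events A and B,
# P(A or B) >= P(A). The disjunction fallacy is judging the more
# specific event as the more probable one.
records = [
    {"age": 25, "income": "low"},
    {"age": 40, "income": "high"},
    {"age": 31, "income": "low"},
    {"age": 55, "income": "high"},
    {"age": 62, "income": "low"},
]

def prob(predicate):
    """Empirical probability of a predicate over the records."""
    return sum(1 for r in records if predicate(r)) / len(records)

p_a = prob(lambda r: r["age"] < 35)                                # one event
p_a_or_b = prob(lambda r: r["age"] < 35 or r["income"] == "high")  # the union

# The union can never be less probable than either event alone.
assert p_a_or_b >= p_a
```

An analyst exhibiting the fallacy would rate the narrower rule (`age < 35` alone) as more probable than the union, which no data set can support.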

How do Cognitive Biases affect AI?

Human cognitive biases can affect artificial intelligence and machine learning through faulty training data, algorithms, and interaction. Machine learning algorithms are developed on data and inherently trained to make assumptions, so their predictions depend entirely on the training data supplied for scoring or prediction. Biases in machine learning often stem from biased human decisions, or from historical and social inequities introduced by the individuals who design and train these systems.

Individuals may create algorithms that encode unintended cognitive biases or prejudices. Another source of machine learning bias is incomplete, faulty, or prejudicial data sets used to train or validate machine learning systems.

These biases surface in the areas of machine learning most vulnerable to them: the collection and structuring of data, data set size, the degree of objectivity, the inclusion or omission of certain indicators, and the weight assigned to various data points. Cognitive biases inadvertently lead humans to weight particular variables and indicators unevenly. Machine learning bias can therefore develop from a system's inception, through data input, supervised training, interventions, and manual adjustments.

We have already started seeing instances of these biases at play, with harmful results. Amazon stopped using a hiring algorithm in 2018 after it was found to favour male applicants over female ones. The algorithm downgraded graduates of two all-women's colleges and penalized words such as “women's” in resumes. Furthermore, it rewarded applicants who had used the terms ‘captured’ or ‘executed’, terms which male engineers are statistically more likely to include. As a result, the algorithm recommended even unqualified candidates for hiring positions.
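The failure mode above can be sketched with a toy frequency-based scorer. This is not Amazon's actual system; the resume keywords and hiring labels are hypothetical, chosen only to show how a model learns whatever skew its training data contains:

```python
# Minimal sketch: a naive scorer "trained" on biased historical
# hiring records reproduces the bias in its scores.
from collections import Counter

# Hypothetical history: (resume keywords, hired?)
history = [
    (["executed", "captured"], True),
    (["executed", "led"], True),
    (["captured", "built"], True),
    (["women's", "led"], False),
    (["women's", "built"], False),
]

# "Training": count how often each keyword co-occurs with a hire.
hired, seen = Counter(), Counter()
for words, label in history:
    for w in words:
        seen[w] += 1
        hired[w] += label  # bool counts as 0 or 1

def score(words):
    """Average per-keyword hire rate learned from the biased history."""
    return sum(hired[w] / seen[w] for w in words) / len(words)

# The scorer simply echoes the skew in its training data.
assert score(["executed"]) > score(["women's"])
```

Nothing in the code mentions gender, yet the learned keyword weights penalize “women's” because the historical labels did; this is how bias enters through data rather than through explicit rules.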


In recent years, artificial intelligence has become increasingly pervasive, powering search engines, equipment failure prediction, credit scoring, self-driving cars, and business and medical applications, among other uses. Given this growing ubiquity, it is critical to prevent the introduction of human cognitive biases into AI development.

Even though awareness and explicit guidance can reduce fallacy rates, they cannot be the sole countermeasures. Although multidisciplinary efforts to counter biases have taken great strides in recent years, bias research requires more investment and interdisciplinary engagement. A mix of debiasing approaches and increased multidisciplinary bias research would help ensure effective evaluation and progress as the field matures.


Unwired India is a neurotech-startup that aims at integrating state-of-the-art research and developments in STEM, for catalyzing the transition of Neuroscience to Neurotechnology. We develop avant-garde non-invasive neurostimulation products used to solve some of the world’s most critical global issues and challenges. Our mission is to take cutting-edge brain research directly into the lives and homes of people, thereby fostering a unique culture of sustainable neuroscience and scientific literacy in India.

  • Founded in 2020, we are pioneers of nootropics and non-invasive neurotechnology devices in the country, offering high-quality brain nutrition products for daily cognitive support from our base in New Delhi, India.

  • We develop non-GMO, all-natural nootropic (smart-drug) formulations, Himalayan herb blends, and specialized amino-nutraceutical interventions and supplements for enhanced brain function, cognition, neuroinflammation, and neurodegeneration.


A friendly reminder: We've done our research, but you should too! Check our sources against your own and always exercise sound judgment. 

S. Pagliaro, Cognitive Biases and the Interpretability of Inductively Learnt Rules.

T. Kliegr, Š. Bahník, J. Fürnkranz, A Review of Possible Effects of Cognitive Biases on Interpretation of Rule-based Machine Learning Models.


About The Author

Ragini Narang,

Guest Author (New Delhi, India)

Ragini Narang has her academic interests lying in the intersection of psychology and neuroscience. A passionate advocate of mental health and its awareness, she also finds herself intrigued by its sociopolitical dimensions. She aims to integrate psychology within the realms of society at large through her inclination towards public policy and has worked with numerous NGOs for the causes of social welfare and upliftment. She considers herself an 'avid observer, looking for the unconventional in the mundane'.
