The AI Mind Trap: Bias in AI-Driven Competitive and Market Intelligence

By Thorsten Bill

Bias in competitive and market analysis refers to the tendency to skew information or perceptions based on preconceived notions, leading to potentially inaccurate conclusions. This can include confirmation bias (favoring information that confirms existing beliefs), availability bias (relying on immediate examples), and anchoring bias (relying heavily on the first piece of information encountered). It’s crucial to be aware of these biases to ensure accurate and objective analysis.

Bias in AI Models

While some authors view AI as a solution to counter bias in CI/MI analysis [7], the reality is quite the opposite. One of the significant challenges in developing and using Large Language Models like ChatGPT is dealing with inherent bias [1].

Training Data as a Source of Bias
Any bias present in the training data will inevitably become part of the resulting AI model. Therefore, expect elements like fake news, clickbait, sponsored content, propaganda, business narratives, and even satire to be an integral part of your results.

Temporal, Groupthink, Attention, and Availability Bias
The model may reflect viewpoints and groupthink prevalent at the time of its training and data collection. Additionally, easily accessible data, or data perceived as more authoritative, may be overrepresented in the results.

Confirmation and Anchoring Bias
The model may generate content that aligns with pre-existing beliefs, assumptions, or stereotypes, not only those present in the training data but also those implicit in the user’s expectations, which are inherently expressed in the prompts used. This can lead to inconsistent results. Don’t expect critical answers if you don’t question whether the opposite might also be true. And remember to ask about the opposite in a separate session, as the models tend to anchor their answers on the initial results.
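
To make the separate-session advice concrete, here is a minimal sketch that asks a question and its deliberate inversion in two independent chat sessions, so neither answer can anchor the other. It assumes the OpenAI Python SDK and uses a placeholder model name and hypothetical prompts; adapt both to your own setup.

```python
# Illustrative sketch only: probe a claim and its opposite in two
# independent sessions so the second answer cannot anchor on the first.
# Assumes the OpenAI Python SDK (openai>=1.0); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_fresh(question: str) -> str:
    """Send a single question in its own session (no shared history)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use the model available to you
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content


# The original hypothesis and its deliberate inversion (hypothetical examples).
claim = "Competitor X is gaining market share in Europe. What evidence supports this?"
counter = "Competitor X is losing market share in Europe. What evidence supports this?"

# Each call starts from an empty message history, so neither answer
# is anchored on the other; compare the two outputs side by side.
print(ask_fresh(claim))
print(ask_fresh(counter))
```

Comparing the two outputs side by side makes it easier to spot where the model is merely echoing the framing of the question.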


The biases mentioned above are just the most apparent and prevalent ones encountered when using these models. A recent review [1] lists a total of 24 biases found in ChatGPT.

Deciphering AI “Creativity”, Bias, and Hallucination

To comprehend the essence of AI creativity and potential bias, it’s crucial to grasp the fundamental workings of these models. During the training phase, a generator model is employed that produces random recombinations and rewrites of existing content. Concurrently, an adversarial network operates to filter out nonsensical outputs, thereby training the generator model to yield more coherent and contextually appropriate results. In ChatGPT, the best results are presented to a human trainer (labeler), who ranks the results to adjust the generator’s internal reward system [1].

In essence, the generator model initially produces random plagiarized content until the adversarial network trains it to generate content that aligns more closely with the quality content found in the training data, while the human trainer (labeler) steers it toward output that appears more verbose and comprehensive.
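
As a schematic illustration of the human-ranking step only, and not the actual pipeline described in [1], the sketch below shows how a labeler’s ranking of candidate answers could be expanded into pairwise preference examples, the kind of signal commonly used to adjust a reward model. All function names and example answers are hypothetical.

```python
# Schematic illustration only: how a human labeler's ranking of candidate
# answers can be turned into pairwise preference examples, the kind of
# signal commonly used to adjust a reward model. Names are hypothetical.
from itertools import combinations

# Candidate answers produced by the generator for one prompt,
# already sorted by the labeler from best (index 0) to worst.
ranked_answers = [
    "Concise, sourced summary of the competitor's pricing move.",
    "Verbose answer that sounds confident but cites nothing.",
    "Off-topic answer about an unrelated market.",
]


def ranking_to_preferences(ranked):
    """Expand a ranking into (preferred, rejected) pairs for reward training."""
    return [(ranked[i], ranked[j]) for i, j in combinations(range(len(ranked)), 2)]


for preferred, rejected in ranking_to_preferences(ranked_answers):
    # Each pair says: the reward model should score `preferred` above `rejected`.
    print(f"PREFER: {preferred!r}\nOVER:   {rejected!r}\n")
```

Whatever the labelers systematically prefer, such as verbose and comprehensive-looking answers, is exactly what this kind of reward signal will reinforce.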

The potential for bias arises from three sources: the internal scoring mechanism of the adversarial network, the training data itself, and the preferences of the human trainers. All of these elements can contribute to the biases discussed above.

The crucial point is: AI lacks real-world experience. It’s like a theoretical physicist who creates new formulas based on their training but lacks the ability to conduct real-world experiments to validate or falsify their hypotheses. AI is inherently limited to its own form of “hallucination”. It will create a hypothetical “Newton’s Fourth Law” with the same underlying principles it uses to create Elvish songs or new Vogon poetry. In fact, a recent review on hallucination in Large Language Models lists knowledge acquisition and post hoc correction among the major mitigation techniques for hallucination [3].

Conclusion

While AI and Large Language Models like ChatGPT offer significant potential in competitive and market analysis [4], [5], [6], it’s crucial to be aware of the inherent biases these models may carry. These biases, stemming from the training data and the model’s design, can influence the results and lead to potentially skewed conclusions.
Therefore, it’s essential for users to approach AI-generated insights with a critical eye, questioning the results and cross-verifying them with other reliable sources.

The Institute for Competitive Intelligence’s ICI-26 Workshop on “Intelligence Mind Traps and Cognitive Biases” provides valuable guidance in this regard. As we continue to leverage AI in competitive intelligence, understanding and navigating these biases will be key to ensuring accurate and objective analysis.

References

  1. Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Artificial Intelligence Review.
  2. Berman, R., & Daphna-Tekoah, S. (2023). Humans as creativity gatekeepers: Are we biased against AI creativity? Creativity Research Journal, 32(4), 415-424.
  3. Zhang, Y., Li, Y., Cui, L., Cai, D., Liu, L., Fu, T., Huang, X., Zhao, E., Zhang, Y., Chen, Y., Wang, L., Luu, A. T., Bi, W., Shi, F., & Shi, S. (2023). Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint.
  4. Spy Newsletter. (2023, November 13). 21 AI prompts for competitive intelligence.
  5. Spy Newsletter. (2023, November 13). Harness the Power of AI for Competitive Intelligence: ChatGPT, Claude-2, and Google Bard.
  6. Hoffman, C. (2021, January 22). How to fact-check ChatGPT with Bing AI. How-To Geek.
  7. Datta, A., & Lee, M. (2023, November). Use GenAI to uncover new insights into your competitors. Harvard Business Review.
