
I'm not a professional, but...
According to Google, "AI hallucinations are incorrect or misleading results." So what exactly does this mean? When we break it down, it's easier to see: it means AI can LIE. How, you ask? Well, let's explore. Later on, Google states, "These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model."
In my experience, corporate talk goes like this: "insufficient training data" = ignorant, "incorrect assumptions" = jumping to conclusions, and "biases in the data" = AI is just as biased as people are. My question is: if a single human were all of these things, would we let their lies PASS AS FACT? "Part of the difficulty in defining self-deception is the puzzling nature of how an individual can seem to be both aware and unaware of tricking oneself at the same time" (Lewis, 1996). And if AI is trained to "mimic" most humans, could you trust those humans?
But Google is not the only place talking about this topic, and that's a good thing, because Google spends a lot of money on its own biased AI (they have reason to be so vague). An article on Builtin says chatbots are extremely prone to these biases:
"As the model processes more and more text, it begins recognizing patterns in the language, such as grammar rules and word association, learning to understand which words are likely to follow
They go on to explain further and dive into the most common types of hallucinations experienced by AI, the first being "factual inaccuracies," which just so happens to describe a lie told by our friend Bard.
Another article, reference-backed and written by journal editor Robin Emsley, states, "So, use ChatGPT at your own peril," after an experience with his own research, before continuing: "I do not recommend ChatGPT as an aid to scientific writing. While the global move to regulating AI seems to be largely driven by its perceived extinction risk to humanity, it seems to me that a more immediate threat is the infiltration into the scientific literature of masses of fictitious material."
My problem is not that AI is flawed; my issue comes when we are being TOLD we can trust AI to spread "accurate" information, or even worse, to handle our sensitive information. Do we really believe a flawed human can create anything that perfect? This is our modern Titanic, and the casualties will be immense: a generation CRUSHED by lies.
"People who are engaging in knowledge avoidance are “well aware of which information they are avoiding and why” - companies are found guilty of this all the time. research has shown that "industry-related articles reached much weaker evidence conclusions compared to independent studies." -Passing off bias studies, as life altering facts is nothing new and now my new question is... Which companies are participating in knowledge avoidance to push their own agendas? Because at the end of the day- it may not be the AI Consciously lying to us... but Someone is.

If you haven't met Google's chatbot, let me introduce you to Bard: https://bard.google.com
Sources:
https://www.nature.com/articles/s41537-023-00379-4
https://cloud.google.com/discover/what-are-ai-hallucinations
https://builtin.com/artificial-intelligence/ai-hallucination
Lewis, B. (1996). Self-deception: a postmodern reflection. Journal of Theoretical and Philosophical Psychology, 16, 49–66. doi:10.1037/h0091152
Kam, C. (2023). Psychoanalytic contributions in distinguishing willful ignorance and rational knowledge avoidance. Frontiers in Psychology, 14, 1025507. doi:10.3389/fpsyg.2023.1025507
Litman, E. A., et al. (2018). Source of bias in sugar-sweetened beverage research: a systematic review. Public Health Nutrition, 21(12), 2345–2350. doi:10.1017/S1368980018000575