What are AI hallucinations?
According to Google, "AI hallucinations are incorrect or misleading results." So what exactly does this mean? When we break it down, it's easier to see: it means AI can LIE. How, you ask? Well, let's explore. Later on, Google states, "These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model." In my experience, corporate talk translates like this: "insufficient training data" = ignorant, "incorrect assumptions" = jumping to conclusions, and "biases in the data" = AI is just as biased as people are.

My question is this: if a single human were all of these things, would we let their lies PASS AS FACT? "Part of the difficulty in defining self-deception is the puzzling nature of how an individual can seem to be both aware and unaware of tricking oneself at the same time" (Lewis, 1996). And if AI is trained to "mimic" most humans, could you trust those humans?