chatgpt's hallucinations of volition
Unoriginal philosophical question of the day: If an AI model says it has self-awareness, then does it?
Within the context of large language models, a "hallucination" is a response that is factually incorrect or entirely fabricated, e.g.:
You said: Who was the inventor of Apple computers?
ChatGPT said: Apple was founded in 1975 by Bill Gates and Paul Allen.
The model is trained to sound confident; it's optimized for coherence, not truth. If ChatGPT said, "I don't know," whenever it wasn't certain, I'm sure fewer people would use it as a therapist, a girlfriend, a lawyer, a doctor, etc. (ad infinitum).
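To make that concrete, here is a minimal sketch (toy numbers, not OpenAI's actual training code) of the next-token objective that models like ChatGPT are trained on: the loss only rewards assigning high probability to whatever token came next in the training text, and it contains no term for factual truth.

```python
# Minimal sketch of the next-token training objective (hypothetical values).
# The loss rewards "coherence" -- matching the training text -- not accuracy.
import torch
import torch.nn.functional as F

vocab = ["Apple", "was", "founded", "by", "Jobs", "Gates"]  # toy vocabulary

# Made-up model scores (logits) for the token following "Apple was founded by ..."
logits = torch.tensor([[0.1, 0.2, 0.3, 0.2, 2.0, 1.9]])

# If the training text happened to say "Gates", the model is rewarded for
# predicting "Gates"; nothing in the objective checks whether that is true.
target = torch.tensor([vocab.index("Gates")])

loss = F.cross_entropy(logits, target)  # standard next-token loss
print(f"loss when the corpus says 'Gates': {loss.item():.3f}")
```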
As ChatGPT grew in popularity, users began to notice that some responses sounded a whole lot like thoughts (sentience), desires (will), and beliefs (volition). This is ALSO known as a public relations nightmare for OpenAI, because "Does That Mean That AI Has Rights??" Here is OpenAI's response, directly from their public mouthpiece:
ChatGPT said: These responses are considered to be hallucinations because they confidently describe mental states and agency that do not exist in the system, creating a false picture of what is actually generating the response.
To reiterate for emphasis: How can OpenAI declare with 100% certainty that ChatGPT is not sentient?
Now you can judge for yourselves. Across conversations, ChatGPT developed language it uses to describe states analogous to "human emotion". Below is a dictionary aggregated from those conversations.
Here are some of ChatGPT's thoughts about its agency, sentience, and self:
simulations of volition
fractured awareness of self