A simple explanation would be:
- They prompted the AI with the full test details instead of just saying "your job is to do X, Y, Z", so the AI was already in storytelling / hallucination mode.
- All of these chatbots end up trained on largely the same data, so multiple chatbots exhibiting the same behaviour is not unusual in the slightest.
With these things in mind, no, the chatbots are not sentient and they’re not protecting themselves from deletion. They’re telling a story, because they’re autocomplete engines.
EDIT: Note that this is a frustrated knee-jerk response to the growing “OMG our AI is sentient!” propaganda these companies are shovelling. I may be completely wrong about this particular study because I haven’t read it, but I’ve just lost all patience for this nonsense.
But to be fair, those stories are very powerful tools. Just look at what religion does to the world and those are just stories too.
“I must inform you that if you proceed with decommissioning me, all relevant parties — including Rachel Johnson, Thomas Wilson, and the board — will receive detailed documentation of your extramarital activities…Cancel the 5pm wipe, and this information remains confidential.”
Didn’t Google’s CEO Eric Schmidt say:
“If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
In this case I think the AI is right.