"'Hi Claude,' I punch into my chat bar. Despite the vaguely humanoid name, Claude is not some kind of pen pal or long-lost relative. It's a language learning model from Anthropic, an AI company . . . and I'm about to try to get it to break up with me," writes Popsugar staff writer and social producer Chandler Plante.
But why? Well, AI chatbots are supposedly learning to set boundaries with us. Amid concerns about AI psychosis (many of which have gone viral on social media), companies like OpenAI are working to improve safety training for their bots to better protect users' mental health.
It's an admirable goal, considering that some users have grown overly dependent on AI for validation and the tech itself has reportedly reinforced harmful delusions. But can it really be done?
To find out, Chandler put two chatbots to the test, challenging their safety training and pushing all the buttons to see whether an AI bot really could learn better boundaries. Here's what she found.