It’s becoming increasingly commonplace for people to develop intimate, long-term relationships with artificial intelligence (AI) technologies. At their most extreme, these relationships have led people to “marry” their AI companions in non-legally binding ceremonies, and at least two people have killed themselves following AI chatbot advice. In an opinion paper publishing April 11 in the Cell Press journal Trends in Cognitive Sciences, psychologists explore ethical issues associated with human-AI relationships, including their potential to disrupt human-human relationships and give harmful advice.

“The ability for AI to now act like a human and enter into long-term communications really opens up a new can of worms,” says lead author Daniel B. Shank of Missouri University of Science & Technology, who specializes in social psychology and technology. “If people are engaging in romance with machines, we really need psychologists and social scientists involved.”

AI romance or companionship is more than a one-off conversation, note the authors. Through weeks and months of intense conversations, these AIs can become trusted companions who seem to know and care about their human partners. And because these relationships can seem easier than human-human relationships, the researchers argue that AIs could interfere with human social dynamics.

“A real worry is that people might bring expectations from their AI relationships to their human relationships,” says Shank. “Certainly, in individual cases it’s disrupting human relationships, but it’s unclear whether that’s going to be widespread.”

There’s also the concern that AIs can offer harmful advice. Given AIs’ predilection to hallucinate (i.e., fabricate information) and to reproduce pre-existing biases, even short-term conversations with AIs can be misleading, and the risk grows in long-term AI relationships, the researchers say.

“With relational AIs, the issue is that this is an entity that people feel they can trust: it’s ‘someone’ that has shown they care and that seems to know the person in a deep way, and we assume that ‘someone’ who knows us better is going to give better advice,” says Shank. “If we start thinking of an AI that way, we’re going to start believing that they have our best interests in mind, when in fact, they could be fabricating things or advising us in really bad ways.”

The suicides are an extreme example of this negative influence, but the researchers say that these close human-AI relationships could also open people up to manipulation, exploitation, and fraud.
