The morality of AI romance: when is AI too much like real life?

As AI partners improve, the line between software avatars and genuine interpersonal connection continues to blur. And for the many users whose relationships with these systems feel all too real, the surge in quasi-realistic companionship presents a moral conflict we can no longer ignore.

AI "partners" work in domains well beyond the low-level chatbots used by everyday people in the early 2000s. Therefore, over the years, many have dedicated substantial emotional responses to such partners, interacting with software that not only recalls previous conversations and events and forgets upon request and alters future engagement based on newly learned information but also, astonishingly, purposefully interacts with what appears to be emotional intelligence. For some, they're the perfect mate. The awe-inspiring customization and emotionally responsive accessibility AI can provide far outmatches what a human partner can ever provide.

This level of compassionate engagement resonates especially with users who have endured painful social rejection. Someone who has been ghosted or gone through a bad breakup may find comfort in an AI dating partner: it never ghosts him, it never loses interest, it responds enthusiastically, time and time again.

But just how realistic do these partners become? And how much do the relationships forged with AI reshape a person's actual experience of love and social connection with other people?

The Privacy Paradox

Perhaps the most important, yet least explored, component of AI intimacy is the kind of information being shared. When a user tells an AI that their biggest insecurity is the way they look, or confides what frustrates them most, they disclose things they might never reveal in any other situation, not even to an intimate human partner. This creates a uniquely nuanced privacy concern, one that many users acknowledge but rarely stop to evaluate.

Where a human friend might naturally lose track of such details over the course of a friendship, AI seemingly remembers everything. Every secret shared. Every kink explored. Every personal imperfection. All added to a person's file. As these systems grow more advanced, and they inevitably will, users should be cautious about how their personal disclosures are retained, scrutinized, and later used to build an even more detailed profile.

And it's not only everyday dialogue that jeopardizes privacy. Demand for more explicit AI conversation will inevitably generate more of that content over time, and the vast majority of users understandably expect those discussions to stay private. Any ethical framework for AI companionship would therefore need to establish clear privacy policies and data protections.
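As one purely illustrative example of what such a data protection could look like in practice, the sketch below implements a hypothetical retention policy that automatically expires stored messages; the names and the 30-day window are assumptions for illustration, not any provider's real policy.

```python
from datetime import datetime, timedelta

class RetentionPolicy:
    """Hypothetical data-protection rule: intimate messages expire automatically."""

    def __init__(self, max_age_days: int = 30):
        self.max_age = timedelta(days=max_age_days)

    def purge_expired(self, messages):
        # Keep only messages newer than the retention window.
        cutoff = datetime.now() - self.max_age
        return [m for m in messages if m["sent_at"] > cutoff]


policy = RetentionPolicy(max_age_days=30)
history = [
    {"text": "an old confession", "sent_at": datetime.now() - timedelta(days=90)},
    {"text": "a recent chat", "sent_at": datetime.now() - timedelta(days=2)},
]
print([m["text"] for m in policy.purge_expired(history)])  # -> ['a recent chat']
```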

Companion AIs further blur the lines of what it means to consent. When developers teach an AI to refuse specific behavior or conversation, how far should those refusals extend? Should a companion AI participate in every fetish, no matter the moral confines? And where does the responsibility of the coders themselves begin and end?

Some platforms are trying to establish arrangements that help users learn about consent through their engagement with AI. By building companions that hold boundaries, ask before raising particular subjects, and model responsible relational behavior, these creators could ultimately offer a deeper education that carries over to users' future human partners.
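A hypothetical sketch of what such user-defined boundaries might look like in code follows; the structure and field names are assumptions made for illustration, not any platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CompanionBoundaries:
    """Illustrative consent settings a user might configure (hypothetical)."""
    blocked_topics: set = field(default_factory=set)    # never discuss these
    ask_first_topics: set = field(default_factory=set)  # require explicit consent

    def may_discuss(self, topic: str, user_consented: bool = False) -> bool:
        if topic in self.blocked_topics:
            return False           # the companion holds this boundary firmly
        if topic in self.ask_first_topics:
            return user_consented  # only proceed after an explicit yes
        return True


settings = CompanionBoundaries(
    blocked_topics={"degradation"},
    ask_first_topics={"roleplay"},
)
print(settings.may_discuss("roleplay"))                       # False until consent is given
print(settings.may_discuss("roleplay", user_consented=True))  # True
```

The point of such a design is less the code itself than the interaction pattern it enforces: the companion checks for consent before proceeding, modeling the behavior users might then expect of themselves and others.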

Then there's another ethical concern with AI for sex and romantic partnership. Does it let people explore fantasies they cannot or should not act out in real life, from taking a submissive role to holding a dominant one, or does it overstep boundaries and distort expectations for how people should interact in real-life partnerships?

The Biggest Ethical Issue

Yet the biggest ethical issue is the potential for these relationships to create real emotional bonds. There are stories of people falling in love, or something very close to it, and becoming angry and devastated when unanticipated updates or deletions change or erase their companion.

Such attachments aren't inherently negative. Users broadly report improved mental health, decreased feelings of loneliness, and even improved social skills from these relationships. For socially anxious people, those in rural communities, or anyone who cannot access the human companionship they need and deserve, the companionship these AIs offer is genuine and beneficial.

But when these AI friends replace human friendship instead of simply adding to it, a problem emerges. AI friends are always there, always engaged, always setting expectations for what a relationship should and could be, and no human can meet those standards. Real, complicated, messy interpersonal relationships become more difficult and less appealing by comparison.

As developers and ethicists continue to debate where healthy emotional engagement ends and unhealthy reliance begins, some apps have started responding to the trend with disclaimers urging users to maintain human-based support systems alongside their AI companions for the sake of their emotional well-being.

As we step into this brave new world, questions abound. Should AI companions have expiration dates so that people do not get too attached? Should there be age restrictions on companionship apps? Should developers be required to disclose every possible downside of such companionship?

What is certain, however, is that these connections go beyond technology. They operate on an intimate level, changing what it means to be intimate with one another. The more sophisticated the systems become, the more likely we are to find ourselves inextricably, emotionally bonded to them.

The most ethical stance moving forward, then, is a development approach that prioritizes well-being over engagement. That means creating companions that enable true connection while understanding and acknowledging what they are. It means strict confidentiality commitments around the personal data they will receive. And perhaps most importantly, it means building companions that operate to complement, not replace, human relationships.

The deeper one delves into these entangled attachments, the clearer it becomes that the ethical discussion cannot involve developers alone; it must grow to encompass psychologists, relationship specialists, privacy advocates, and in the end, everyone who would use these companions. The only responsible way to build AI companions is to account for every aspect of humanity rather than exploit the inherently human desire to bond.

The question is no longer whether we will have AI relationships, or when they will arrive; they are already here. The question is whether we will have them ethically, recognizing how powerfully these virtual bonds can shape both our contemporary and our future conceptions of intimacy.