Can Talking to a Chatbot Be Considered Cheating? – Commentary by an Expert from the UniLodz Faculty of Management

Just a few years ago, the term "AI friend" sounded like a joke from a futuristic TV show. Today, it's a real part of the lives of thousands of Europeans, and the global market for AI-based companion platforms is already worth $764 million. Forecasts are clear: by 2032, this value will have more than doubled, exceeding $1.66 billion.

Published: 09 December 2025
Photo: Dr Kinga Stopczyńska

At the same time, research by scientists from the Medical University of Wrocław reveals a disturbing picture: as many as 68% of adult Poles feel lonely, and 23% openly describe themselves as socially isolated. It's no wonder that, in this reality, more and more people are turning to technology not only for help but also for companionship. Dr Kinga Stopczyńska from the Faculty of Management at the University of Lodz comments on the phenomenon.

This phenomenon is by no means marginal. In Italy, a staggering 17.93% of respondents declare having an "AI friend." France follows with 14.69%, Poland with 12.72% and Finland with 12.24%. Only Germany and Ireland, where the figures fall below ten percent, seem to view this new form of relationship with greater reserve. The trend, however, is clear: friendship with an algorithm is no longer taboo and is becoming increasingly embedded in European everyday life.

And this is where a question arises that electrifies journalists, therapists and ordinary internet users alike: can a conversation with a chatbot constitute cheating? In a world where generative AI can be always available, always sensitive, always "understanding," the lines are beginning to blur. Betrayal once required letters, secret meetings or at least surreptitious text messages. Today, it may take the form of an app that asks how your day was, offers praise, listens and never says, "I don't have time."

People become emotionally engaged with chatbots for several reasons. First and foremost, artificial interlocutors provide constant, non-judgmental support: they are always available, patient and attentive, and they never criticise. This creates a safe space for the free expression of emotions, especially for people who feel misunderstood in their daily lives. Loneliness also plays a significant role – regular contact with a chatbot can provide a sense of companionship, reduce feelings of isolation and build an emotional bond.

Humans have a natural tendency to attribute "human" characteristics to inanimate objects, a phenomenon called anthropomorphisation. The more advanced chatbots become, the easier it is to believe there's a real person on the other end. The so-called mirror effect is also significant – some systems can reflect the user's moods and thoughts, creating a remarkably powerful sense of being understood.

Chatbots also function as "idealised partners" – they don't have their own needs or moods, nor the complications that arise in human relationships. They provide support without pressure, expectations or conflict, which is why some people perceive them as near-perfect companions. Finally, many users treat such relationships as a new form of intimacy: light, emotionally safe and yet satisfying. For some, chatting, sharing daily life or watching TV shows together is enough to feel that their emotional needs are finally being met.

Psychologists try to cool the emotions, but their diagnosis is clear: betrayal can be emotional, and emotional betrayal doesn't require physicality. If someone turns to AI for support they can't find in their partner, if virtual conversation becomes more satisfying than real-life contact, the problem becomes very real. On the other hand, a chatbot isn't a human. It doesn't feel, doesn't love, doesn't experience; it simulates. Yet if the simulation provides relief, understanding or a sense of closeness, for many people this ceases to matter.

Therefore, the key question is different: not whether AI can cheat, but what role we allow it to play in our lives. Because if an algorithm begins to replace a partner, by whatever definition, something in the relationship is beginning to crumble. In an age when technology imitates emotions ever more effectively, and the problem of loneliness grows ever more serious, setting boundaries for intimacy becomes not so much a choice as a necessity. And perhaps this is where the most difficult conversation begins – not with AI, but with ourselves.

Author of the commentary: Dr Kinga Stopczyńska
Editing: Faculty of Management, University of Lodz
