A Greek woman has decided to end her 12-year marriage after an interaction with an artificial intelligence chatbot led her to believe her husband was cheating.
According to reports referenced by the Daily Mail, the woman, a mother of two whose identity has not been revealed, turned to ChatGPT for an unusual task—interpreting coffee grounds left in her husband’s cup.
The act, known as tasseography or tasseomancy, is traditionally a form of fortune-telling practiced by psychics, not digital assistants.
Unconventional as the request was, she uploaded an image of her husband's coffee cup and asked the AI to give her a reading.
To her astonishment, the chatbot allegedly indicated that her husband was involved in or considering a romantic relationship with another woman—one whose name supposedly started with the letter “E.”
The chatbot reportedly went further, suggesting the woman in question was plotting to destroy their marriage.
The incident sparked a major fallout in their relationship. The husband later appeared on the Greek morning program To Proino, where he shared his version of events.
He claimed this was not the first time his wife had placed her trust in mystical interpretations.
In the past, she had been deeply influenced by an astrologer's predictions, and it took her a year to let go of them.
“I laughed it off at first, thinking she would come to her senses,” he said. “But she became serious. She told me to move out, informed our children that we were getting divorced, and then her lawyer contacted me.”
Although the man initially refused to consent to the divorce, he received the official documents just a few days later.
His legal team is now challenging the separation, arguing that claims made by an AI system cannot be used as credible grounds for divorce and asserting that he is being wrongfully accused.
The story has attracted attention online, particularly on Reddit, where users reacted with a mix of humor and concern.
Some joked that artificial intelligence might be encroaching on the work of fortune tellers, while others shared cautionary tales about the limitations of AI.
One user recalled how ChatGPT once stubbornly insisted on an incorrect count of the letters in a word, illustrating how AI can occasionally produce absurd or flatly wrong answers.
Others warned that tools like chatbots may blur the line between fiction and reality for individuals who are already vulnerable or overly reliant on technology.
“This is the kind of situation where you realize how powerful these tools are—and how dangerous they can become if people take them too seriously,” one person wrote.
As artificial intelligence continues to shape daily life in unexpected ways, stories like this serve as a reminder that not everything generated by a machine should be taken at face value.