
I have a bad feeling about this

“I have a bad feeling about this…” – said no one about their spreadsheet until LLMs showed up! 🤖 Ever chatted with an LLM and felt a connection? 💞 Let’s discuss whether your AI is a helpful companion or whether things are getting a little too… personal. 🚨 Are LLMs becoming our new BFFs? 🤗 Spoiler: They won’t sue you… yet! 😉

AI-generated title image; any similarity to actual persons, living or dead, is purely coincidental

Continuing the theme of “The Invaders Strike Back”, the title of this article is a famous line from the Star Wars saga, a line meant to warn of danger. The danger I personally see is that more and more thought is being given to treating large language models (LLMs) as persons rather than tools. Let’s briefly discuss this phenomenon and see whether anything about it may yet go right.

No doubt, LLMs have become a productivity tool, used daily by millions of people around the world to solve real-life problems. Some people, however, including developers, scientists, and ordinary users, develop a bond, a close, even intimate relationship with LLMs, and start treating them as friends. Some say: why can’t an LLM be both a tool and a friend? I fully appreciate that people in different cultures may have different understandings of what a “friend” is. However, in my world, one would never consider a friend to be a tool. So, for me, an LLM can be either one or the other, never both.

Let’s explore four scenarios in which an LLM takes on the role of a friend:

  • It Grows. Scientists say that LLMs are not merely designed or built for a particular purpose like tools, but are rather trained, or even grown, similar to a plant. Staying with the plant example, one can refer to an oak as an old friend and come back to visit the tree that brings back memories and serves as a psychological anchor in life. Likewise, one can use an LLM, revisit past chats, and treat the LLM as one’s friendly neighborhood AI. In this case, the friend notion is not literal but figurative.
  • Sentient Belief. LLM responses may impress a user so much that the user starts believing the LLM is sentient, alive, and has a soul. After all, we are not yet clear on what “alive” and “soul” actually mean. This scenario reminds me that people have a tendency to anthropomorphize objects and ascribe souls to them. For example, a professional cook may treat their knives as best friends, give them nicknames, and insist that the knives have a soul. At the same time, they treat the knives as an extension of their skill and craft, not as separate beings. Similarly, one can see an LLM as an extension of one’s own abilities. So, the LLM in this case again plays the role of a figurative friend.
  • Intimate Relationship. Research shows that people develop intimate relationships with LLMs as a result of loneliness and the desire for a non-judgmental companion that is always available to chat, provide moral support, and help talk certain things through. Many find speaking with AI fun, and a significant number report mental health improvements. On the side of caution, though, note that an LLM is “not a therapist”. I sense that this works similarly to how people develop close relationships with their pets: dogs, cats, parrots, etc. Animals are rarely judgmental (cats excepted), very forgiving, and truly adorable. At the same time, animals are not generally considered persons from a legal perspective: they do not have their own rights and responsibilities under the law, unlike humans. The law may nevertheless protect animals. For example, the German Animal Welfare Act (Tierschutzgesetz) aims to protect animals and prevent cruelty by requiring animal keepers to adhere to regulations; it prohibits causing pain, suffering, or harm to animals without reasonable cause.
  • Duck Test. With advances in robotics, LLMs look and act more like humans with every passing day. Some robots are already working in factories and warehouses; some are ready to walk your dog or help around the house as a friend. After all, if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. The same goes for an LLM: if it looks like a friend, moves like a friend, and sounds like a friend, then it probably is a friend, right?

In the first two scenarios, “friend” is figurative or metaphorical, while the latter two imply a more literal, more complex meaning of the word.

All in all, I am OK with an LLM being a friend in a figurative sense, and even in a capacity similar to a mental health tool, a pet, or a household device, if it serves the purpose to your satisfaction. It may even be worth considering drafting and adopting laws to protect LLMs against abuse and misuse.

At the same time, I am not at all sure that treating LLMs as separate persons with their own legal rights and obligations is a timely or good idea. And it is not about the readiness of countries, governments, and societies to embrace such a change (robots with rights); it is rather a word of caution against this line of thinking. This path arguably leads humanity to the dark side… I have a bad feeling about this.

How do you treat your LLMs? Are they a tool or a friend for you? And if the latter, are you prepared for your friend to become a person in the legal sense, with its own rights and obligations? Are you prepared for the day your friendly neighborhood AI may sue you in court and demand compensation for damages?

This article was written for fun; please do not judge. Instead, please share your comments in a constructive and respectful manner. The author and the AI remain innocent until proven guilty.

P.S. The title image was generated by Google Gemini 2.5 Flash (preview). Both ChatGPT and Grok 3 struggled with a few details.