elelex boutique law firm

LLMs are multiplying faster than rabbits! 🐇 But can you trust their answers on important matters? 🤔 Do we need ethical AI 🤖, or just users who can handle their... unique perspectives? 🌍 Let’s explore the dilemma together: should we train the AI to be good, or train ourselves to spot the questionable? 💡 Read on and let's debate!

If a tree falls in a forest

AI generated image, any similarity to actual persons, living or dead, is purely coincidental

In my previous article “Think, Write, Innovate” I briefly discussed various historical methods of storing knowledge, culminating in the latest advancements in digital media and AI. As more of the physical world is digitized and this data is fed into Large Language Models (LLMs), these LLMs are becoming the go-to solution for individuals and businesses seeking answers to their most important questions.

It is easy to imagine a future where older forms of media are used less frequently or not at all. Ask yourself: Would you travel to a distant mountain to see a cave painting, or would you prefer to view a photo of it? Would you visit a library to read a paper book, or would you rather read the same book on your digital screen? Is it easier to find a physical photo (or book), or a digital one? Or perhaps it’s simpler to search online or ask your favorite LLM?

Last month saw a flurry of new LLM releases that made waves across the Internet. To name a few, but far from all: Gemma 3 by Google on 12 March 2025 (also mentioned in my article “In what language do you think?”), EXAONE Deep by LG AI Research on 18 March 2025, OpenAI New Audio Models by OpenAI on 20 March 2025, DeepSeek V3-0324 by DeepSeek on 24 March 2025, 4o Image Generation by OpenAI and Gemini 2.5 Pro Experimental by Google both on 25 March 2025.

Paraphrasing the saying “If a tree falls in a forest and no one is around to hear it, does it make a sound?” in the context of LLM releases: If an LLM is released and no one uses it, does it make a difference? Although both 4o Image Generation and Gemini 2.5 Pro Experimental were released just a few days ago, their immediate availability to many people around the world (though 4o Image Generation is only available to paid users) has allowed these models to overshadow other LLM releases.

But here is the catch – have you noticed how different LLMs respond to the same questions? Not simple or advanced math with predictable outcomes, but questions about life and personal values? If not, please read my previous article, “There are no results for ‘Think of one word and one word only’”.

It seems to me that the crux of the matter lies in the training data and the constraints placed on LLMs, which shape their “thinking” about the world. Without going into awkward examples where some LLMs misrepresent historical images, refuse to name famous people, or discuss certain events, one can sense the problem: Unless an LLM is designed with an ethical purpose in mind and at heart and treats its input data fairly, it is unlikely to be safe, reliable, or trustworthy. But does it matter? Does an LLM really need to be trustworthy? After all, people do not always trust each other, yet the world keeps turning.

Should LLMs be ethical and trustworthy, or is it acceptable for LLMs to be more human-like, speaking their “mind” and letting real humans decide what to accept? But what if a real human lacks the experience or knowledge to make an informed decision, to accept only the “good” responses and reject the “bad” ones? This feels to me like a problem worth solving.

What do you think? Should LLMs be trained to be ethical and trustworthy, or should people be better trained to discern “good” from “bad” LLM responses? Please share your thoughts in the comments.

This article was written for fun; please do not judge. Instead, please share your comments in a constructive and respectful manner. The author and the AI remain innocent until proven guilty.

