AI not make one great

A long time ago in a galaxy far, far away… 🚀 AI not make one great! 🤖 Dive into the wild world of AI ethics, and why we should all “Think before you AI” 💡. Join the Light Side 🌟and let’s make AI great responsibly again! Spoiler: The future’s in our hands (for now) 😅

AI generated image, any similarity to actual persons, living or dead, is purely coincidental

This past week was packed with major AI events: Microsoft Build (19 – 22 May), Google I/O (20 – 21 May), and Anthropic's release of Claude 4 (22 May). It reminded me of the flurry of new large language models (LLMs) released in March, which I mentioned in my article “If a tree falls in a forest”.

In that article, I questioned the real-world impact of the new LLMs, especially since most are not readily available to scientists, businesses, and the general public. The issue I raised then remains relevant today and will likely remain so for the foreseeable future:

“Unless an LLM is designed with an ethical purpose in mind and at heart and treats its input data fairly, it is unlikely to be safe, reliable, or trustworthy”.

I would like to extend the above problem statement by adding that the wise and responsible use of LLMs is another part of the equation. Anthropic's decision to activate AI Safety Level 3 protections for Claude Opus 4 only underscores this concern and raises the pressing question: What comes next?

In my previous article “I have a bad feeling about this”, I briefly touched on whether we should treat LLMs as tools or as persons. I lean toward the former. LLMs are tools, and humans remain their primary users, responsible for both their use and its consequences. For now, let’s set aside the emerging trend of LLMs interacting with each other via protocols like the Model Context Protocol (MCP) or Agent2Agent (A2A). Even in those cases, however, humans remain the ultimate end users, or at least are intended to be.

Criminal law clearly differentiates between a subject (a person committing a crime) and an instrument (a tool used by the subject to commit a crime). A knife is not treated as a person. It is the intent, malice, or negligence of the person, not the knife, that plays the pivotal role. So, it is not the knife but the murderer who is tried in a criminal case.

The saying “double-edged sword” perfectly depicts any capable tool. A knife can help one make a salad, but it can also be used to kill. LLMs so far cannot kill directly (wait until robots arrive en masse), but there are early signs of agentic features and tool use, where LLMs are becoming increasingly capable of making a difference in the real world, e.g., hiring a hitman on the Dark Web or blackmailing a person.

LLMs absorb vast amounts of data during training and post-training stages. More companies are working on personalized LLMs that not only remember your past communication but synthesize an understanding of you as a person. With soon-to-be almost infinite memory and context windows, LLMs may become even more acute and precise in their responses, which is both good and bad, much like a “double-edged sword”. So, it is what we, people, say and do every day that shapes LLMs’ inputs and the search paths in their neural networks to produce the most likely and effective output. Whether the effective output is good or bad is up to us to decide. ChatGPT’s recent turn toward being overly “sycophant-y and annoying” shows a glimpse of the Dark Side.

Remember that scene in The Fifth Element where Leeloo learns about WAR? She says, “Everything you create, you use to destroy”. And Korben replies, “Yeah, we call it human nature”. But is that all there is to human nature? I don’t think so. The future is ours to shape, and it’s up to us to architect the desired future.

I invite you all to join the Light Side, to focus on peace, balance, and responsible use of AI to make us and the world around us better, every single day. The time to act is NOW.

Would you be willing to replace the email-signature cliché “Think before you print” with “Think before you AI”? And yes, I expect “AI” or a similar word meaning the use of AI to become a verb one day, just like “google”.

AI. Or AI not. There is no try.

I strongly believe that AI doesn’t make someone great. Greatness comes from a ‘capital-H Human’, someone worthy of admiration, a role model, a person who inspires others to be and do better. Only then should we use tools like LLMs wisely and widely to create positive change. LLMs are an extension of our own abilities – a ray of light that can either guide us through the darkness of the unknown cosmos or burn us to the ground. The choice is ours.

What are your thoughts on building and using LLMs responsibly? Should we be cautious now, or do whatever offers “the best bang for your buck” just because we can? Are you personally waiting for an AI doomsday to arrive before getting off the couch?

This article was written for fun; please do not judge. Instead, please share your comments in a constructive and respectful manner. The author and AI remain innocent until proven guilty.

P.S. The title image was generated by the Imagen 3.0 002 model in Google AI Studio. It offers an option to generate images in a 16:9 aspect ratio. Amazing! 🤗