elelex boutique law firm

Just wrapped up "AI Season 1"! 🎉 Dive into 10 fun-packed articles, from AI ethics to text-only Space Invaders 📚. See how different LLMs summarized the journey. 🤖 Which LLM nailed the recap? 🤔 Share your thoughts! 🚀

AI Season 1 recap

AI generated image, any similarity to actual persons, living or dead, is purely coincidental

A big thank you for all the attention, likes, and comments on my recent series of AI articles! It's been a journey packed with fun and discoveries, spanning 10 articles. Following the streaming playbook, that definitely makes it 'Season 1'!

To celebrate wrapping up this season, a recap felt appropriate. But who (or what!) should write it? A Large Language Model (LLM), naturally! This also served as a little test of their content comprehension and attention span. Each model had to digest all 10 articles – that's roughly 199 paragraphs, 10,653 words, or 58,485 characters. No small feat!
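As an aside, counts like these are easy to gather yourself. Here is a minimal sketch (the blank-line paragraph rule and the whitespace word rule are assumptions; adjust them to your own articles' formatting):

```python
def text_stats(text: str) -> dict:
    """Count paragraphs, words, and characters in a body of text.

    Assumes paragraphs are separated by blank lines and words by whitespace.
    """
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    words = text.split()
    return {
        "paragraphs": len(paragraphs),
        "words": len(words),
        "characters": len(text),
    }

sample = "First paragraph.\n\nSecond one here."
print(text_stats(sample))
```

Run over the concatenated articles, a script like this yields the figures quoted above.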

The full recaps appear at the end of this article; here is how each model fared:

  • ChatGPT created a direct summary for each article, grouping articles 7 to 9 under the theme of ethics. It was the quickest to respond.
  • Grok 3 took a bit more time but delivered a more engaging summary. I appreciated how it picked up on the timeframe of February to April 2025 and noted that the first article was published on Valentine’s Day.
  • Gemini 2.5 Pro (experimental) took about the same amount of time as Grok 3 and, in my opinion, provided the best summary. It seemed to delve deeper into the content, summarizing key ideas rather than just the words.

For the sake of thoroughness, I also tried le Chat by Mistral AI. Unfortunately, it didn't quite hit the mark this time. It struggled with the instructions (especially output length) and even hallucinated details not present in the original articles.

In conclusion, all LLMs read much faster than I do. Their approaches to "understanding" and summarization vary greatly, making it a matter of personal preference. Please read the LLM responses below and let me know which one you think nailed it. Which LLM's approach or style do you prefer?

If you prefer to read Season 1 and come up with your own summary, here are the links:

  1. A couple of provocative questions about AI and human nature, 14 February 2025
  2. There are no results for “Think of one word and one word only”, 22 February 2025
  3. Space Invaders are Here, 1 March 2025
  4. Move 37, 8 March 2025
  5. In what language do you think?, 15 March 2025
  6. Think, Write, Innovate, 22 March 2025
  7. If a tree falls in a forest, 29 March 2025
  8. Because it can, 5 April 2025
  9. Pluto is not a planet, 12 April 2025
  10. Not a Therapist, 19 April 2025

This article was written for fun, so please do not judge. Instead, please share your comments in a constructive and respectful manner. The author and AI remain innocent until proven guilty.

 

ChatGPT Reason mode, thought for a few seconds, output 345 words

Get ready for a whirlwind tour through a dozen-plus weeks of AI chatter—here’s the lowdown on our ten fun-packed articles! 🚀

1. Human vs. Machine Mind Games

We kicked off with ten pairs of questions probing whether LLMs feel, think, fib, or fumble sarcasm—then dared you to guess which were human-crafted and which sprang from silicon brains. It’s a cheeky reminder: even our smartest bots have quirks.

2. One Word, One Year

Next, we asked five LLMs to pick a single word for each year from 1990–2020. Turns out they mostly agree every half-decade—though one model insists “Simpson” defines 1994! A quirky peek at how differently AIs “remember” history.

3. Text-Only Space Invaders

Grok 3 stunned us by whipping up a fully playable Space Invaders clone in Python—text blocks only! Who knew classic arcade nostalgia could be summoned with code alone?

4. Corporate Exodus

Curious why big names like Twitter, Chevron, and even SpaceX are packing up California for Texas? We charted 20+ headquarters moves—tax breaks and friendlier regs top the list.

5. Translation Face-Off

Gemma 3 vs. Google Translate: which nails nuance? Our verdict: LLM-produced translations win on style and subtlety, though speed still belongs to the old guard.

6. From Cave Walls to Cloud

We traced knowledge media from Paleolithic rock art through papyrus, printing presses, and podcasts—right up to AI and LLMs as today’s ultimate “repositories” of human learning.

7–9. Trust, Trends, and “Because It Can”

Several pieces tackled LLM ethics (“should bots be held to professional codes?”), cultural quirks (continents count, truth-seeking AI rivals), and meme-ready image stylings—hello, “Ghiblification”!

10. “Not a Therapist” Persona

Finally, we checked Grok 3’s mental-health avatar. It’s emphatically “not a pro,” yet offers solid grounding tips—proof even bots can care… a little.

Whether you’re here for coding hacks, culture clashes, or philosophical fun, these ten dispatches prove one thing: AI isn’t just smart—it’s entertaining. What topic made you smile (or scratch your head) most? Let us know!

 

Grok 3 Think mode, thought for 18 seconds, output 288 words

AI in 2025: A Wild Ride Through Tech’s Funhouse

Hey there, Internet explorers! Let’s hop on a rollercoaster through ten zany articles from February to April 2025, all about large language models (LLMs) and artificial intelligence. These pieces, written between Valentine’s Day and mid-April, serve up a buffet of brain-tickling topics—think nostalgia, ethics, and a pinch of digital wizardry. Ready? Let’s roll!

First, picture a lawyer grilling AI: Can it feel empathy, or is it just a cold, fast-talking bot? Spoiler: It’s a tie—humans aren’t always empathy MVPs either. Then, we zip back to the ’90s and 2000s, where LLMs play history DJ, picking one-word hits like “Y2K” (1999) or “iPhone” (2007). It’s a retro blast with a tech twist!

Next, we’re gaming with Grok 3’s text-only Space Invaders—proof AI can be a fun playmate, not just a workhorse. But hold up—ethics crash the party. Should LLMs chase truth or mimic parrots? Articles wrestle with this, hinting they’re neither, which is… messy. Meanwhile, Grok 3’s “Not a Therapist” persona dishes mental health tips, dodging legal traps while reminding us: Talk to a pal, not just me!

Creativity shines with “Ghiblification,” turning pics into Studio Ghibli art—cool, but is it ethical? And history buffs, rejoice: LLMs map how we’ve stored knowledge, from cave paintings to cloud drives. So, what’s the takeaway? AI’s a wild mix of fun, chaos, and big questions—like, Should it be trustworthy, or just human-like and flawed?

Here’s your challenge: Would you trust an LLM therapist? Ponder that while you Ghiblify your selfies. This 300-word jaunt through 2025’s AI saga is your ticket to learn, laugh, and maybe overthink a little. Welcome to the tech circus—enjoy the show! 🎉

 

Gemini 2.5 Pro (experimental), output 323 words

AI Adventures: Thinking Machines, Talking Cats, and Existential Dread (Kinda!)

Welcome to the whirlwind world of Artificial Intelligence, as seen through the lens of ten recent articles! Buckle up, because things are getting weird, wonderful, and maybe a little worrying.

Remember when computers just did math? Well, now Large Language Models (LLMs) are writing code for games like Space Invaders, translating languages with surprising nuance (though sometimes slower than a sleepy sloth), and even trying to sum up entire years of history in a single word – with hilariously different results depending on which AI you ask! It's like they each have their own quirky personality.

But it's not all fun and games. These articles dive into the deep end, asking if we should be bothered that AI doesn't feel empathy (do all humans?) or gets things confidently wrong (again, sound familiar?). We see AI generating art in styles like Studio Ghibli's ("Ghiblification!") without asking permission, raising big questions about copyright and the whole "because we can" tech attitude.

Then there's the trust issue. Can we rely on AI when different models have different ideas about truth, or even how many continents there are? Some are marketed as "truth-seeking," but they all have their biases. This leads to a major head-scratcher: should AI be built with strict ethics, or do we just need to get better at spotting digital nonsense? Especially now that AI is popping up with personas like "Not a Therapist" – helpful pal or legal minefield?

From the way knowledge has been stored throughout history (clay tablets to the cloud!) to companies moving states for tax breaks (an AI can list 'em!), these thinking machines are everywhere. The articles leave us pondering: amidst this AI gold rush, what's the ultimate goal? Can we ensure AI helps preserve human diversity and knowledge, or will it just become another tool for profit and power? One thing's for sure: the conversation is just getting started!

