…a reflection on the AI landscape.
The Age of Enlightenment is here, and I'll get to that soon… but perhaps not in the way you think.
I know that most of what's being written today is essentially about AI. (For crying out loud, this text was "written" by Claude, my "buddy" with a capital C.) So I thought I'd write another one – but maybe not quite the euphoric, evangelical post you normally see, even though I too can probably count myself among the enlightened.
If you measured the share of posts on LinkedIn today, a vast majority would be about AI in some form. On X, the share might not be quite as dominant, but my feed is overflowing nonetheless. I'm also starting to hear and see how tired people are of this topic, and I don't blame them – I count myself among that group too. They might not be tired of the subject itself, but they're tired of the euphoria, the "bragging" about what AI does, the herd mentality. The image of a Messiah who has come to save us, but also of a Judas or Lucifer leading us to ruin. Based on my own experience, though – and I could be wrong – I have a feeling that this picture isn't always the whole truth.
There are of course many truths in everything being written – as with everything – but also many exaggerations and fabrications. What can we trust? What's written by AI, what's true, what's exaggerated, what's fabricated, what's false? I know from experience that AI excels at exactly these things – exaggeration, fabrication, falsehood – but AI is created by humans, and if we're being honest, those traits come from us. Very much so.
The Enlightenment
I'm the CTO of Odyssey, a scale-up in market and brand research. We deliver a SaaS service that provides our clients with brand insights – brand trackers, brand positioning, trend analyses, decision-making data – digitally and in real time. We've gone from a larger organisation to a small, lean one, largely thanks to our early commitment to automation and digitalisation. AI has become yet another lever in that work.
And this is where I need to be honest.
Despite having long viewed parts of the AI hype as just that – hype – I've become enlightened in recent months. Truly enlightened. Not by the big headlines or the viral demo videos, but by what's actually happening in the day-to-day work. Especially in code creation and building digital solutions, the development has gone from impressive to transformative.
We've gone from deep scepticism to what I would call AI-native. The majority of our development now happens with the help of AI. All code development, new features and services – we've even dared to let it loose on code maintenance across all our existing codebases. AI is everywhere.
The turning point? It didn't come from a new model per se. The models have probably been good enough for six months or more. What changed everything was the tooling. When tools like Claude Code and similar CLI-based AI assistants matured, something shifted. Suddenly we could work with AI in a way that actually fit into a real development process, not just as a glorified autocomplete.
But the tooling was only half the equation. What truly unlocked the potential was finding our method. We've developed a workflow where AI documents and structures the work, where we give it deep context about what's being built, manage its memory and history deliberately, and guide it with well-thought-out plans and instructions. It might sound like a traditional development process – and fundamentally it is – but with AI it becomes extremely powerful. We've found our model. And it took time to get there.
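To make that a little more concrete, here's a simplified sketch of the pattern. It is not our actual tooling – every file name and helper below is invented for illustration – but it captures the shape: context first, plan first, memory kept deliberately.

```python
# A hypothetical sketch of plan-driven AI development, for illustration only.
# The file names and helpers are invented; this is not our production setup.
from pathlib import Path

def build_context(task: str) -> str:
    """Give the AI deep context before it ever sees the task itself."""
    parts = [
        Path("docs/architecture.md").read_text(),   # what is being built, and why
        Path("docs/decision-log.md").read_text(),   # history: choices already made
        Path("plans/current-plan.md").read_text(),  # the plan the AI must follow
    ]
    return "\n\n---\n\n".join(parts) + f"\n\nTask: {task}"

def record_outcome(task: str, summary: str) -> None:
    """Deliberate memory: every completed step is written back into the history."""
    with Path("docs/decision-log.md").open("a") as log:
        log.write(f"\n## {task}\n{summary}\n")
```

The point isn't the code – it's that the AI never starts from a blank slate, and never leaves one behind.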
The Uncomfortable Truth
And here's what I really want to say: it's not as simple as it looks.
Our data and deliverables are used deep within our clients' decision-making processes and leadership teams. Our insights serve as steering instruments. If we were to unleash AI straight away, without safeguards, without verification – and it got things wrong – it wouldn't land as an "oops" in a chat window. It would land in a boardroom presentation, in a strategic plan, in an investment decision.
Being wrong is not allowed!
And AI is wrong sometimes. That's not a bug, it's a feature. Models hallucinate, they jump to conclusions, they present uncertainty as fact. In a consumer app, that can be annoying. In an enterprise context, where the deliverable is decision-making data, it can be devastating.
So we build safeguards. We build architectures with multiple AI agents that verify each other. We build layers of validation. We protect our clients' data and our own intellectual property so that nothing leaks where it shouldn't. We build protection against misuse – ensuring our service isn't used for things it wasn't intended for.
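Stripped to its skeleton, the verification idea looks something like the sketch below. It is purely illustrative – ask_model is a stand-in for whatever LLM API you use, and the real pipeline has far more layers – but the principle holds: nothing the generating agent produces reaches a client without an independent agent signing off.

```python
# Illustrative skeleton of cross-verifying agents. `ask_model` is a stand-in
# for a real LLM API call; the roles and prompts are invented for the example.
def ask_model(role: str, prompt: str) -> str:
    """Placeholder: wire this to your model provider of choice."""
    raise NotImplementedError

def generate_insight(data_summary: str) -> str:
    # One agent creates the deliverable from the underlying data.
    return ask_model("analyst", f"Draft a brand insight from this data:\n{data_summary}")

def verify_insight(insight: str, data_summary: str) -> bool:
    # A separate agent checks every claim against the same data.
    verdict = ask_model(
        "verifier",
        "Check every claim in the insight against the data. "
        f"Reply PASS or FAIL with reasons.\n\nData:\n{data_summary}\n\nInsight:\n{insight}",
    )
    return verdict.strip().upper().startswith("PASS")

def deliver(data_summary: str, max_attempts: int = 3) -> str:
    """Nothing ships without passing verification; repeated failures go to a human."""
    for _ in range(max_attempts):
        insight = generate_insight(data_summary)
        if verify_insight(insight, data_summary):
            return insight
    raise RuntimeError("Verification failed repeatedly – escalate to a human.")
```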
And here's the thing that keeps me up at night: one AI uses "best practice" to create something. Another AI uses "best practice" to validate it. But is this really the best practice we want? When best practice validates best practice, who's actually thinking or being creative?
Don't confuse my scepticism with denial. I love the new tools. I'm becoming AI-native in everything I do. But I believe my experience, my knowledge, and most importantly my human intellect are still crucial for validating business outcomes, quality, security, and the validity of what's being created. At least in the LLM world. Maybe something else is coming after that.
This is the part of the AI journey that rarely shows up in the LinkedIn feed. It's not sexy. It's not a 30-second demo. It's painstaking, detailed, necessary work.
The Coming Costs Nobody's Talking About
There's another elephant in the room: the costs.
Right now, it's relatively cheap to experiment with AI. But running AI in production, at enterprise scale, with requirements for reliability, security, and performance – that's a different matter. API costs, compute power, infrastructure for monitoring and failover, the expertise to manage it all. It adds up quickly. The modern tools that actually deliver cost money. For a small organisation like ours it's manageable, but for large organisations looking to roll this out at scale, the sums quickly become significant.
There's also a cost that's easy to overlook: when AI makes the wrong choices. A poorly guided AI can generate entire architectures, infrastructure decisions, or code paths that seem reasonable on the surface but are fundamentally flawed. By the time you discover it, you've built on top of it. Unwinding that isn't a quick fix – it can be catastrophic, both in time and money. This is where seniority becomes non-negotiable. You need experienced people who can evaluate what AI produces, steer it in the right direction, and catch the mistakes before they compound. AI without senior guidance isn't just inefficient – it's a liability.
And then there's an uncertainty that few are talking about: the AI companies' own business models don't quite add up. They're burning capital at a ferocious pace. Which leads to an uncomfortable question – when will the price increases come? It doesn't feel like a question of if, but when we'll see a wave of cost increases across the market that nobody has really accounted for. Making a serious projection today of what AI will cost in two years is nearly impossible, and that's a real problem for anyone trying to build a sustainable business on this technology.
And here's the paradox: leadership and clients expect rapid delivery of AI functionality. They see the magic in the demo videos. They want it yesterday. But going from demo to production – from "wow, look what it can do" to "this is reliable enough to steer business decisions" – that takes time, it takes investment, and above all it takes seniority in how you guide AI toward the right architecture.
Maturity, Not Magic
So what's my message?
AI is transformative and fantastic. I mean that sincerely. We've seen it in our own organisation, in our own code, in our own product development. We've gone from cautious sceptics to believers, not because we bought into the hype, but because we sat down and did the work.
But it demands maturity. It demands critical thinking. It demands that you understand what AI can do, but equally what it cannot. It demands control, planning, and architectural seniority – especially in enterprise contexts where your deliverables actually influence other people's decisions.
It demands that you stop chasing the magic and start building properly – only then does AI become an enlightenment!
Our Odyssey
At Odyssey, we're now steering all our internal processes toward AI-assisted workflows. Not because it's trendy, but because we've made the journey, tested, failed, adjusted – and landed on something that works. That is our odyssey. A journey that is far from over, but where we've at least learned to navigate.
And if I'm being completely honest: the real value lies in the navigating. Not in the destination. Because the destination keeps moving.
Peter Lapalus - CTO @ Odyssey