This is the only post you will read today that is not written or edited by AI - enjoy!
If you’re like me, you’re reading a lot of contradictory stories about this new, fascinating technology called Artificial Intelligence. It’s ubiquitous. You probably have conversations with colleagues, partners, and friends about ‘how you are using AI’, or ‘what’s your take on AI’. That’s not an easy question to answer about a technology that is constantly changing and has inherently contradictory qualities.
On the one hand it is smarter than your lawyer and better at writing code than your developers. On the other hand it’s less intelligent than your toddler.

On the one hand, companies claim to be firing thousands of software engineers because of AI. On the other hand, demand for software engineers is increasing - especially at the top AI firms.
On the one hand, AGI (Artificial General Intelligence) seems to be around the corner, or to have been achieved already. Pundits claim their AI “shows signs of being conscious” and is plotting to take over the world, and that we’ll all be out of a job tomorrow. On the other hand, adoption is really low, workloads are increasing, people are burning out, and efficiency gains are slim or non-existent.
On the one hand you’re using ChatGPT (or Claude, or Gemini, or Grok, or …) yourself and you’re quite fascinated. On the other hand, the fascination is already starting to wear off - especially once you notice that it can be really dumb in areas you know a lot about.
So which one is it? Somehow the concept of AI seems to be a fleeting one. There are very few good takes or predictions out there. A majority of people seem to underestimate the impact of AI. A minority greatly overestimates it. The truth is of course in the middle. But not like an average single point of truth that looks the same from all angles. More like a chimera that drastically changes its shape depending on the angle you look at it from. Some narrow domains will be dominated by AI. Many new domains will be created. And some will barely change. So, what about BizAv and AI? Where are we headed?
I’ll try to answer this by answering some general questions about AI. To keep it short, I’ll add plenty of links in case you’d like to go down the rabbit hole. After that, I’ll add some actionable tactics to help you navigate these cloudy conditions.
Are we on the final stretch to Artificial General Intelligence?
No, we are not. What’s currently called AI is Large Language Models (LLMs): statistical systems trained on vast data sets (like the entirety of the internet). They are built to mimic intelligence - to sound smart. However, it is an illusion. There is no intelligence. There is no reasoning. There is no awareness. And there is no direct path from LLMs to AGI using the same technology. There are hard, mathematical limits to scaling LLMs towards AGI.
It’s a powerful technology - but it is not consciousness. People suggesting otherwise have other motives.

Why do I read all these bullish doomsday stories about AGI taking over everything?
Because there is a lot of money at stake. The largest and richest tech companies are placing huge bets on this technology. They expose you to their sales pitch (‘if you’re not deploying AI in your day-to-day operations, you’re done!’) and their fundraising pitch (‘we need to raise $nn billions to train the next model, which will really be AGI!’). Remember when Uber rides were really cheap because they were subsidised by a company fighting for market share? Or scooter rides? That’s where we are with AI companies today - just an order of magnitude larger.
AI companies have to raise trillions of dollars this year. Amazon, Google, Meta, and Microsoft add up to $700bn of investments in 2026 - OpenAI was just valued at $500bn. That’s not possible by selling a sophisticated auto-correct. They need to push doomsday narratives. They’re taking on huge bets, sacrificing their free cash flow (Google’s is bleeding from $73bn down to $8.2bn, a drop of almost 90%) and paying influencers up to $600k for campaigns ($100k per post).
AI is cheap - right?
Do not file AI in your brain under ‘forever cheap’. Prices will go up, guaranteed. Agentic development can already be more expensive than a human developer. Anthropic, the provider of Claude, is subsidising subscriptions to the point where users burn 8x-13.5x their subscription fees in tokens. Clearly not a sustainable business model as is - and it gets developers used to a quality of life they can’t afford in the long term.
In a recent change, Anthropic not only charges you for producing code, but also for reviewing the code it produced itself! So it has a direct incentive to produce bad code, double-dipping into your wallet. Be careful what you get yourself addicted to!

Is AI ready for prime time?
Yes, absolutely - and nope, definitely not!
You have heard of hallucinations, and of the many attempts to avoid them. Science says they are an integral part of LLMs - it’s how LLMs work. They have no understanding of being right or wrong. Hallucination lies in the eye of the beholder.
Notice that in areas where you are not an expert, you are quite pleased with AI output? Could it simply be the halo effect? How come it sounds so stupid in areas of your expertise? My favorite take is that ‘LLMs hallucinate 100% of the time; sometimes they just happen to be correct.’
OpenAI confirms this in a recent study, citing hallucination rates of up to 30% of answers. With every output you get, ask yourself: is this real, or just a confident guess? Keep it with Ronald Reagan: trust, but verify! Install that thinking in your employees!
Coding is a lot cheaper now, right?
Software development changes with AI - undoubtedly. Is it the magic pill that removes the need for developers entirely, letting you vibe-code your enterprise software on the go? Clearly not. Currently it does two things: First, it increases the speed of actually writing the code - especially the boring, repetitive parts. Second, it is great for prototyping.
The overall effect is a net improvement. But it takes investment in many areas to keep it sustainable. If your original code is broken, or your development process is, you are waiting for a (potentially catastrophic) accident to happen. Such accidents did happen, and they will happen going forward. Ask Amazon.

It does not relieve the software engineer from thinking about or architecting the solution - although some would claim it does.
Vibe-coding anything more than a simple prototype, you are bound to introduce risks that you need to be aware of and mitigate: performance, security, maintenance, reliability, quality, and scalability issues.
Software created without expertise and strict human oversight is a ticking time-bomb.
So how should you treat this 24/7 productive genius that you still have to babysit?
AI is here to stay. Make sure you keep on learning and exploring. Discard the noise, not the technology itself.
I highly recommend trying out the new possibilities you’re seeing, while assessing the risks. Apply AI where the risk is low or the negative impact is reversible. Figure out where it is useful and where it is not. As the need for precision goes up, the utility of AI typically goes down.
Automate the 20-30% of repetitive, manual, low-value-add work - and take it from there.
Make sure you understand what happens to the data you give AI access to!
But: Use it responsibly!
Excessive use of AI literally makes you dumber! Anthropic’s own research points to the negative impact of AI use. Don’t fall into the trap of letting AI do everything. There is strong consensus that outsourcing your thinking to AI atrophies your brain. People nowadays go to the gym to stay physically fit in a time when machines do all the hard work - treat your mind the same way, and your brain will thank you. Knowing you just need to “press the magic button” to write that article/email/function for you has enormous addictive potential. Ever typed “3x12” on a calculator? You know what I mean.

Evaluate the downstream impact. Before deploying an AI tool for a critical process, ask yourself: “Would this still make sense if it cost 5x as much? How disruptive would it be to stop using it 6 months from now?” Install that thinking in your employees!
Don’t let AI run your company. Don’t let AI “run” anything. Good pilots are always ahead of the autopilot. When flying an approach, they expect the next turn - they don’t get surprised by it. Make sure you handle your business the same way!
And don’t let it run your personal relationships! Sycophantic AI is a real issue when used as a sounding board for personal matters. It tells you what you want to hear - not what you need to hear.
What’s FL3XX’s take on AI?
We have several AI features live in our software, following the risk paradigm described above. Right now we deploy AI in risk-tolerant areas: editing templates, or automatically processing thousands of plain-text email quote requests.
Many additional “AI features” are in the works. AI will clearly revolutionize the way everyone in BizAv interacts. However, we will adopt it rooted in a bulletproof data-privacy posture with appropriate risk management, honoring our ISO 27001 and SOC 2 certifications.
FL3XX customers can rely on us to do the heavy lifting while keeping their data safe and their businesses compliant. If you have questions about AI, do reach out to me anytime!