Part 1/5. The illusion of agentic AI – What nobody wants to tell you
Agentic AI, the AI that can autonomously execute actions on the web, is being framed as the next great technological shift. OpenAI’s Operator, Google’s Gemini advancements, and Anthropic’s Claude are all converging toward the same vision: AI that doesn’t just generate content but acts on our behalf, booking flights, making purchases, and navigating websites just like a human.
But there’s a major problem with this narrative:
- We’ve seen this story before. Automation isn’t new, and many so-called breakthroughs are just repackaged versions of existing technology.
- Businesses and platforms may resist AI agents rather than embrace them. The web isn’t designed for autonomous AI interactions, and we’re already seeing platforms block AI-driven access.
- Scaling Agentic AI is a logistical and ethical nightmare. AI executing real-world transactions in an unstructured environment raises security risks, regulatory challenges, and massive liability concerns.
So instead of assuming Agentic AI is the future, let’s break it down, challenge the claims being made, and figure out what’s actually in it for us.
1. The automation myth: agentic AI vs. what already exists
The core claim of Agentic AI is that it can autonomously take actions on behalf of a user. But this isn’t a new concept; it’s just a more advanced version of existing technologies:
- RPA (Robotic Process Automation) → UIPath, Automation Anywhere, and Blue Prism have been automating workflows for years, including clicking on interfaces and executing structured tasks.
- Web Scraping + Automation → Selenium, Puppeteer, and PhantomJS have long allowed automation of website interactions.
- Conversational AI + APIs → Alexa, Siri, and Google Assistant can already execute structured tasks within predefined systems.
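The brittleness of that earlier generation is easy to demonstrate. Below is a toy sketch (not real RPA or Selenium code; the page structure and selectors are invented) of a hardcoded workflow that halts the moment the page deviates from the script:

```python
# Toy model of scripted (RPA-style) automation: a fixed sequence of
# selectors executed against a "page". Any deviation halts the run.

def run_scripted_flow(page: dict, steps: list[str]) -> list[str]:
    """Click each selector in order; stop at the first one that is missing."""
    clicked = []
    for selector in steps:
        if selector not in page:
            # Classic RPA failure mode: the page changed, the script stops.
            raise RuntimeError(f"selector not found: {selector}")
        clicked.append(selector)
    return clicked

# A page that matches the script works fine...
page_v1 = {"#search": 1, "#flight-row-3": 1, "#book-now": 1}
print(run_scripted_flow(page_v1, ["#search", "#flight-row-3", "#book-now"]))

# ...but a cosmetic redesign (the "book now" button was renamed) breaks it:
# run_scripted_flow(page_v2, ...) would raise RuntimeError.
page_v2 = {"#search": 1, "#flight-row-3": 1, "#reserve": 1}
```

This is exactly the failure mode the GenAI agents claim to escape: where the script above stops, an agent is supposed to look at the page and improvise.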
So what’s new?
This time, what’s being automated isn’t a hardcoded process that stops as soon as anything unscripted happens: a GenAI agent operates on its own and can keep working when it faces complexity. Proponents argue that LLM-powered agents don’t just follow scripts but can reason dynamically before executing actions. However, this assumption raises critical flaws:
- LLMs lack true reliability. They don’t follow strict logic like RPA—they are stochastic, meaning their outputs vary unpredictably.
- There’s no error correction mechanism. If an AI agent makes a bad decision (e.g., books the wrong flight, transfers money incorrectly), who is liable?
- AI hallucinations are a real risk. In structured automation, errors are predictable. In LLM-driven action execution, they are not.
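The stochasticity point can be made concrete with a toy sampler. The "model" here is just a made-up probability table, not a real LLM: greedy decoding is reproducible, while temperature-style sampling occasionally picks a low-probability action.

```python
import random

# Toy next-action distribution for the request "book the 9am flight".
ACTIONS = ["book_9am", "book_9pm", "book_refundable", "do_nothing"]
PROBS   = [0.70,       0.15,       0.10,              0.05]

def greedy(actions, probs):
    """Deterministic: always picks the highest-probability action."""
    return actions[probs.index(max(probs))]

def sample(actions, probs, rng):
    """Stochastic: occasionally picks a low-probability action."""
    return rng.choices(actions, weights=probs, k=1)[0]

print(greedy(ACTIONS, PROBS))  # always "book_9am"

rng = random.Random(0)
# Over many sampled runs, more than one distinct action shows up.
print({sample(ACTIONS, PROBS, rng) for _ in range(100)})
```

Strict-logic automation behaves like `greedy`; an LLM-driven agent behaves more like `sample`, which is fine for drafting text and alarming for booking flights.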
At best, Agentic AI is an evolution of automation, not a revolutionary breakthrough. At worst, it’s a high-risk black box that lacks the precision required for real-world execution. You’re handing the driver’s seat to an AI agent already known for its glitches and its “very personal” code of conduct whenever it doesn’t understand what a website expects of it.
2. The adoption problem: why businesses will resist AI agents
Even if Agentic AI worked perfectly—and it may get there faster than we anticipate—there’s another problem: businesses may not want AI agents automating interactions on their platforms. Right now, the assumption is that companies will integrate Agentic AI into their workflows—but will they really?
Let’s look at three major barriers to adoption:
Barrier #1: websites & platforms will block AI agents
Agentic AI assumes that AI clicking on a website is equivalent to a human doing the same action. But websites can detect and block bot activity, and they already do:
- CAPTCHAs exist for a reason. If websites can detect non-human behavior, AI agents will need constant workarounds.
- Many companies already block web scrapers. Google, Amazon, and LinkedIn aggressively prevent automated bot activity on their platforms.
- Even OpenAI’s own website blocks Operator. If the company developing Agentic AI isn’t allowing it on their site, what does that say about its viability?
Let’s make it clear: the web is adversarial—AI agents will be seen as threats, not welcomed as useful automation tools. Distinguishing agentic AI traffic from ordinary data scraping would require dedicated access gates, with all the challenges that come with them (scraping and customer data, for instance, are already digital war zones). There is a world where websites talk directly to personal-assistant-grade agentic AIs, and even that wouldn’t help with the other barriers.
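This blocking is not hypothetical; much of it is just a robots.txt policy plus server-side enforcement. Python’s standard library shows the mechanics (the rules below are an illustrative policy I made up, not any real site’s file):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt: browsers are admitted, an AI crawler is banned.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "https://example.com/checkout"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.com/checkout"))  # True
```

robots.txt is only the polite layer; CAPTCHAs, fingerprinting, and rate limiting enforce the same policy against agents that ignore it.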
Barrier #2: security & liability risks make AI execution untrustworthy
Let’s assume AI agents do gain access to web platforms. The next issue? Trust and liability.
Imagine an AI agent:
- Books your vacation. → Great, but what if it chooses non-refundable flights?
- Orders groceries for you. → What if it overorders by mistake?
- Manages enterprise workflows. → What happens when it automates incorrect financial transactions?
Legal and security concerns businesses will face are immense:
- Who is accountable if an AI agent makes a mistake? If an AI places the wrong order or leaks private data, who pays for it?
- How do businesses prevent AI from making unauthorized decisions? Giving AI full execution power introduces fraud and compliance risks.
- How do companies regulate AI in high-risk industries? In finance, law, and healthcare, autonomous execution is highly restricted by regulations.
The legal dimension will turn this into a complete mess—which is essentially what OpenAI itself writes in its publication “Practices for Governing Agentic AI Systems“. Few businesses are ready to trust AI with high-stakes transactions—and many may actively block agentic AI to avoid liability.
Barrier #3: companies will fragment AI instead of centralizing it
OpenAI’s stated vision is that Agentic AI will evolve into a centralized assistant handling all user needs—or at least it leaves us with that feeling, since the Operator demo focuses not on internal tasks (manipulating data and operations within a proprietary platform) but on external ones (interacting with outside data and websites). In reality, businesses are likely to compete for control over AI interactions:
- Amazon won’t let OpenAI’s Operator handle its checkout process. It will push users toward its own AI (e.g., Alexa-powered shopping).
- Google won’t let Microsoft’s AI agents take over search interactions.
- Apple won’t allow third-party AI to dominate iPhone automation.
This means that instead of one AI to rule them all, we’ll likely see fragmented AI ecosystems, each locked into a proprietary environment. Amazon would go with Anthropic’s Claude powering Alexa for easy shopping, building an even more compelling shopping experience (see below for more details). Google will leverage Gemini in the same fashion to facilitate search and advertising, and Microsoft will turn ChatGPT into its new Office Clippy before making it a genuinely useful productivity tool.
Bottom line: AI agents won’t be universal—they will be walled off by competing companies intent on controlling user interactions. And companies without such capabilities will have to turn to one of those three, or go with open-source solutions like Mistral and LLaMA to build their own private productivity and customer-experience partners.
3. What should businesses do now?
Agentic AI isn’t the next great leap forward—it’s a repackaged automation trend. Instead of blindly adopting AI agents, businesses should focus on practical AI integration, starting with the basics on one hand and the data foundations on the other. The real value of Agentic AI isn’t in autonomously interacting with external websites—it’s in internal enterprise environments, where:
- The data is structured and accessible.
- AI execution can be monitored and controlled.
- Security, compliance, and liability are managed in-house.
Instead of focusing on AI agents navigating the wild internet, businesses should focus on AI agents optimizing their own workflows:
Step 1: automate through APIs, let GenAI agents consolidate unstructured data and processes
Instead of trying to let AI navigate websites, businesses should use structured, API-based automation that provides reliability and control, and leverage GenAI for what it has so far proven effective at: data and process consolidation and content production. This will pay off once agentic AI is ready, because you will already be able to feed it reliable, refined business processes that it can leverage for additional efficiency.
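As a sketch of what structured, API-based automation buys you—every name here (the SKU, prices, and the implied /orders endpoint) is hypothetical—a typed request can be validated and bounded before anything executes, a guarantee a click-simulating agent cannot offer:

```python
from dataclasses import dataclass

@dataclass
class OrderRequest:
    sku: str
    quantity: int
    max_total_eur: float  # hard spending cap enforced before execution

UNIT_PRICE_EUR = {"SKU-123": 40.0}  # hypothetical catalogue

def validate_order(req: OrderRequest) -> float:
    """Return the order total only if the request passes every business rule."""
    if req.sku not in UNIT_PRICE_EUR:
        raise ValueError(f"unknown SKU: {req.sku}")
    if req.quantity <= 0:
        raise ValueError("quantity must be positive")
    total = UNIT_PRICE_EUR[req.sku] * req.quantity
    if total > req.max_total_eur:
        raise ValueError(f"total {total} exceeds cap {req.max_total_eur}")
    # Only now would the request be POSTed to the (hypothetical) orders API.
    return total

print(validate_order(OrderRequest("SKU-123", 2, 100.0)))  # 80.0, within the cap
```

The structure is the point: the failure modes are enumerated and deterministic, unlike an agent mis-clicking through a checkout page.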
Step 2: re-learn your own processes to make them AI-ready, be it for GenAI or Agentic AI
AI should recommend actions, not execute them autonomously without human oversight—especially in finance, healthcare, and critical business operations. For now, speed is the main challenge; you can see it in OpenAI’s Operator demo: human operators are still faster, since such models must be trained to become efficient. OpenAI, like its competitors, has released Operator with near-zero proficiency and speed, counting on its massive user base to train it. But for these agents to become faster and more relevant, they need to be fed your precise, structured, and complete decision-making trees. Leave even a small part undefined, and the consequences will be costly.
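The "recommend, don’t execute" rule can be sketched as a small approval gate. The risk tiers and action names here are invented for illustration:

```python
# Hypothetical risk policy: these actions always need a human in the loop.
HIGH_RISK = {"transfer_funds", "sign_contract", "delete_records"}

def handle(action: str, human_approved: bool = False) -> str:
    """Execute low-risk actions; high-risk ones only ever yield a
    recommendation unless a human has explicitly approved them."""
    if action in HIGH_RISK and not human_approved:
        return f"RECOMMEND: {action} (awaiting human approval)"
    return f"EXECUTED: {action}"

print(handle("draft_report"))                         # executes directly
print(handle("transfer_funds"))                       # only recommends
print(handle("transfer_funds", human_approved=True))  # executes after approval
```

The design choice is that approval is opt-in per action, never a global switch: the default path for anything high-stakes is a recommendation, which keeps liability with a human decision-maker.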
Step 3: expect AI fragmentation, not centralization
Businesses should prepare for competing AI ecosystems, in which Google, Amazon, OpenAI, and Apple each try to control AI-driven interactions. You will need to take a side—proprietary, open source, or a hybrid of the two—and plug your choice into your business processes so that the soon-to-be-ready agentic AI becomes an asset. You will also need to prepare for specialized AIs handling different expert tasks, which will require connection capabilities so they can talk to your central or core AI model.
See you next week for part 2 of this series, a dive into the opportunities and how to prepare for them.