
The Automation Illusion – When Low-Code Breeds High-Risk

Updated: Nov 25

Co-written with Arthur Prévot

"It's very, very difficult — even for people working on the products — to know the difference between what we say in a demo, what's on a road map, and what's actually in production. It's a full-time job just figuring that out." 
A Salesforce employee, as reported by Business Insider reporter Ashley Stewart.


The New Gold Rush


Every few years, technology rediscovers its own illusion of speed.


Today, it’s AI-enabled automations. Tools like n8n, Make, Pipedream, Power Automate, and Zapier promise to let anyone “build their own AI system in a weekend” and become an AI-first company.


What once needed an engineering team now takes drag-and-drop boxes, a few API keys, and a viral LinkedIn tutorial. Low-code tools hide complexity so well that people confuse usability with understanding. Overnight, product managers have become automation architects, designers are wiring GPT prompts into Slack workflows, and managers are dreaming of starting their own businesses on the side.


It feels revolutionary until it starts to break.


What’s happening underneath is a modern gold rush: teams are mining automation for quick efficiency gains without thinking about scale, safety, or ownership. Nobody documents, few monitor, and almost no one considers what happens when a critical workflow fails.


This trend is driven by a new generation of “AI automation experts” with entrepreneurial ambitions but without a product engineering mindset. Their credentials are often a few demos and a confident tone on social media. When something goes wrong, whether a misfired webhook, an unhandled edge-case loop, or a hallucinated response, the abstraction collapses. You quickly learn that a visually appealing UI doesn’t eliminate architectural risk; it only hides it.


Even experienced professionals are falling into this illusion. Ironically, some of the loudest voices driving this hype once came from the systems designed to contain it. Former product leaders are pitching “agentic platforms” with no safety layers. Former security engineers now help teams bypass policy bottlenecks. Speed wins, governance waits.


Every month, we hear of a new agentic platform that major labs are building to amplify speed and make automation easier to create. What they don’t provide are the habits, guardrails, and discipline that real organizational maturity demands.


In the pursuit of velocity, we’ve replaced prudence with plug-ins.


The Shadow Infrastructure We Are Building


Every business workflow can now be a node; every node connects to one or more APIs, and suddenly you’ve built a second IT stack without realizing it. This shadow infrastructure runs reminders, enriches leads, triggers approvals, and updates CRMs.


None of it appears on an architecture diagram.

The problem isn’t intent, it’s opacity.


A single credential rotation can break five workflows. Rate limits on API calls can knock out an entire SDR pipeline. A GPT step hallucinating a field name silently corrupts a database.
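
That last failure mode is the most insidious: nothing crashes, the data just quietly goes wrong. The cheapest defence is to validate whatever the model returns against the fields you actually own before anything is written. A minimal sketch in Python follows; the field schema and the example payload are hypothetical, and no real CRM client is involved.

```python
import json

# Fields this workflow is allowed to touch (hypothetical schema).
ALLOWED_FIELDS = {"company_name", "industry", "employee_count", "icp_score"}

def validate_llm_update(llm_output: str) -> dict:
    """Parse the model's JSON and reject anything outside the known schema."""
    try:
        proposed = json.loads(llm_output)
    except json.JSONDecodeError as exc:
        raise ValueError("Model did not return valid JSON; refusing to write.") from exc

    if not isinstance(proposed, dict):
        raise ValueError("Expected a JSON object mapping field -> value.")

    unknown = set(proposed) - ALLOWED_FIELDS
    if unknown:
        # A hallucinated field name stops here instead of silently becoming
        # a failed upsert, a phantom column, or corrupted rows.
        raise ValueError(f"Unknown fields from model: {sorted(unknown)}")
    return proposed

# Only what survives validation is ever passed to the database step.
clean = validate_llm_update('{"industry": "manufacturing", "icp_score": 82}')
```
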


There’s no versioning.

No ownership.

No disaster recovery plan.


We refer to these tools as “no-code” or “low-code,” but the real issue isn’t code; it’s control, or the lack thereof.


The Story: How a 300-Person Company Tried to “Do AI for Sales” — and Discovered the Five Tiers (and Costs) of Automation


AcumenSoft is a 300-person B2B SaaS company selling workflow software to industrial clients. Great product. Moderate pipeline. Sales cycles are dragging.

Leadership wants to “use AI to accelerate GTM” — a slogan that means everything and nothing.


They form an AI task force: one sales ops director, two SDRs, a data engineer, and an enthusiastic consultant who once built a promising GPT demo. Their mandate: “Use AI to increase pipeline velocity and improve outbound.”


No budget assumptions, no governance, no architecture. 

Just ambition.


Tier 1: Quick Automations (Cheap, Fast, Fragile)


Operations cost: €300–€2,000/month

Tools: n8n, Make, Zapier


The team starts with the obvious: throw together automations.

  • list scraping flows

  • enrichment calls

  • GPT-generated scripts for outreach

  • Slack alerts

  • ICP matching based on keywords


It works great for a few weeks. Then it implodes.

  • An API key leaks.

  • A broken node sends SDRs 200 false ICP alerts.

  • A GPT workflow uses the wrong company name in an email.

  • Nobody knows which automation depends on which.


The CTO bans half of it. Shadow infrastructure has entered the building.
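
What stings is how little protection these flows needed. A dedup key and a rate cap in front of the Slack node would have stopped the 200-alert flood. The sketch below is illustrative only: it assumes a standard Slack incoming webhook and an invented alert shape.

```python
import json
import time
import urllib.request

MAX_ALERTS_PER_HOUR = 20            # crude circuit breaker
_sent_timestamps: list[float] = []  # alerts sent in the last hour
_seen_keys: set[str] = set()        # dedup keys already alerted on

def send_icp_alert(webhook_url: str, company: str, score: int) -> bool:
    """Post an ICP alert to Slack unless it is a duplicate or the flow is flooding."""
    key = f"{company}:{score}"
    now = time.time()
    _sent_timestamps[:] = [t for t in _sent_timestamps if now - t < 3600]

    if key in _seen_keys or len(_sent_timestamps) >= MAX_ALERTS_PER_HOUR:
        return False  # dropped: a broken upstream node no longer pages the whole SDR team

    body = json.dumps({"text": f"New ICP match: {company} (score {score})"}).encode()
    request = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)

    _seen_keys.add(key)
    _sent_timestamps.append(now)
    return True
```
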


"Over 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value, or inadequate risk controls.”  



Tier 2: “Professionalizing” AI via Hyperscalers


Operations cost: €5,000–€20,000/month

Tools: Azure OpenAI, Bedrock, Vertex, Logic Apps, Lambdas


To regain control, engineering moves everything into Azure; after all, the company has been using Microsoft for decades:

  • Azure Functions

  • Logic Apps

  • OpenAI endpoint behind Azure firewall

  • Key Vault for secrets

  • Power Automate for simple flows


Security is finally happy. Engineering feels it has guardrails again. But the sales team feels nothing. The UX is ugly, iteration is slow, and every workflow requires an engineer.
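
To see why iteration slows down, here is roughly what a single GPT step looks like once it moves behind Key Vault and a private Azure OpenAI endpoint. This is a sketch, not AcumenSoft's actual code: it assumes the azure-identity, azure-keyvault-secrets, and openai Python packages, and the vault URL, endpoint, deployment name, and API version are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from openai import AzureOpenAI

# Secrets now live in Key Vault instead of a workflow's JSON export.
credential = DefaultAzureCredential()
vault = SecretClient(
    vault_url="https://acumensoft-kv.vault.azure.net",  # hypothetical vault
    credential=credential,
)
api_key = vault.get_secret("openai-api-key").value

client = AzureOpenAI(
    api_key=api_key,
    api_version="2024-02-01",                                     # assumed version
    azure_endpoint="https://acumensoft-openai.openai.azure.com",  # hypothetical
)

def draft_outreach(account_summary: str) -> str:
    """The same prompt as before, now behind the corporate firewall."""
    response = client.chat.completions.create(
        model="gpt-4o-outreach",  # deployment name, not a raw model string
        messages=[
            {"role": "system", "content": "Draft a short, factual outreach email."},
            {"role": "user", "content": account_summary},
        ],
    )
    return response.choices[0].message.content
```

Safer, certainly. But every tweak to that prompt is now a pull request, which is exactly why the sales team feels nothing.
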


AI stops feeling magical. The solution has become a ticketing system.


Tier 3: Realizing the URGENT Need for Governance


Operations cost: €10,000–€30,000/month (incl. onboarding)

Tools: Glean, Dust.tt, Scale, Cohere Coral


The crisis comes when a strategic prospect says:

“Your outbound emails reference features you deprecated two years ago.”


Investigation of the hyperscaler solution reveals:

  • outbound flows pulling from outdated Confluence pages

  • missing access control

  • reps dragging PDFs into tools with no permission model

  • GPT hallucinating numbers where data is missing


This is the first “AI safety” moment.

Leadership finally admits: 

“We don’t just need speed. We need governance.”


They adopt an enterprise platform for:

  • document-level permissions

  • centralized knowledge

  • role-based access control (RBAC)

  • audit trails

  • evaluations

  • better prompting layers
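
Conceptually, the change the platform brings is small but decisive: every retrieval now passes through a permission and freshness filter, and every decision is logged. A minimal sketch of that check, with a hypothetical Document shape and role labels:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("retrieval.audit")

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset
    deprecated: bool = False  # e.g. a Confluence page about a retired feature

def retrieve_for_user(user: str, role: str, candidates: list) -> list:
    """Filter retrieved documents by role and freshness, and log the decision."""
    visible = [
        doc for doc in candidates
        if role in doc.allowed_roles and not doc.deprecated
    ]
    audit.info(
        "user=%s role=%s candidates=%d served=%s",
        user, role, len(candidates), [doc.doc_id for doc in visible],
    )
    return visible  # only this subset ever reaches the model's context
```
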


For the first time, the outputs are reliable. Sales trusts the system again. But outbound still isn’t accelerating. New changes take months. 


Tier 4: The Agentic Mirage


Operations cost: €50,000–€120,000/month

Tools: Custom made-to-measure agentic architecture, vector DBs, planning modules, orchestrators


Leadership attends an AI conference in California, where every keynote claims:

“Autonomous agents will handle the entire SDR workflow.”


The task force is told:  “Build an agent that does research → enrichment → personalization → sequencing → CRM updates → meeting booking.”

They assemble:

  • multi-agent debate

  • planning loops

  • vector stores

  • tool interfaces

  • HubSpot write access

  • n8n orchestrators for environment control


And they realize the hard truth:


✅ Powerful 

but

❌ Expensive 

❌ Hard to monitor 

❌ Many failure modes 

❌ Unpredictable behavior 

❌ Impossible for sales to debug


One hallucinated tool call creates 67 duplicate leads in HubSpot. Another tries to update pricing with non-existent values.
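
The duplicate-lead incident is a textbook missing-idempotency bug: the agent retried a tool call it believed had failed, and nothing downstream asked whether the lead already existed. The guard is small. The sketch below deliberately uses an abstract CRM interface rather than the real HubSpot SDK; the method names and property list are hypothetical.

```python
from typing import Optional, Protocol

class CRMClient(Protocol):
    """Abstract stand-in for whatever CRM SDK the orchestrator wraps."""
    def find_contact_by_email(self, email: str) -> Optional[dict]: ...
    def create_contact(self, properties: dict) -> dict: ...

ALLOWED_PROPERTIES = {"email", "firstname", "lastname", "company"}

def create_lead_once(crm: CRMClient, properties: dict) -> dict:
    """Create a contact only if it does not already exist, with a strict schema."""
    unknown = set(properties) - ALLOWED_PROPERTIES
    if unknown:
        raise ValueError(f"Agent proposed unknown properties: {sorted(unknown)}")

    email = properties.get("email")
    if not email:
        raise ValueError("Refusing to create a lead without an email as the dedup key.")

    existing = crm.find_contact_by_email(email)
    if existing is not None:
        return existing  # idempotent: the 2nd through 67th call changes nothing

    return crm.create_contact(properties)
```

The same idea, applied to the pricing-update tool, is what turns unpredictable behavior into a bounded blast radius.
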


The CTO calls for a “temporary pause,” which is usually executive code for: 

“We’ve gone too far.”


Tier 5: Vertical AI Tools (Pragmatic, Boring, Effective)


Operations cost: €3,000–€8,000/month

Tools: Not one brand — but a category: AI-powered GTM platforms with enrichment, personalization, sequencing, scoring, and compliance baked in. Usually runs as a SaaS (software as a service).


While leadership debates agentic architectures, an SDR asks the obvious:

“Why don’t we use a vertical tool that already does enrichment, scoring, and sequencing? We don’t need to build this.”


They test a vertical GTM platform.


Within 48 hours:

  • clean enrichment

  • accurate ICP scoring

  • personalized sequences

  • reliable data sync

  • tested safety rails

  • no prompting hell

  • no accidental mass email disasters

  • no late-night debugging


It solves the GTM problem without five layers of AI orchestration.

Leadership admits privately:

“We tried to be a robotics lab. We just needed better outreach.”


The CEO finally sees the pattern:


Not every AI challenge requires agents.

Not every problem requires custom architecture.

Sometimes the right answer is a domain-specific tool built for the job.


Lessons Learned (Cost + Sanity + Reality)





Most 300-person companies follow the same arc:


  • T1 is cheap but dangerous.

  • T2 is safe but slow.

  • T3 introduces governance.

  • T4 burns money and creates fragility if you’re not ready.

  • T5 solves commercial problems without illusions.


Most teams are now implementing AI automations without thinking about maintenance or dedicating resources to it. The work requires a product mindset that is too often missing.


The strategic error is thinking you must climb all five tiers.


AI maturity is not sophistication.

It’s choosing the lowest tier that reliably solves the business problem.


MIT Sloan Management Review + BCG: “The State of AI in Business” Reports (2018–2024)


These annual reports consistently show one strong pattern:


Companies that adopt “AI-as-a-service” (SaaS AI) tend to outperform companies that attempt to build custom AI solutions prematurely.


Specifically, the research highlights:

  • Firms that rely on domain-specific AI tools see faster deployment and higher ROI.

  • Custom AI efforts often fail due to complexity, misaligned skills, and inability to operationalize.

  • High-performing companies typically do not build everything in-house — they use platforms or vertical tools, then integrate selectively.

  • “AI builders” without strong governance had the highest failure rate.


From Illusion to Discipline — The AI Pathway Approach


AI Pathway frames this not as a tooling choice, but as a discipline:


  • Start with intent (vision + experimentation).

  • Align on capability (enablers).

  • Choose the lowest tier that reliably solves the problem.

  • Layer in evals, monitoring, and governance.

  • Scale only when your organization is actually ready.


The illusion is thinking the goal is sophistication.

The reality is that the goal is repeatability and reliability. 


Conclusions

When we were writing the AI Pathway book, we thought the complexity of AI adoption would come from the models: price vs. quality vs. availability vs. sovereignty, and more. Now we see that the real complexity lies in the myriad of automations we build around them.


The future of AI isn’t just model safety; it’s workflow safety. Not just intelligence, but control. Not just an army of agents, but accountability.


And the companies that succeed will be the ones that resist the illusion of height and choose the tier that meets their readiness, their risk tolerance, and their reality.
