The AI transformation narrative is solving a problem most enterprises don’t have and mislabelling the one they do.
Every CIO in Australia is being told the same story. Lean in to AI. Become AI-native. Reinvent and reimagine the business model. The consultants have decks. The vendors have platforms (and decks). The boards have anxiety.
But there’s a question nobody in the room is asking: reinvent what, exactly?
The business model that can’t be reinvented
Take banking. An Australian retail bank makes money by taking deposits, lending them out, and earning the spread. The Four Pillars policy prevents consolidation. APRA, ASIC, Basel III and the Responsible Lending obligations define the operating envelope. The home loan is still recognisably the same core product it was thirty years ago: debt secured against property, priced inside a regulated risk envelope. Most of the change has been in packaging, pricing, apps and digital distribution, not reinvention.
So when someone pitches “let’s reimagine your lending business with AI,” the honest response is: reimagine what? Dynamic pricing? Brokers already arbitrage marginal rate differentiation, and regulators will act the moment it looks like algorithmic discrimination. AI-driven credit decisioning? You still need to explain your decision to the regulator in deterministic, auditable terms. Your model either gets reduced to rules, or you’re carrying a parallel explainability system that costs more than the original process.
Open banking, embedded finance and neobanks are changing the distribution layer, not the product. The loan is still a loan. The spread is still the spread. And this isn’t confined to banking. Insurance, superannuation, wealth management, government: any sector where the product is defined by regulation and the competitive moat is trust faces the same constraint.
The business doesn’t need reinventing. The operation does.
The problem hiding in plain sight
The home loan that takes 22 days and 14 handoffs. The KYC remediation on spreadsheets. The compliance attestation in someone’s inbox. The contractor onboarding in the ops centre that still involves a printed form. These are the problems that actually cost money, create risk, and drive customer attrition. They don’t need intelligence. They need orchestration: right steps, right order, right people, full audit trail. Deterministic. Governed. Predictable.
I’ve seen this done well. A major Australian bank transformed its institutional lending pricing from a manual process into a real-time platform connecting pricing engines, credit risk models, CRM, and core banking in a single workflow. Automated approval matrices routed decisions based on delegation limits. Pricing went from months to instantaneous. NIM uplift paid for the programme within three months.
No Gen AI. No agents (of the modern AI variety). Just well-defined processes, clean data integration, governed workflow, and deterministic rules doing exactly what they were designed to do. The business got the granular control it had in its Excel spreadsheets, with a production harness of checks and balances around it.
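To make “deterministic rules doing exactly what they were designed to do” concrete, here is a minimal sketch of a delegation-limit approval matrix. The thresholds and role names are illustrative assumptions, not the bank’s actual limits:

```python
# Hypothetical delegation matrix: (max_exposure, approver_role), checked in order.
# Every routing decision is a pure function of the input: same exposure in,
# same approver out, every time. That is the auditability the regulator wants.
DELEGATION_MATRIX = [
    (1_000_000, "relationship_manager"),
    (10_000_000, "credit_officer"),
    (100_000_000, "credit_committee"),
]

def route_approval(exposure: float) -> str:
    """Return the approver role for a given exposure; escalate past all limits."""
    for limit, role in DELEGATION_MATRIX:
        if exposure <= limit:
            return role
    return "board"
```

The whole point is that this logic is boring: it can be reviewed line by line, tested exhaustively, and explained to a regulator in one sentence.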
The postscript is instructive too. A coding error between two pricing systems caused over 2,000 customers to be charged double interest for years. It became a Royal Commission case study, and the regulator imposed a $7 million penalty. Of course, deterministic systems fail too. But imagine if that system had been non-deterministic, “learning from every transaction.” The error wouldn’t have been a traceable bug. It would have been a probabilistic drift nobody could pinpoint, explain to the regulator, or reliably fix. Deterministic failure is at least diagnosable. Non-deterministic failure may not be (for now; there’s a lot of work being done in this space).
The transformation budget goes to AI strategy programmes. The operational backlog stays in the queue. “We’re transforming” becomes a substitute for “we’re actually making things work.”
This is not an argument against AI. It’s an argument against mislabelling.
Where AI earns its place
An LLM that reads, classifies, and extracts from unstructured documents at scale does something a rules engine cannot. NextGen’s 2026 research found 82 percent of Australian lenders said AI should deliver the most value in document processing and workflow automation of repetitive tasks. That’s a real and valuable application of AI.
Fraud detection is another. In early 2026, CBA reported itself to police over fears that roughly a billion dollars in home loans may have been obtained using AI-generated fraudulent documents. The Penthouse Syndicate allegedly defrauded NAB of $150M. Fraudsters are now using AI to generate synthetic payslips, fabricated trading histories, and forged tax returns precise enough that traditional verification can’t catch them. Detecting these requires pattern recognition, adversarial modelling, and probabilistic reasoning. Rules don’t cut it.
And there are problems in the middle. Exception handling, anomaly detection, document triage. An AI layer sitting inside a governed workflow adds real value in these areas. Not by replacing the workflow, but by handling the parts that are too variable for static rules and linear processes.
The right architecture is hybrid: deterministic orchestration for the structured, auditable core, with AI applied where interpretation creates value.
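As a sketch of what that hybrid shape looks like: a fixed, audited sequence of steps, with the AI confined to one pluggable step. Everything here (the Workflow class, the step names, the stubbed classify_docs standing in for an LLM call) is illustrative, not a real system:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Workflow:
    """Deterministic orchestration: fixed steps, fixed order, full audit trail."""
    steps: list[tuple[str, Callable[[dict], dict]]]
    audit: list[str] = field(default_factory=list)

    def run(self, case: dict) -> dict:
        for name, step in self.steps:
            case = step(case)
            self.audit.append(f"{name}: ok")  # every step is logged
        return case

# The AI lives inside one step (document interpretation); the orchestration
# around it stays deterministic. classify_docs is a stand-in for a model call.
def classify_docs(case: dict) -> dict:
    case["doc_type"] = "payslip"  # placeholder for the model's output
    return case

def verify_identity(case: dict) -> dict:
    case["kyc"] = "passed"  # deterministic rule-based check
    return case

wf = Workflow(steps=[("classify", classify_docs), ("kyc", verify_identity)])
result = wf.run({"applicant": "A123"})
```

Swap the AI step for a better model next year and the governed core, the ordering, and the audit trail don’t move.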
The narrative hasn’t caught up
A case study by an emerging Australian AI platform describes deploying “AI agents” into loan lodgement and settlement. The agents, it says, handle core steps alongside the existing team, “learning from every transaction, surfacing exceptions faster, and reducing manual rework.”
Loan lodgement and settlement is highly standardised in Australian financial services. The workflow is largely defined. The documents are predictable. Settlement runs through PEXA. Compliance checks are shaped by lender policy, aggregator requirements, insurer rules, and regulation. The unstructured data handling and exception surfacing? Useful. Probably the best application of AI in the entire lending lifecycle. But if those agents are non-deterministic and adapting their behaviour inside a settlement workflow, that should concern a compliance officer more than it impresses a CIO. What exactly is it learning to do differently? And who approved the change?
The product underneath may be perfectly sound. A hybrid architecture, deterministic workflow with AI handling document processing and exception detection, is exactly the right approach for this problem. But the positioning tells you everything about where the market’s head is at. I’ve sat in rooms where a CIO was pitched an “AI transformation” that was a workflow automation project with an LLM bolted on for document reading. The work that creates value gets wrapped in transformation language because transformation language is what gets funded.
The headcount reality
A bank running a thousand operations staff at fully loaded cost spends north of $90 million a year. If an AI-powered architecture can remove a significant percentage of operational effort, the business case is obvious. The question becomes execution speed without triggering regulatory or operational risk.
The value is not only fewer people. It is fewer handoffs, fewer errors, fewer escalations, and less rework.
That’s why taxonomy matters. The enterprises that build with honest labels will spend less getting to the same destination. A $10 million “AI-first” transformation programme that reduces operational effort by 20 percent may still look attractive. But if a $3 million workflow-led programme with targeted AI inside the governed process can remove the same friction, the ROI profile changes completely. And the ongoing operating cost, token spend, governance overhead, and AI risk management compound the difference year after year.
The cost question that isn’t settled
Platform vendors selling deterministic engines are leaning hard on token economics. Reasoning tokens are about to get expensive. The argument is correct today. But it describes a moment, not a structure.
Token usage per iteration is increasing, but iteration count is falling as models improve. The model that rebuilds context from scratch every call is increasingly a 2024 limitation, not a permanent architectural constraint.
Building enterprise architecture around today’s token pricing is like designing your web strategy around 2005 bandwidth costs. Any argument built entirely on cost will have a short shelf life.
The irony is that a multi-model future makes the taxonomy question harder. Now you’re choosing between frontier models for complex reasoning, domain-specific models for vertical tasks, small models for high-volume classification, and deterministic rules for governed routing, all within the same workflow. Deciding which step gets which tool is itself an orchestration problem.
Model routing is process engineering applied to the AI layer.
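A routing table is the simplest way to picture this. The step names and tool tiers below are illustrative assumptions; the point is that the mapping itself is deterministic, with an explicit fallback:

```python
# Hypothetical routing table: each workflow step is bound to the cheapest
# tool that can handle it. Names are illustrative, not a product taxonomy.
ROUTES = {
    "compliance_check": "rules_engine",       # governed, must be deterministic
    "doc_classification": "small_model",      # high volume, low complexity
    "fraud_triage": "domain_model",           # vertical, adversarial
    "exception_reasoning": "frontier_model",  # rare, complex
}

def route(step: str) -> str:
    # Unknown steps fall back to the deterministic engine, never to an LLM.
    return ROUTES.get(step, "rules_engine")
```

The routing decision is itself rules-based and auditable, even when some of the destinations are probabilistic.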
Build the foundation. Build it now.
If you were in enterprise technology in the 2010s, you’ll remember “two-speed architecture” to enable the Digital Enterprise. Keep legacy systems stable while building fast digital capability alongside them. In practice, integration became the bottleneck, and many organisations ended up with two messes instead of one. The next correction was Agile everywhere, whether the organisation was ready for it or not.
The hybrid AI architecture risks the same trap if you treat “deterministic foundation” and “AI layer” as separate initiatives. The answer is parallel tracks: define processes, govern data, and run AI experiments inside the same controlled architecture from day one.
You can always hand an AI agent a workflow that lives in three people’s heads and hope for the best. But most of the effort will go into compensating for the mess, not creating value.
An “AI-first” programme that skips the process engineering and goes straight to agent deployment is building on sand. The demo will impress. But the moment you scale it across products, business units, and regulatory jurisdictions, the missing foundations surface as exceptions, escalations, rework, and audit failures.
I saw this recently. A leading e-commerce conglomerate wanted AI agents in infrastructure incident management. Hours-long resolution times, siloed teams, tribal knowledge. We built the orchestration layer first and designed the AI capability as a pluggable component within the governed workflow. Resolution times dropped from hours to minutes. The AI components needed more design time, but the foundation was ready to evolve without re-platforming, and it delivered real business value.
And the second-order effect mattered just as much. Structuring the process forced conversations between teams that had been operating in silos for years. The real shift was organisational, with technology as the catalyst.
The foundation work doesn’t have to be slow. AI-assisted workflow modelling combined with process mining can accelerate it. But the foundational work still has to be done.
Close the gap
The organisations that win will not be the ones that call themselves AI-native the loudest. They will not be the ones clinging to deterministic execution as doctrine either. They will be the ones disciplined enough to know which problems need intelligence, which need orchestration, and which foundations need to be built so the architecture can evolve as the world moves.
Further reading
Mi-3: AI Category Error — Hyman says firms misprice inference, stall execution (March 2026)
Oracle AI Blog: Agents vs. Workflows — Where does the ROI actually live? (March 2026)
UNSW Newsroom: Why CBA’s $1 billion suspected loan fraud should change how we bank (March 2026)
Ritwik Singh is the founder of 36ARC, a Sydney-based advisory practice helping enterprise leaders and technology vendors make better automation, orchestration, and platform decisions across the ANZ market.
