The pitch is compelling: upload your documents, AI does your books, you never talk to an accountant again.
It's also the wrong model for any product that wants to survive contact with regulators, tax authorities, and professional liability law.
We build AI-powered accounting infrastructure. We believe deeply in what AI can do for accounting. But we also believe — from experience running an accounting practice, not from theory — that removing the accountant from the pipeline is a mistake that will cost companies and their users dearly.
Here's why.
The liability problem nobody talks about
When someone files a tax return, someone carries liability for its accuracy. In most jurisdictions, that's the person who signs it — either the taxpayer or their authorised agent.
If your AI product produces a tax return and the user files it themselves, the user carries the liability. If the return is wrong — and AI returns will sometimes be wrong — the user faces penalties, interest, and potential audit. They trusted your product. Your product got it wrong. They pay the price.
If your AI product files the return on the user's behalf, your company carries the liability. Do you have professional indemnity insurance? Are you registered as a tax agent in every jurisdiction you serve? Do you have the regulatory authorisation to file returns? Most AI tax products don't. They exist in a grey area where they produce the return but don't formally file it, pushing liability back to the user while marketing themselves as "doing your taxes."
An accountant in the loop solves this cleanly. A qualified, warranted accountant reviews the output, applies professional judgment, signs off, and files. They carry professional indemnity insurance. They're registered with the relevant authority. They're personally liable for errors in their professional capacity. This isn't overhead — it's the mechanism that makes the entire system trustworthy.
What AI gets wrong (and why it matters)
AI classification errors are usually small and correctable. A transaction coded to "office supplies" that should have been "software subscriptions" doesn't change the tax liability by much.
But some errors are not small:
VAT treatment errors. Classifying a zero-rated supply as exempt (or vice versa) can change VAT recovery rights across an entire return. In the EU, this can mean five-figure corrections.
Personal vs business misclassification. AI is bad at knowing whether a purchase is personal or business. A meal, a travel booking, a piece of clothing: context matters, and AI doesn't always have it. If personal expenses are claimed as business deductions, the return understates tax, and tax authorities penalise that whether or not the error was intentional.
Jurisdiction errors. For users who operate across borders — which is increasingly common — applying the wrong country's rules to a transaction can cascade through the entire return. VAT place-of-supply rules are notoriously complex, and getting them wrong triggers corrections in multiple jurisdictions.
Timing errors. Revenue recognition, accrual vs cash basis, period-end cut-offs — these are judgment calls that AI makes based on patterns, not principles. An accountant applies the relevant accounting standard.
None of these errors are catastrophic individually. But they compound. And they're exactly the kind of errors that a trained accountant catches in review — because they've seen them before, in this specific jurisdiction, with this specific set of rules.
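To make the zero-rated vs exempt distinction concrete: both charge 0% VAT on the sale, but only zero-rating preserves the right to recover input VAT on purchases. A simplified sketch, with an assumed 18% standard rate and illustrative figures rather than any specific jurisdiction's rules:

```python
# Illustrative only: simplified VAT recovery arithmetic showing why
# misclassifying a zero-rated supply as exempt flips an entire return.
# Zero-rated supplies keep the right to recover input VAT; exempt
# supplies lose it (partial-exemption rules aside).

STANDARD_RATE = 0.18  # assumed standard VAT rate on purchases

def vat_position(sales, purchases, output_rate, inputs_recoverable):
    """Net VAT payable (negative = refund due) for one period."""
    output_vat = sales * output_rate
    input_vat = purchases * STANDARD_RATE
    recovered = input_vat if inputs_recoverable else 0.0
    return round(output_vat - recovered, 2)

# Same business, same period: 100k of 0%-rated sales, 40k of
# standard-rated purchases. Only the classification differs.
zero_rated = vat_position(100_000, 40_000, 0.0, inputs_recoverable=True)
exempt = vat_position(100_000, 40_000, 0.0, inputs_recoverable=False)
```

The same 0% on the invoice produces a 7,200 refund under one classification and nothing under the other, which is why this single coding decision can cascade into large corrections.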
The review rate decay curve
Here's what we've observed: when an AI system starts processing a new client's transactions, the accountant review catches issues in roughly 40% of cases. The AI is learning the client's patterns — their suppliers, their revenue streams, their expense categories.
By month three, the review rate drops to around 15%. The AI has learned the patterns. Most transactions are classified correctly.
By month six, the review rate is below 5%. The accountant is catching edge cases, unusual transactions, and year-end adjustments — not routine classification errors.
This curve is the key insight. The accountant isn't doing the work — the AI is. The accountant is providing quality assurance that decays in cost over time but never reaches zero. Because there will always be transactions that require human judgment: a client's first international sale, a new type of expense, a regulatory change that the AI hasn't been updated for.
The cost of the accountant review drops dramatically over time. The value — liability coverage, error correction, regulatory compliance — remains constant.
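One way to picture this curve is a toy exponential-decay model. The floor and decay constant below are assumptions chosen to roughly reproduce the 40% / ~15% / <5% figures above; they are not measured parameters.

```python
import math

# Toy model of the accountant review-rate decay described above.
# FLOOR, R0 and K are illustrative assumptions, not real parameters.

FLOOR = 0.01   # review never reaches zero: edge cases always remain
R0 = 0.40      # initial share of transactions flagged for review
K = 0.40       # assumed monthly decay constant

def review_rate(month: float) -> float:
    """Share of transactions an accountant reviews at a given month."""
    return FLOOR + (R0 - FLOOR) * math.exp(-K * month)

for m in (0, 3, 6):
    print(f"month {m}: {review_rate(m):.0%}")
```

The exact shape matters less than the two properties the text emphasises: the cost of review falls quickly, and the floor is never zero.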
What "accountant in the loop" actually means
It doesn't mean an accountant does the books. It means:
- AI classifies transactions. Based on the jurisdiction's chart of accounts, VAT rules, and deductibility criteria.
- Deterministic engines compute. Tax liability, VAT returns, SSC, income tax — all calculated by rules engines built on legislation. No LLM in the computation.
- An accountant reviews the output. A qualified local accountant looks at the classified transactions, the computed return, and the supporting documents. They check for misclassifications, missing documents, unusual patterns, and compliance issues.
- The accountant signs off and files. They carry professional liability. If the return is wrong, they're accountable — professionally and legally.
This model gives users the speed and cost efficiency of AI with the trust and liability coverage of a human professional. It gives platforms the ability to offer accounting without carrying the liability themselves. And it gives regulators what they've always wanted: a qualified person standing behind every filing.
The regulatory direction is clear
Regulators around the world are watching AI accounting products closely. The direction is consistent: AI can assist, but a qualified professional must oversee.
The EU's AI Act places AI used in certain financial decisions, such as creditworthiness assessment, in its high-risk category. Tax authorities in Germany, the UK, and France require authorised agents for filing. Malta's FIAU is moving toward classifying tax compilation work as a "relevant activity" requiring compliance oversight. The US has Circular 230 governing who can practise before the IRS.
Building an AI accounting product without an accountant in the loop isn't just risky — it's increasingly likely to be non-compliant. The products that build the accountant into the architecture from day one won't need to retrofit it when regulation catches up.
The trust advantage
Beyond liability and regulation, there's a simpler reason: people don't trust AI alone with their taxes.
They'll use AI to categorise expenses. They'll use AI to estimate their tax bill. They'll use AI to answer questions about deductibility. But when it comes to the actual filing — the document that goes to the government, that determines how much they owe — they want a human to have checked it.
This isn't irrational. It's prudent. And platforms that respect this preference by building accountant review into the workflow will earn more trust than platforms that tell users to trust the AI.
The accountant in the loop isn't a constraint on AI. It's what makes AI in accounting actually work.
Accora's model: AI classifies, deterministic engines compute, warranted accountants review and file. The accountant carries the liability. You and your users don't. Learn more at accora.ai
Michael Cutajar, CPA — Founder of Accora.