In the last 60 days, a $32 billion company launched an AI agent to do your bookkeeping. A startup hit $1.15 billion in valuation on the promise of autonomous accounting. Another platform won a G2 best-of award for an AI bot trained on 83 million hours of finance work. The trade press is calling it "The AI Bookkeeper Wars."
Ramp, Basis, Stampli, Consark. Billions in funding. Thousands of customers. Each promising some version of the same thing: AI does your books now.
I went and looked at who's building these things.
The founders
Ramp's CEO is Eric Glyman. Harvard economics, previously co-founded a price-tracking app that Capital One acquired. Not a CPA.
Basis was co-founded by Matthew Harpe (BCG background) and Mitchell Troyanovsky (first product hire at a fintech startup). Neither is a CPA.
Stampli's CEO is Eyal Feldman. MBA, background in ERP and document management. Not a CPA.
Consark is the one exception that almost proves the rule. Their CEO, Karthik Annapragada, is a Chartered Accountant credentialed under India's ICAI, with Deloitte India on his resume. That's real accounting experience. But it's not a US CPA license, and it's one company out of four.
I'm not saying these people aren't smart. They clearly are. Ramp hit a billion in revenue. Basis convinced 30% of the top 25 US accounting firms to use their product. But building accounting software and being accountable for accounting work are different things.
Two business models, one gap
There's a split in this market that the press coverage is lumping together.
Basis and Canopy sell to CPA firms. The CPA firm is the customer. When Basis's agent processes a tax return, a CPA at the firm reviews and signs it. The licensed professional stays in the loop because the licensed professional is the one buying the tool. The AI does the production work; the CPA owns the result. That's sensible.
Ramp, Stampli, and Consark sell directly to businesses. No CPA in the loop. No licensed professional reviewing the output. Ramp's blog describes a customer who trusts 98% of transactions to sync without review. Stampli markets Billy as "your AI employee" who handles AP end-to-end. Consark says "final authority remains with finance leadership" — but doesn't require that leadership hold any accounting credentials.
Both approaches use AI for accounting. One keeps a licensed professional accountable for the output. The other doesn't.
The accountability chain
When a CPA firm does your accounting and something goes wrong — a misclassification that hits your tax filing, an error that flows into financial statements — there's a clear accountability chain. The CPA carries malpractice insurance. They're subject to state board discipline and peer review. They signed an engagement letter that defines their professional responsibility. There's a real person, with a real license they can lose, who is responsible for the work.
When a software company's AI agent does your accounting and something goes wrong, the accountability chain looks different. The software company isn't a CPA firm. They don't carry malpractice insurance for your books. The relationship is governed by a Terms of Service, not an engagement letter. Somewhere in that Terms of Service there's a limitation of liability clause that caps their exposure at what you paid for the subscription.
This isn't a theoretical concern.
What happens when nobody checks
In October 2025, Deloitte's Australian firm delivered a $290,000 report to the federal government that contained AI-generated hallucinations — references to academic papers that don't exist, a fabricated quote attributed to a federal court judge, and a fake book attributed to a Sydney University professor. A university researcher caught the errors. Not Deloitte's QA process. A researcher who happened to Google the citations.
Deloitte is a Big Four firm. Thousands of people. Multiple review layers. They still shipped hallucinated content to a government client. The UK's Financial Reporting Council later found that five out of six major firms deploying AI tools had no formal process for measuring whether the AI was actually improving audit quality. They tracked usage for licensing purposes. Not quality.
If Deloitte can't reliably catch AI errors with armies of reviewers, what happens when an AI agent auto-syncs a miscoded transaction to your ERP at 2 AM with nobody reviewing it at all?
We already know what happens. In December 2024, Bench — a VC-backed bookkeeping startup with $135 million in funding and 35,000 customers — shut down overnight, right before tax season. Their model was heavily automated with minimal human oversight. Clients reported recurring categorization errors that were "consistently overlooked." Some were still waiting for their 2023 financial records as late as September 2024. The support team had "little to no accounting knowledge."
Bench isn't what happens when AI accounting fails. It's what happens when you scale accounting automation without anyone accountable for the output.
The 98% problem
Ramp reports 98% accuracy on "ready to sync" transactions. Let's take that at face value.
A mid-market company processing 500 transactions a month at 98% accuracy has 10 errors per month auto-syncing to their ERP. Over a year, that's 120 miscoded transactions flowing into financial statements. Some are trivial. Some hit your tax basis. Some change your 1099 reporting. Some affect how revenue gets recognized.
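The arithmetic is trivial, which is the point. Here's a back-of-envelope sketch in Python using the illustrative 500-transaction volume and the 98% figure above; the assumption that each transaction is an independent coin flip is mine, not anything the vendors publish:

```python
# Back-of-envelope: unflagged errors at a claimed accuracy rate.
# Volume and rate are illustrative; independence is an assumption.
monthly_transactions = 500
accuracy = 0.98

errors_per_month = monthly_transactions * (1 - accuracy)  # 10 per month
errors_per_year = errors_per_month * 12                   # 120 per year

# Treating each transaction as an independent 98% coin flip,
# the chance that an entire month syncs with zero errors:
p_clean_month = accuracy ** monthly_transactions          # ~0.00004

print(f"errors per month: {errors_per_month:.0f}")
print(f"errors per year:  {errors_per_year:.0f}")
print(f"odds of an error-free month: {p_clean_month:.5f}")
```

Under that independence assumption, the odds of even one clean month at this volume are about four in a hundred thousand. The errors aren't a tail risk. They're a monthly certainty.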
A CPA reviewing those transactions would catch them. That's what review is. The AI can't catch its own errors, because it doesn't know which transactions it got wrong; if it did, they wouldn't be errors. An accuracy rate is an aggregate: 98% confident, not 98% verified. The 2% it gets wrong look exactly like the 98% it gets right.
And 98% is the self-reported number. From Ramp's own blog. On their easiest transactions — the ones the system flagged as "ready to sync." The error rate on the ambiguous ones, the ones that actually require judgment, is unknown. As one industry analysis put it: "that remaining 2% is where the IRS lives."
What to ask
If you're evaluating an AI accounting tool, there's really one question that cuts through the marketing.
Who signs the work?
Not who built the software. Not who funded the company. Not what accuracy percentage the press release claims. Who is personally, professionally responsible when the output is wrong?
If the answer is "you are," then you bought a tool, not a service. Tools are fine. But a tool that markets itself as removing your review burden while contractually leaving you responsible for the output opens a gap that nobody in the "AI Bookkeeper Wars" coverage is talking about.
The profession figured this out centuries ago. A bookkeeper records the transaction. An independent reviewer checks it. A licensed professional signs it. Separation of duties.
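The control is simple enough to state in code. Here's a toy sketch; the names, roles, and three-step pipeline are my illustration of the principle, not any vendor's actual architecture:

```python
# Toy model of separation of duties: record -> review -> sign.
# All names, roles, and the "licensed" flag are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Person:
    name: str
    licensed: bool = False  # holds a license they can lose

@dataclass
class Transaction:
    description: str
    recorded_by: Optional[Person] = None
    reviewed_by: Optional[Person] = None
    signed_by: Optional[Person] = None

    def record(self, who: Person) -> None:
        self.recorded_by = who

    def review(self, who: Person) -> None:
        if who == self.recorded_by:
            raise PermissionError("reviewer must be independent of the recorder")
        self.reviewed_by = who

    def sign(self, who: Person) -> None:
        if self.reviewed_by is None:
            raise PermissionError("nothing to sign until an independent review")
        if not who.licensed:
            raise PermissionError("only a licensed professional signs")
        self.signed_by = who

txn = Transaction("Q3 software subscription, $1,200")
txn.record(Person("bookkeeper"))
txn.review(Person("reviewer"))
txn.sign(Person("cpa", licensed=True))
```

The direct-to-business model deletes the last two steps: the agent records, the auto-sync stands in for review, and nobody with a license ever signs.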
When you remove the licensed professional from that chain and replace them with a Terms of Service, you haven't automated accounting. You've automated the production of financial data that nobody is professionally responsible for.
That distinction matters the first time something goes wrong.