This is part 2 of a series on How to Successfully Implement AI. Part 1 covered what AI is. This one covers something more important: can you trust AI?
I was at an ABFO event in Boston recently, and someone asked: “How do you secure the data and prevent hallucinations?”
AI models don’t “know” things the way a system of record does. They predict. Most of the time they’re right; sometimes they’re not. Even the best models get things wrong.
So the next question is: How do you build a system where AI errors don’t turn into misstatements and bad guidance?
Most AI tools in finance today plug an LLM directly into workflows and let it generate outputs—numbers, recommendations, summaries—without enough structure underneath.
That’s where most systems break: the same component that can “guess” is now influencing decisions.
We separate two things that should never be mixed: the numbers and the interpretation of the numbers.
1. The numbers (deterministic) - This includes accruals, budget vs. actuals, and forecast rollups. These come from Condor’s financial engine, not AI. They follow defined rules, are consistent, and are auditable. No guessing.
2. The interpretation (AI) - AI sits on top of those numbers and helps answer questions about what they mean.
It finds patterns, flags anomalies, and explains what’s happening. But it does NOT create the numbers. AI can’t corrupt your financials because it never owns them.
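To make that split concrete, here’s a minimal sketch in Python. Every name in it (`BudgetLine`, `variance`, `explain_variance`, the `llm` callable) is hypothetical, not Condor’s actual code; it just illustrates the pattern of a deterministic engine that computes values and an AI layer that can only read them.

```python
from dataclasses import dataclass

# Hypothetical names throughout; a pattern sketch, not Condor's internals.

@dataclass(frozen=True)  # frozen: the AI layer can read these values but never mutate them
class BudgetLine:
    account: str
    budget: float
    actual: float

def variance(line: BudgetLine) -> float:
    """Deterministic: computed by a fixed rule. Same inputs, same answer."""
    return line.actual - line.budget

def explain_variance(line: BudgetLine, llm) -> str:
    """AI layer: reads the already-computed numbers and drafts an explanation.
    It is handed values, not write access to the ledger."""
    prompt = (
        f"Account {line.account}: budget {line.budget:,.0f}, "
        f"actual {line.actual:,.0f}, variance {variance(line):,.0f}. "
        "Suggest the likely driver in one sentence."
    )
    return llm(prompt)  # downstream, this output is labeled AI-generated
```

The point of the structure: a wrong answer from `llm` produces a bad sentence, never a bad number.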
Inside Condor, AI doesn’t take action on its own. Any material action, like adjusting a forecast, calculating an accrual, or reconciling a balance, requires human review and approval.
AI drafts and you decide. And it’s always clear what’s AI-generated and what’s system-calculated.
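Here’s one way that gate could look, again a hypothetical sketch rather than Condor’s actual implementation: AI output enters the system as a labeled draft, and nothing is applied until a named human approves it.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical approval gate, for illustration only.

class Status(Enum):
    DRAFT = "draft"        # AI-generated, not yet applied
    APPROVED = "approved"  # a human signed off
    REJECTED = "rejected"

@dataclass
class Suggestion:
    description: str       # e.g. "Adjust Q3 accrual for site 12"
    source: str            # "ai" or "system": every output carries its label
    status: Status = Status.DRAFT

def apply_suggestion(s: Suggestion, approver: str) -> None:
    """Nothing AI-drafted takes effect until a named human approves it."""
    if s.status is not Status.APPROVED:
        raise PermissionError(f"{s.description!r} needs human approval first")
    print(f"Applied {s.description!r}, approved by {approver}")

# Usage: the status flip is done by a human reviewer, never by the model.
s = Suggestion("Adjust Q3 accrual for site 12", source="ai")
s.status = Status.APPROVED
apply_suggestion(s, approver="jane.doe")
```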
We treat data security as architecture, not policy.
Everything is traceable, and every output can be audited.
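A simple way to picture that: an append-only log where every output, whether engine-calculated or AI-drafted, records its source, the inputs it came from, and a hash chained to the previous entry. This is an illustrative sketch under those assumptions, not Condor’s implementation.

```python
import hashlib
import json
import time

# Illustrative append-only audit trail. Each entry records what produced the
# output (engine or AI), the inputs it was derived from, and a hash chained
# to the previous entry, so any after-the-fact tampering breaks the chain.

_log: list[dict] = []

def record(output: str, source: str, inputs: dict) -> dict:
    prev = _log[-1]["hash"] if _log else ""
    entry = {
        "ts": time.time(),
        "source": source,   # "engine" or "ai", never ambiguous
        "inputs": inputs,
        "output": output,
        "prev": prev,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    _log.append(entry)
    return entry
```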
The system we’ve built at Condor doesn’t eliminate AI errors; it catches them before they matter. That’s why accounting, FP&A, and clinical teams trust Condor to manage over $19B in R&D spend.