May 12, 2026

How our platform uses AI to produce numbers you can trust

Jen Kyle
CEO, Founder, Condor
AI
automation
condor
security

This is part 2 of a series on How to successfully implement AI. Part 1 covered what AI is. This one covers something more important: Can you trust AI?

I was at an ABFO event in Boston recently, and someone asked: “How do you secure the data and prevent hallucinations?” 

AI models don’t “know” things the way a system of record does. They predict. Most of the time the prediction is right, but even the best models get things wrong.

So the next question is: How do you build a system where AI errors don’t turn into misstatements and bad guidance?

Most AI tools in finance today plug an LLM directly into workflows and let it generate outputs—numbers, recommendations, summaries—without enough structure underneath.

That’s where most systems break: the same system that can “guess” is now also influencing decisions.

How Condor approaches it differently

First: We don’t mix the numbers and the interpretation 

We separate two things that should never be mixed: the numbers and the interpretation of the numbers.

1. The numbers (deterministic) - This includes accruals, budget vs. actuals, and forecast rollups. These come from Condor’s financial engine, not AI. They follow defined rules, are consistent, and are auditable. No guessing.

2. The interpretation (AI) - AI sits on top of those numbers, and helps answer questions like:

  • What changed?
  • What looks off?
  • What should I pay attention to?

It finds patterns, flags anomalies, and explains what’s happening. But it does NOT create the numbers. AI can’t corrupt your financials because it never owns them.
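One way to picture the separation is a sketch like the following. This is not Condor’s actual code; the names (`compute_variances`, `explain`, the 10% threshold) are illustrative, and a simple rule stands in for the LLM. The point is structural: the interpretation layer reads computed numbers but has no way to write them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: reports are immutable once computed
class VarianceReport:
    """Deterministic output: produced by rules, never by a model."""
    line_item: str
    budget: float
    actual: float

    @property
    def variance(self) -> float:
        return self.actual - self.budget

def compute_variances(budget: dict, actuals: dict) -> list[VarianceReport]:
    """The 'financial engine': pure arithmetic, auditable, no AI involved."""
    return [
        VarianceReport(item, budget[item], actuals.get(item, 0.0))
        for item in budget
    ]

def explain(reports: list[VarianceReport], threshold: float = 0.10) -> list[str]:
    """The interpretation layer: reads the computed numbers and flags
    what looks off. In a real system an LLM would draft this commentary;
    either way, the layer never writes back to the numbers."""
    flags = []
    for r in reports:
        if r.budget and abs(r.variance) / r.budget > threshold:
            flags.append(
                f"{r.line_item}: variance of {r.variance:+,.0f} exceeds 10% of budget"
            )
    return flags

budget = {"CRO fees": 500_000.0, "Lab supplies": 80_000.0}
actuals = {"CRO fees": 640_000.0, "Lab supplies": 82_000.0}
reports = compute_variances(budget, actuals)
print(explain(reports))  # flags CRO fees only; Lab supplies is within threshold
```

Because `VarianceReport` is frozen and `explain` only returns strings, a wrong explanation can mislead a reader, but it cannot corrupt a number.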

Second: humans stay in control

Inside Condor, AI doesn’t take action on its own. Any material action, such as adjusting a forecast, calculating an accrual, or reconciling a balance, requires human review and approval.

AI drafts; you decide. And it’s always clear what’s AI-generated and what’s system-calculated.

Third: security is built in—not bolted on

We treat data security as architecture, not policy.

  • Each customer’s data is isolated at the database level
  • AI models don’t have direct access to your data
  • Only the minimum data needed for a task is shared
  • Sensitive data (like patient info) is tightly controlled
  • Nothing is stored, reused, or used to train models

Everything is traceable, and every output can be audited.
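The “minimum data needed” principle above is often implemented as an allowlist at the boundary. A minimal sketch, with hypothetical field names, of why an allowlist beats a blocklist: sensitive fields are excluded by default rather than removed one by one.

```python
# Only fields on the allowlist ever cross the boundary toward a model;
# everything else (patient identifiers, free-text notes) is dropped.
ALLOWED_FIELDS = {"site_id", "visit_count", "cost_per_visit"}

def minimize(record: dict) -> dict:
    """Build the model's input from an explicit allowlist, so a newly
    added sensitive field is excluded automatically."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "site_id": "SITE-042",
    "patient_name": "Jane Doe",       # never shared
    "visit_count": 12,
    "cost_per_visit": 1_850.00,
    "notes": "subject reported ...",  # never shared
}
print(minimize(record))
```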

What this means in practice

The system we’ve built at Condor is designed to contain AI errors: when AI is wrong, the mistake gets caught before it matters. That’s why accounting, FP&A, and clinical teams trust Condor to manage over $19B in R&D spend.