AI Agents for Compliance: Diligent AI's Race to Automate KYC/AML
- Marc Griffith

- Mar 4
- 5 min read

Summary: Diligent AI has closed a €2.1M seed round to accelerate AI agents that automate KYC/AML work and investigations into sanctions, PEPs and adverse media. Clients include Flywire and Allica Bank. The focus is on Europe and the UK, with investors including Speedinvest and Y Combinator and use cases ready to scale.

Key takeaways

AI agents for compliance are becoming the key tool for cutting costs and timelines in KYC/AML functions, standardizing decisions and investigations in a traceable way.
Diligent AI, a startup based in London and Berlin, has closed a €2.1 million ($2.5 million) seed round to bring its autonomous analysts for regulated finance to market. The round was led by Speedinvest, with participation from fintech investor Shapers and ongoing support from Y Combinator, alongside angel investment from founders and CEOs of N26, Allica Bank, IDnow, Billie and Cybersource. The goal: scale the agents across the UK and Europe.
Why AI agents for compliance are scaling
Adopting AI agents for compliance means shifting human work from repetitive research to risk assessment and strategic decisions. According to Diligent AI, KYC/AML teams are now overwhelmed by rising volumes: more sanctions used as geopolitical tools, more sophisticated fraud and scams, and accelerating transaction speeds.
In frontline teams, speed often takes priority over depth; AI agents aim to restore the balance, elevating analysis and reducing repetitive workload.
CEO and co-founder Edoardo Maschio sums up the impact: "When you remove repetitive tasks like handling false positives and researching company registries and news, you free the mind for judgment and strategy." It is a shift from data processing to decision making, with a direct impact on response times and final quality.
What Diligent AI does: AI agents for compliance
Diligent AI, founded in 2023 by Edoardo Maschio (former BCG and Rocket Internet) and Ahmed Gaber (former CTO of Billie), builds agents capable of reading, reasoning and investigating autonomously. The agents cover AML screening, merchant due diligence, resolution of sanctions alerts, PEP and adverse media, as well as gathering and contextualizing information.
The value proposition is twofold: reducing manual operations and increasing the quality and consistency of decisions. Standardizing the investigative process eliminates analyst-to-analyst variability, ensures 24/7 rigor, and reduces fatigue that typically lowers attention at the end of a shift.
"Compliance operations cannot scale proportionally to risk complexity: the way forward is to fight fire with fire, AI against AI," comments Speedinvest.
Operational data on AI agents for compliance
Diligent's agents are already active in Europe, the Middle East, the United States and Japan, with clients such as Flywire, Allica Bank, Alma, Teya and Tamara. Institutions report operational savings and more robust decisions thanks to uniform investigations across alerts and due diligence, reducing false positives and cycle times.
From a workflow perspective, the agents replace static pipelines with dynamic investigations: they consult registries, cross-reference adverse media, contextualize entities and transactions, and document every step. This creates granular audit trails and reduces the risk of procedural gaps, which is useful in regulatory inspections.
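A step-by-step investigation that documents everything it consults can be pictured as an append-only log. The sketch below is purely illustrative (the class, field names, and sample sources are assumptions, not Diligent AI's actual implementation); it shows how recording each action with a source and timestamp yields the kind of granular audit trail the article describes.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditTrail:
    """Append-only log of an agent's investigation steps (illustrative)."""
    case_id: str
    steps: list = field(default_factory=list)

    def record(self, action: str, source: str, finding: str) -> None:
        # Each step captures what was done, which source was consulted,
        # and what was found, with a UTC timestamp for later review.
        self.steps.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "source": source,
            "finding": finding,
        })

    def export(self) -> str:
        # Serialize the full trail for archiving alongside the decision.
        return json.dumps(asdict(self), indent=2)

# Hypothetical alert investigation
trail = AuditTrail(case_id="ALERT-0042")
trail.record("registry_lookup", "company_registry", "entity active, 2 directors")
trail.record("adverse_media_search", "news_index", "no relevant matches")
print(trail.export())
```

The append-only structure matters: an inspector can replay exactly which sources were consulted and in what order, which is what closes procedural gaps.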
European RegTech landscape: investments and signals
In the 2025–2026 period, the European regtech ecosystem continues funding solutions for compliance, risk and automation: rounds include Falkin (€1.7M pre-Seed), Innerworks (€3.7M Seed), Bits (€12M Series A), Resistant AI (€21M Series B), Hawk (€51.8M Series C) and Taktile (€51.5M Series B). The trajectory is clear: automating regulated processes is a priority to control costs and support digital growth.
For founders and compliance leaders, this signals a competitive edge for early adopters of agent-based solutions, especially where decision quality and traceability are measurable. Early adoption can become an operational standard before competitors close the gap.
Debate: risks and trade-offs of AI agents in compliance
Adopting autonomous agents in regulated spaces raises technical, organizational and ethical questions. On one hand, automating repetitive tasks frees human resources for complex investigations; on the other, it increases dependence on models whose opacity must be mitigated with governance, explainability and robust audit trails. For banks, secure integration requires access controls, secrets management, environment segregation, and upfront validation of agent behavior on representative case sets.
It is crucial to define guardrails: human-in-the-loop policies for ambiguous cases, confidence thresholds for end-to-end automation, and metrics shared between risk and business (precision on sanctions/PEP alerts, recall on adverse media, average resolution time). Organizational design matters as much as the model: a clear RACI, periodic review of prompts and connectors, and change management that retrains analysts as supervisors and decision-makers.
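The guardrails above boil down to a routing rule: automate only above a confidence threshold, escalate ambiguous cases to a human. A minimal sketch, assuming hypothetical threshold values and label names (nothing here is a vendor's actual policy):

```python
from dataclasses import dataclass

# Assumed thresholds; in practice these are set jointly by risk and business.
AUTO_CLEAR = 0.95     # at or above this, the agent may close the alert end-to-end
AUTO_ESCALATE = 0.60  # below this, route straight to a human analyst

@dataclass
class AlertAssessment:
    alert_id: str
    risk_label: str    # e.g. "false_positive" or "true_match"
    confidence: float  # model confidence in the label, 0..1

def route(assessment: AlertAssessment) -> str:
    """Risk-based routing: automate only high-confidence decisions,
    keep ambiguous cases human-in-the-loop."""
    if assessment.confidence >= AUTO_CLEAR:
        return "auto_resolve"
    if assessment.confidence < AUTO_ESCALATE:
        return "human_review"
    return "human_confirmation"  # agent proposes, analyst confirms

print(route(AlertAssessment("A1", "false_positive", 0.98)))  # auto_resolve
print(route(AlertAssessment("A2", "true_match", 0.70)))      # human_confirmation
```

The middle band is the interesting design choice: the agent still does the investigative work, but the final decision stays with an analyst, which keeps automation risk-based rather than all-or-nothing.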
Another issue is regulatory alignment: documentation must allow authorities to understand how the agent arrives at a decision, which sources it uses, and how it handles uncertainty. Explainability and logging are integral parts of the product, not optional: without them, large-scale adoption will remain constrained. On data, teams should assess localization (EU vs non-EU), rights to training data, and minimization of personal information processed.
Finally, there is the risk of over-automation: relying too heavily on agents can introduce systematic errors that are hard to spot. A risk-based approach that balances automation and human oversight according to impact reduces bias and cascading errors. Despite these trade-offs, the evidence on savings and decision quality suggests the competitive edge will come from how well we govern AI, not how much we avoid it.
What to monitor in the next 12 months
Three signals to watch: the maturity of connectors to authoritative sources (registries, sanctions lists, media), AI compliance frameworks in the EU/UK, and independent benchmarks on false positives and negatives. Institutions should request references, reproducible metrics, and rapid POCs on real cases to validate ROI and quality.
On the market side, partnerships with core banking platforms, processors and screening providers will determine adoption speed. Those who integrate agents directly into transactional systems will drastically reduce friction and time to production.
Resources and useful links
For ecosystem insights: Speedinvest, Y Combinator, and real-world cases from institutions like N26 and Allica Bank. Evaluating multiple sources and requesting independent technical audits increases the robustness of purchasing decisions.
Concrete next moves
For those steering compliance or product, the first steps include mapping high-volume, low-risk tasks for automation, defining KPIs (false positives, TAT, dossier quality), and designing a 6–8 week POC on a set of realistic cases. Setting shared success criteria in advance across compliance, risk and IT accelerates internal scale-up.
From lab to production: how to scale
Industrializing means orchestrating agents, data sources, controls and observability. Implementing human-in-the-loop policies, versioning prompts and models, and introducing continuous evaluation on representative samples reduces regressions in production.
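Continuous evaluation is the most concrete of those practices: before promoting a new prompt or model version, score it on a representative labeled sample and block it if precision or recall regresses. A minimal sketch, with assumed metric names and an assumed 2-point tolerance (not a standard from the article):

```python
def precision_recall(predictions, labels):
    # predictions/labels: parallel lists of booleans (True = flagged as risk)
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def check_regression(current, baseline, tolerance=0.02):
    """Block a new prompt/model version if precision or recall drops
    more than `tolerance` below the baseline version's scores."""
    return all(c >= b - tolerance for c, b in zip(current, baseline))

# Tiny illustrative sample: 5 alerts, 3 of which are true risks.
labels      = [True, True, False, False, True]
predictions = [True, False, False, False, True]
current = precision_recall(predictions, labels)
print(current)  # precision 1.0, recall ~0.67
```

Run against the same frozen sample on every version bump, this gives the "continuous evaluation on representative samples" the article calls for, and the tolerance parameter is where risk appetite gets encoded.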
Closing the loop: value and governance
Measure impact not only on costs and timelines, but on the quality and consistency of decisions: audit trails, explainability, and the rate of appropriate escalations. Only then will AI agents for compliance scale sustainably and in line with regulators' expectations.
From here on: a governed race
The trajectory is set: AI agents for compliance will become part of the operational infrastructure; the winners will be those who unite clear metrics, robust governance, and rapid experimentation with trusted partners.