AI Isn’t the Future of Regulation; It’s Already Here.
Oct 3, 2025
Most conversations about AI in finance and compliance frame it as a “future issue.” Something firms will get around to in 2030, once the technology is more stable and regulators have made up their minds.
That mindset is already out of date. AI isn't waiting for the future; it's embedded in today's rules, frameworks, and expectations. From the EU AI Act to SFDR and CSRD, regulators are assuming firms will use automation and AI to keep up. The only real question is whether your firm will meet that expectation or scramble to explain why it didn't.
The Regulatory Workload Is Already Beyond Human Capacity
Look at SFDR or CSRD: hundreds of mandatory datapoints across multiple frameworks, all requiring accuracy, comparability, and timeliness. Add EU Taxonomy mapping, TCFD, SBTi, and Net Zero trackers, and the reporting workload becomes unmanageable without technological support.
Regulators know this; they're not pretending armies of analysts with spreadsheets can keep up. Instead, they've shifted the focus from how disclosures are built to whether they're transparent, traceable, and auditable. Those aren't manual skills; they're machine skills.
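To make "machine skills" concrete, here's a minimal sketch in Python (hypothetical names, not any specific vendor's API or framework schema) of what a traceable, auditable datapoint can look like: every reported figure carries its source, its derivation method, and a checksum, so a reviewer can trace and verify it end to end.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass(frozen=True)
class Datapoint:
    """One disclosed figure, carrying the lineage an auditor needs to verify it."""
    framework: str   # e.g. "SFDR PAI 1 (GHG emissions)" -- illustrative label
    value: float
    unit: str
    source: str      # where the raw number came from
    method: str      # how it was derived: direct disclosure, model estimate, etc.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def checksum(self) -> str:
        """Hash the full record so any later change to the figure is detectable."""
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical example: one emissions datapoint with its lineage attached.
dp = Datapoint(
    framework="SFDR PAI 1 (GHG emissions)",
    value=12450.0,
    unit="tCO2e",
    source="issuer_reported:annual_report_2024",
    method="direct_disclosure",
)
print(dp.checksum())  # an auditor can recompute this to confirm nothing changed
```

None of this is hard for a machine and all of it is tedious for a human, which is exactly the point.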
The EU AI Act Is Here, Not Hypothetical
The EU AI Act isn’t a distant concept; it’s already live and phasing in. Over the past year, here’s what’s happened:
August 2024: The Act officially entered into force. It’s the law, not a draft.
February 2025: The first binding provisions took effect, including bans on "unacceptable risk" AI systems and general provisions on definitions, scope, and AI literacy.
May 2025: The European Commission's deadline to have voluntary Codes of Practice ready for general-purpose AI (GPAI) models, guiding providers and users.
August 2025: A major milestone: obligations for GPAI providers, governance and confidentiality rules, penalties, and the designation of national competent authorities all came into effect. Member States also had to set up market surveillance systems.
That means we’re already in the live enforcement phase. The “future of regulation” is happening now. From February 2026 onwards, the next deadlines are:
February 2026: The European Commission must publish guidelines on classifying high-risk AI systems under Article 6.
August 2026: Most remaining obligations kick in, including requirements for high-risk AI systems. Any system already on the market that undergoes major changes must comply.
August 2027: Full application of Article 6(1): classification rules for high-risk AI systems become binding. GPAI providers that entered the market before August 2025 must achieve compliance by this point.
Regulators Don’t Fear AI. They Expect It.
The fear narrative that regulators are suspicious of AI misses the point. What regulators actually fear is bad AI: black-box models, opaque decisions, and unverifiable outputs.
But explainable AI? Auditable AI? Domain-specific AI that's aligned with regulation? That's not a threat; it's the new standard. Regulators are signalling that firms that don't use AI will struggle to meet disclosure and reporting obligations at scale.
Falling Behind Isn’t a Neutral Position
Firms dragging their feet on AI adoption aren't playing it safe; they're creating compliance risk. Manual processes are slow, inconsistent, and untraceable: all red flags in a world where regulators demand proof, not promises.
Meanwhile, competitors are already moving. AI tools are being deployed to monitor controversies in real time, pre-fill reporting templates, map investments to frameworks, and create an audit trail for every step. These firms aren't waiting for the "future of regulation"; they're shaping it.
Final Word
It’s time to retire the idea that AI is tomorrow’s problem. In finance and ESG, it’s already embedded in today’s frameworks.
Regulators don't just tolerate AI; they're counting on it. The firms that recognise this shift will spend 2025 leading. The ones that don't will spend 2026 explaining to regulators why they fell short.
AI isn’t the future of regulation; it’s already here.