AI Act & Fintech: A Guide to Article 50, DORA, and PSD3 Compliance
The European Commission, through its new European AI Office, has initiated a foundational effort to regulate the outputs of generative artificial intelligence, signaling a new era of mandatory transparency for the financial sector. This is not a routine update; it is a structural shift in regulatory expectations that will touch every AI-driven customer interaction, internal report, and fraud detection system.
On November 5, 2025, the AI Office launched the development of a new Code of Practice focused on the transparency of AI-generated content. This seven-month, multi-stakeholder process is designed to create the official guidebook for operationalizing the mandatory transparency obligations set forth in Article 50 of the EU AI Act. These obligations become fully enforceable in August 2026, and preparations must begin immediately.
It is critical for executives to understand what this Code is, and what it is not. This new Code pertains specifically to Article 50: Transparency Obligations. It focuses on the outputs of AI: the labeling of deepfakes, text, and audio, and the disclosure requirements for chatbots. This is not the same as the General-Purpose AI (GPAI) Code of Practice (finalized in July 2025), which addresses the providers of foundation models (like OpenAI or Google) and their obligations regarding model governance and systemic risk.
A financial firm using a third-party AI model to build a customer-facing chatbot is a "deployer" and must comply with this new Article 50 Code. While the Code is technically "voluntary," adherence provides a "presumption of conformity" with the AI Act's mandatory rules. Choosing not to follow the Code means bearing a significantly heavier legal burden to prove compliance. Therefore, this Code will become the de facto baseline for all regulatory audits.
New Liabilities for Deployers
For a financial institution, understanding the distinct liabilities for "deployers" and "providers" is paramount. Most firms will act as "deployers": entities using an AI system in a professional capacity. The obligations for deployers under Article 50 are focused on disclosure to the end-user.
Key deployer obligations include:
Chatbot and Interaction Disclosure: Firms must ensure that natural persons are informed they are interacting with an AI system. This directly impacts all AI-driven customer service tools, from chatbots to robo-advisors.
Deepfake Disclosure: Any deployer generating or manipulating image, audio, or video content that constitutes a "deepfake", for example in marketing or training materials, must disclose that the content has been artificially generated or manipulated.
Public Interest Text Disclosure: This is the most significant gray area. A deployer must disclose that text published with the purpose of informing the public on matters of public interest (e.g., market analysis, economic outlooks) has been artificially generated.
Critically, Article 50 provides a vital exception for this text-disclosure rule: it does not apply if the AI-generated content has "undergone a process of human review or editorial control" and a person "holds editorial responsibility for the publication". This exception elevates the "Human-in-the-Loop" (HITL) model from a best practice to a crucial legal safe harbor. Establishing a robust, auditable process for human editorial sign-off on all AI-generated public communications will become a cornerstone of Article 50 compliance.
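To make that safe harbor auditable, a firm needs a durable record linking each piece of AI-generated public text to the person who reviewed and approved it. The following is a minimal sketch in Python, assuming a simple append-only JSONL log; the `EditorialApproval` fields, the `record_approval` helper, and the log location are illustrative choices for this sketch, not terms prescribed by Article 50 or the draft Code.

```python
# Illustrative sketch only: an append-only audit record for human editorial
# sign-off on AI-generated public content. Field names and the JSONL storage
# format are assumptions, not requirements of Article 50.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("editorial_approvals.jsonl")  # hypothetical log location


@dataclass
class EditorialApproval:
    content_sha256: str  # hash of the exact text that was reviewed
    reviewer: str        # person holding editorial responsibility
    decision: str        # "approved" or "rejected"
    notes: str           # what was checked or changed during review
    reviewed_at: str     # ISO 8601 timestamp, UTC


def record_approval(text: str, reviewer: str, decision: str, notes: str = "") -> EditorialApproval:
    """Hash the reviewed text and append an approval record to the audit log."""
    approval = EditorialApproval(
        content_sha256=hashlib.sha256(text.encode("utf-8")).hexdigest(),
        reviewer=reviewer,
        decision=decision,
        notes=notes,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(approval)) + "\n")
    return approval


# Example: an analyst signs off on an AI-drafted market commentary before publication.
record_approval(
    text="Q3 market outlook draft ...",
    reviewer="jane.doe@example-bank.eu",
    decision="approved",
    notes="Figures verified against source data; risk section rewritten by hand.",
)
```

Hashing the exact reviewed text ties each approval to a specific version, so any later edit to the content requires a fresh sign-off before publication.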
"Accidental Providers" and Fragile Tech
While deployer obligations are manageable, two significant hidden risks can create catastrophic compliance failures.
The first is the "Accidental Provider" Trap. The AI Act's most severe rules are reserved for "providers" of "high-risk" systems. The Act explicitly classifies AI used "to evaluate the creditworthiness of natural persons or establish their credit score" as high-risk. A fintech firm ("deployer") that licenses a third-party GPAI model and fine-tunes it on proprietary data to create a new credit-scoring tool may, in that instant, be re-classified as a provider of a high-risk AI system. This single act of innovation triggers the full, multi-million-euro compliance stack: mandatory risk management systems, data governance protocols, technical documentation, human oversight mechanisms, and conformity assessments before the system can be used. This trap necessitates a new, C-suite-level governance framework where legal and compliance teams have direct oversight of AI R&D.
The second risk is the "Fragile Compliance" Gap. Article 50 mandates that providers ensure their outputs are "marked in a machine-readable format" and "detectable". The two leading technologies for this are C2PA, which provides a "digital nutrition label" of provenance metadata, and digital watermarking, like Google's SynthID, which embeds an imperceptible signal into images, audio, or text.
However, these technologies are fragile. C2PA metadata is trivially stripped from an image by the simple act of taking a screenshot or uploading it to most social media platforms, which re-encode the file. Text watermarks, which rely on statistical patterns in word choice, can be destroyed by basic paraphrasing tools. A firm can be fully compliant at the point of publication, yet a malicious actor can strip those safeguards and repurpose the content for fraud, leaving the firm unable to demonstrate, from the circulating content alone, that it complied. This fragility means technology alone is not a sufficient defense; it must be backstopped by the auditable human processes outlined in the "editorial control" safe harbor.
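To see why a screenshot defeats C2PA, note that the re-rendered file simply never contains the embedded manifest. The Python sketch below is a rough heuristic only: it scans a file's bytes for the "c2pa" JUMBF label, which can both miss manifests and produce false positives, and it is not a conforming validator (genuine verification checks the manifest's cryptographic signatures with dedicated C2PA tooling). The file names are hypothetical.

```python
# Rough heuristic only: does this file appear to carry an embedded C2PA manifest?
# Real verification must validate the manifest's signatures with proper C2PA
# tooling; this byte scan merely illustrates that a screenshot or re-encoded
# copy usually contains no manifest at all.
from pathlib import Path


def appears_to_have_c2pa_manifest(path: str) -> bool:
    data = Path(path).read_bytes()
    # C2PA manifest stores are embedded in JUMBF boxes labeled "c2pa".
    return b"c2pa" in data


# Hypothetical files: an export carrying provenance vs. a screenshot of the same image.
for f in ("ai_generated_chart.jpg", "screenshot_of_chart.png"):
    if not Path(f).exists():
        print(f"{f}: file not found (hypothetical example)")
        continue
    found = appears_to_have_c2pa_manifest(f)
    print(f"{f}: {'manifest bytes present' if found else 'no manifest found'}")
```

In practice, a publication pipeline would pair a check like this with the editorial audit trail sketched above, so the firm can still evidence compliance even when a circulating copy has lost its markings.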
DORA, PSD3, and Supervisory Scrutiny
The AI Act's transparency rules do not exist in a silo. They are a new, foundational layer that interconnects with and amplifies existing financial regulations.
1. DORA & Deepfakes: The Digital Operational Resilience Act (DORA) mandates that financial entities implement comprehensive ICT risk management and conduct regular, advanced "Threat-Led Penetration Testing" (TLPT). TLPT is an intelligence-driven simulation of real-world adversary tactics, techniques, and procedures (TTPs). In 2024 and 2025, AI-driven deepfakes, such as the $25 million fraud perpetrated via a deepfake C-suite video call, have become a confirmed TTP. Therefore, a firm's DORA compliance is no longer adequate if its TLPT only tests networks. To remain compliant, TLPT engagements must now be expanded to test human and biometric resilience against AI-driven impersonation attacks.
2. PSD3 & AI Fraud: The third Payment Services Directive (PSD3) strengthens consumer fraud protections. This creates a paradox: generative AI enables hyper-personalized, industrial-scale scams, forcing firms to deploy defensive AI (such as behavioral analysis and real-time transaction monitoring) to meet PSD3's robust fraud-prevention mandates. However, as noted, if this defensive AI also informs creditworthiness, it risks triggering the AI Act's "high-risk" classification. Firms must now carefully design and document their AI models to keep fraud detection (a DORA/PSD3 imperative) legally separate from credit scoring (an AI Act high-risk trigger); a sketch of how that separation can be enforced in code follows the supervisory list below.
3. Supervisory Enforcement: The AI Act will be enforced by the EU's existing, powerful sectoral supervisors.
BaFin (Germany) is already focused on unjustified discrimination and will use the AI Act's high-risk rules to audit AI credit-scoring models for fairness and bias.
ESMA (Securities) connects AI to MiFID II's "best interest of the client" obligation. It will use Article 50's transparency rules to scrutinize "opaque decision-making" in robo-advisors and demand comprehensive records.
EIOPA (Insurance) has already established principles of fairness, explainability, and human oversight. The AI Act codifies these principles into hard law, giving EIOPA a new tool to enforce its governance framework.
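Returning to the PSD3 point above: one practical way to keep fraud detection and credit scoring legally separate is to declare each model's purpose in a registry and block cross-purpose reuse of outputs at the pipeline level. The Python sketch below illustrates the idea; the `ModelPurpose` enum, the registry layout, and the model names are hypothetical, not drawn from the AI Act or any specific vendor tool.

```python
# Illustrative governance guard: tag every model with a declared purpose and
# refuse to let fraud-detection outputs feed a creditworthiness decision.
# Enum values, registry layout, and model names are assumptions for this sketch.
from enum import Enum


class ModelPurpose(Enum):
    FRAUD_DETECTION = "fraud_detection"  # PSD3/DORA imperative
    CREDIT_SCORING = "credit_scoring"    # AI Act high-risk use case


MODEL_REGISTRY = {
    "txn_anomaly_v3": ModelPurpose.FRAUD_DETECTION,
    "retail_credit_v1": ModelPurpose.CREDIT_SCORING,
}


def assert_not_cross_purpose(source_model: str, consuming_purpose: ModelPurpose) -> None:
    """Block outputs of a model registered for one purpose from feeding the other."""
    source_purpose = MODEL_REGISTRY[source_model]
    if source_purpose is not consuming_purpose:
        raise PermissionError(
            f"Output of '{source_model}' ({source_purpose.value}) may not feed a "
            f"{consuming_purpose.value} decision; use a separately documented, "
            f"conformity-assessed model instead."
        )


# Example: a credit pipeline tries to reuse the fraud model's risk score.
try:
    assert_not_cross_purpose("txn_anomaly_v3", ModelPurpose.CREDIT_SCORING)
except PermissionError as e:
    print(e)
```

A guard like this does not settle the legal classification by itself, but it gives compliance teams an auditable, technical enforcement point for the documented separation.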
Transparency by Design
The convergence of the AI Act with DORA, PSD3, and existing supervisory mandates creates a formidable regulatory challenge. The rules are complex, the technology is fragile, and the liabilities are significant. However, these regulations are all aimed at solving the single biggest barrier to AI adoption in finance: a profound lack of trust.
Firms that treat this as a back-office compliance task risk being defined by their opacity. But the institutions that embrace "transparency by design", using these new rules to proactively label, explain, and verify their AI-driven insights, will convert this burden into a powerful market differentiator. In an era of rampant deepfakes, the firm that can prove its communications are authentic will build a moat of trust that competitors cannot cross.