The European Commission has opened a public consultation to shape guidelines and a Code of Practice for EU AI Act transparency. The draft framework focuses on labels for AI‑generated or manipulated content, clear disclosures when people interact with chatbots, and transparency expectations for biometric categorisation and emotion recognition. The announcement is available on the Commission’s website.
EU AI Act transparency – GPAI transparency obligations already apply
Key provisions for general‑purpose AI (GPAI) took effect in August 2025. The Commission’s Guidelines for GPAI providers explain when a model counts as GPAI, when it rises to systemic‑risk status, and what documentation and disclosure are required. These expectations dovetail with EU AI Act transparency rules that users will notice—like chatbot notices and synthetic‑content labels.
Training‑data transparency: publish a public summary
To support copyright and accountability, GPAI providers must publish a public summary of training content. The official Template for the Public Summary of Training Content outlines how to describe data categories, major sources and domains, and processing steps. Teams that start drafting now will be ready when audits or customer reviews ask for evidence of data‑privacy stewardship.
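To make the drafting exercise concrete, here is a minimal sketch of how a team might structure a training‑content summary internally before mapping it onto the official template. All field names and values below are illustrative assumptions; the Commission template defines the actual required structure.

```python
import json

# Hypothetical internal record for a training-content summary.
# Field names are assumptions for illustration, not the official template fields.
summary = {
    "provider": "ExampleAI GmbH",        # hypothetical provider name
    "model": "example-gpai-1",           # hypothetical model identifier
    "data_categories": [
        {"category": "public web text", "share_estimate": "large"},
        {"category": "licensed datasets", "share_estimate": "moderate"},
        {"category": "user-submitted data", "share_estimate": "none"},
    ],
    "major_sources": ["web crawl", "licensed corpora"],
    "processing": ["deduplication", "PII filtering", "quality filtering"],
}

# Serialising early keeps the summary versionable alongside model releases.
print(json.dumps(summary, indent=2))
```

Keeping a machine‑readable draft like this in version control makes it easier to show reviewers how the published summary evolved with each model release.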
Labeling and chatbot disclosure: making content transparent
The consultation seeks practical input on deepfake labels for public‑facing media and chatbot disclosure in interfaces. The goal is to make AI involvement obvious without harming accessibility. Providers should test how labels appear on videos, images, and audio, and how notices behave in mobile and multi‑modal interfaces. These are core elements of EU AI Act transparency and tie into Responsible AI guidance.
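For product teams testing how notices behave across interfaces, a small sketch helps: the snippet below prefixes the first assistant turn with a visible AI disclosure. The notice wording and placement rule are assumptions for illustration, not official guidance.

```python
# Minimal sketch of surfacing an AI-disclosure notice in a chat interface.
# The wording and "first turn only" rule are assumptions, not the Act's text.
DISCLOSURE = "You are chatting with an AI system."

def render_message(text: str, sender: str, first_turn: bool) -> str:
    """Prefix the first assistant turn with a visible AI disclosure."""
    if sender == "assistant" and first_turn:
        return f"[{DISCLOSURE}]\n{text}"
    return text

print(render_message("How can I help?", "assistant", first_turn=True))
```

In practice the same logic would need testing on mobile layouts and voice or multi‑modal surfaces, where a text banner may not be the right rendering.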
EU AI Act transparency – Who enforces and what penalties apply
The European AI Office coordinates Member State authorities responsible for market surveillance and notifications. While timetables vary by measure, the direction is clear: keep documentation current, respond to information requests, and remediate risks quickly. Article 99 penalties can be significant, so invest in internal governance early. Legal primers and industry explainers will keep updating as the Code of Practice matures—watch our AI Update tag for changes.
EU AI Act transparency – What deployers should do now
1) Map systems against GPAI/non‑GPAI and risk tiers; record where EU AI Act transparency applies.
2) Plan labels and notices for public content and chatbots.
3) Prepare the training‑data summary if you are a GPAI provider, using the Commission template.
4) Collect technical documentation (model cards, evals, incidents).
5) Assign ownership for transparency updates across product, legal, and compliance.

For copyright, align disclosures with AI Copyright policies.
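The first step above, mapping systems to obligations, can be sketched as a simple inventory. The tier flags and obligation names here are simplified assumptions for illustration, not the Act's legal taxonomy.

```python
from dataclasses import dataclass

# Hypothetical inventory entry for the mapping step; the flags and the
# obligation strings are simplified assumptions, not legal categories.
@dataclass
class AISystem:
    name: str
    is_gpai: bool
    public_facing: bool

def transparency_obligations(s: AISystem) -> list[str]:
    """Return the (illustrative) transparency tasks a system triggers."""
    obligations = []
    if s.is_gpai:
        obligations.append("publish training-data summary")
    if s.public_facing:
        obligations += ["chatbot disclosure", "synthetic-content labels"]
    return obligations

inventory = [
    AISystem("support-chatbot", is_gpai=False, public_facing=True),
    AISystem("foundation-model", is_gpai=True, public_facing=False),
]
for s in inventory:
    print(s.name, "->", transparency_obligations(s))
```

Even a lightweight table like this gives legal and compliance owners a shared artefact to review, rather than reconstructing the mapping for each audit.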
How non‑EU companies can align
If your services reach EU users, the rules can still apply. U.S. teams can benchmark against documents from NIST’s Center for AI Standards and Innovation (CAISI, formerly the U.S. AI Safety Institute). These efforts increasingly cross‑reference EU guidance, making it easier to maintain one internal standard while serving multiple markets.
What to watch next
Expect drafts of labelling guidance and examples that product teams can reuse. Also watch procurement rules: public‑sector buyers will likely require EU AI Act transparency evidence during vendor reviews. As enforcement ramps, organizations that invested early in documentation and disclosure will move faster and face fewer surprises.