
How to Interpret the EU AI Act - And What It Means for You

Artificial Intelligence (AI) is no longer an experimental technology - it’s embedded in how we work, learn, coach, and lead.

With the EU Artificial Intelligence Act (EU AI Act) now in force, every organisation, from large corporates to solo consultants, needs to understand its responsibilities.


This article summarises what the law requires, how to assess your own compliance, and the five golden rules to ensure your use of AI stays on the right side of the law.


Ten Key Aspects of the EU AI Act

  1. Risk-Based Classification – The Act divides AI systems into four categories: unacceptable, high, limited, and minimal risk. The higher the risk, the more stringent the requirements.

  2. Banned Practices – Certain AI uses are prohibited outright: e.g., social scoring, manipulative behaviour modification, or indiscriminate biometric surveillance.

  3. High-Risk Obligations – AI systems used in recruitment, education, healthcare, and law enforcement face detailed conformity, risk management, and transparency obligations.

  4. General-Purpose AI (GPAI) Rules – Providers of foundation and other general-purpose models must meet transparency, technical documentation, and copyright-compliance obligations, with stricter requirements for models that pose systemic risk.

  5. Transparency Requirements – Even lower-risk AI systems must inform users when they interact with AI or when content is AI-generated.

  6. Oversight and Governance – The EU AI Office and national regulators will supervise compliance and maintain a register of high-risk systems.

  7. Territorial Scope – The law applies to any provider or user whose AI output is used in the EU, even if the provider is outside the EU.

  8. Alignment with Other Laws – The EU AI Act works alongside GDPR, consumer protection, and product safety regulations.

  9. Penalties – Non-compliance can result in fines up to €35 million or 7% of global turnover.

  10. Phased Implementation – The Act entered into force on 1 August 2024, with obligations taking effect in stages: prohibitions first, then GPAI rules, with most high-risk obligations following through 2026 and 2027.


Purpose of the Law

The EU AI Act’s core purpose is to build trust in AI while encouraging innovation. It seeks to:

  • Protect citizens’ fundamental rights and safety.

  • Encourage transparent, explainable, and responsible AI use.

  • Provide consistent rules across the EU single market.

  • Maintain Europe’s competitiveness in ethical AI.


In essence, the Act ensures that AI systems benefit people - not manipulate or exploit them.


Your Responsibilities as an AI User

Even if you don’t build AI systems, you have duties as a deployer or user. You must:

  • Understand the risk category of the AI tools you use.

  • Verify that your chosen tools are compliant and reputable.

  • Implement human oversight - never outsource full decision-making to AI.

  • Maintain documentation, logs, and incident reports.

  • Inform users or clients when AI is used to generate or analyse information.

  • Avoid using AI in ways that could be deceptive, discriminatory, or intrusive.

  • Align with GDPR when handling personal or sensitive data.


The principle is simple: if you benefit from AI, you share responsibility for how it is used.


How to Know You’re Not Falling Foul of the Law

To check your compliance posture:


  1. Classify your AI tools. Identify their intended purpose and risk level.

  2. Understand your role. Are you a provider, deployer, or end-user? Each role carries specific duties.

  3. Check documentation. Verify the AI provider’s transparency and data-handling commitments.

  4. Implement oversight. Always review AI outputs before using them externally.

  5. Avoid banned uses. Confirm your use cases are lawful.

  6. Integrate compliance with privacy. AI and GDPR obligations overlap significantly.

  7. Track updates. Laws and enforcement guidance will evolve - stay informed.

  8. Document everything. Evidence of due diligence matters if questions arise.


Responsibilities for Small-Business Owners, Coaches, and Professional Users

If you’re a leadership coach, consultant, or small-business owner using AI for writing, research, scheduling, or client work, the EU AI Act still touches you - even if indirectly.


Here’s how to stay safe and responsible:


a. You Are Accountable for What You Put In

Before entering any client data, notes, or materials into an AI tool, ensure you have explicit permission to use that information and, where required, a compliant data-processing agreement. If the data contains personal or confidential details, confirm that the AI provider’s privacy policy, security standards, and terms of use comply with EU law.


Tip: Be cautious with free-to-use AI tools. Information entered into such tools may be retained by the provider and used to train their models, putting confidential material beyond your control.

b. You Are Accountable for What Comes Out

You’re responsible for what you publish, send, or share that AI has generated. Review outputs for:

  • Accuracy and factual correctness

  • Bias or discrimination

  • Inappropriate or copyrighted material

  • Confidentiality breaches


If AI creates misinformation or biased advice and you share it with clients, you can be held liable for the outcome.


c. You Must Be Transparent with Clients

If you use AI to create materials, summaries, or training content, inform clients. Transparency builds trust and meets the Act’s disclosure requirements. A simple statement like “This report was generated with the assistance of AI and reviewed by [your name]” suffices.


d. You Should Choose AI Tools Responsibly

Select AI providers that demonstrate compliance - look for tools with published privacy policies, GDPR alignment, and clear data-handling documentation. If a tool seems “too good to be true,” it probably doesn’t meet EU standards.


e. You Should Keep a Record

Maintain a simple register of the AI tools you use, their purposes, and any client data involved. This can be as easy as a spreadsheet showing:

  • Tool name and provider

  • Use case (e.g., content drafting, data summarising)

  • Whether client data is used

  • Last compliance check date


This record demonstrates responsible AI governance - a hallmark of a trustworthy professional.
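If you prefer to keep this register programmatically rather than in a spreadsheet, the same fields can be maintained with a short script. Below is a minimal sketch in Python using only the standard library; the tool name, provider, and dates are purely illustrative, and the 180-day review interval is an assumption you should set to your own policy:

```python
import csv
from datetime import date

# Columns mirroring the register fields described above
FIELDS = ["tool", "provider", "use_case", "client_data_used", "last_compliance_check"]

# Illustrative entry only - the tool name, provider, and date are hypothetical
register = [
    {"tool": "ExampleWriter", "provider": "Example AI Ltd",
     "use_case": "content drafting", "client_data_used": "no",
     "last_compliance_check": "2025-01-15"},
]

def save_register(path, rows):
    """Write the AI-tool register to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

def overdue(rows, today=None, max_age_days=180):
    """Return the tools whose last compliance check is older than max_age_days."""
    today = today or date.today()
    return [r["tool"] for r in rows
            if (today - date.fromisoformat(r["last_compliance_check"])).days > max_age_days]

save_register("ai_register.csv", register)
print(overdue(register, today=date(2025, 12, 1)))  # tools due for a fresh compliance check
```

A periodic run of the `overdue` check turns the register from a static record into a prompt to revisit each tool’s terms, privacy policy, and risk classification on a regular cycle.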


Reflection

Ask yourself:

If a client or regulator asked how I ensure my AI use is ethical and compliant, could I show my process?


If not, start documenting today. Responsible AI is not only a legal duty - it’s a business advantage.


Five Golden Rules for Safe AI Use

  1. Map the risk before you start. Know what type of AI you’re using.

  2. Keep documentation. Record how you use AI and where data flows.

  3. Always review AI output. Nothing leaves your desk unchecked.

  4. Be transparent. Tell clients when AI assists your work.

  5. Update your knowledge. The AI landscape and regulations evolve quickly.


Conclusion

The EU AI Act is more than legislation; it’s a framework for trustworthy innovation. Whether you’re a global enterprise deploying predictive models or a leadership coach using AI to support clients, you are a steward of ethical, responsible, and transparent AI use.


By following the Act’s principles, and applying them through structured governance tools like the Pétanque NXT AI Compliance Gap Assessment Worksheet, you can harness AI’s power safely and confidently.



This blog was ideated and the framework developed by the Pétanque Compliance Team. The sections were co-created with the assistance of AI and the final version was workshopped to conclusion by the Pétanque NXT Compliance Team.
