The AI Act
Get ready for the AI Act: build on your existing GRC framework
Two key provisions of the AI Act are presently in effect:
- Organisations must not use prohibited AI systems.
- Employees must be trained in AI-related competencies.
By classifying your systems and using our AI policy templates, you are fully supported in meeting the applicable legal requirements.
The AI Act aims to ensure that businesses and organisations across the EU use artificial intelligence responsibly. The first step is identifying where and how your organisation uses AI.
This provides visibility and enables better risk management – which is currently the core compliance requirement.

Companies all over Europe are already building sustainable GRC programmes with Wired Relations
The challenge
Evolving systems and legislation
There are two main challenges in achieving AI compliance:
- AI systems evolve rapidly, even after they’ve been integrated into your IT landscape.
- Legislation is constantly evolving and being updated.
As a result, many organisations are trying to strike the right balance – staying compliant without over-implementing and wasting valuable time and resources.
That’s why we recommend building an overview and classifying your AI use. This ensures you're prepared as the rules gradually come into force and become more defined.
Common challenges right now
I have no idea where we’re using AI
How do I get a full overview of the types of AI we’re using?
How can I make sure we’re prepared and have the necessary insight when the rules take effect?
What risks are we taking on by using AI?
How to document AI compliance with Wired Relations
Get a complete overview of your systems
Start by building a comprehensive database of your IT systems and vendors. Many organisations have already done this as part of their data protection and information security work.

Identify AI usage within the organisation
Once you have a clear view of your IT landscape, the next step is identifying which systems involve AI. In Wired Relations, this is marked directly at the system level, helping you build a bottom-up overview.

Classify your AI systems
Next, determine the type of AI system in use. The AI Act defines four risk categories:
- Prohibited AI systems
- High-risk AI systems
- Limited-risk AI systems
- Minimal-risk AI systems
You can also choose alternative classifications to meet the requirements of other laws, regulations, or AI frameworks.
Classification is crucial as it dictates the compliance requirements and informs how you structure your compliance activities.
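For teams that also maintain their inventory in spreadsheets or scripts, the classification can be captured as structured data. The sketch below is a generic, hypothetical illustration in Python – it is not a Wired Relations feature, and the system names and fields are invented examples.

```python
# Generic illustration: a minimal AI system register with AI Act risk categories.
# System names, vendors, and fields are hypothetical examples, not real data.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


@dataclass
class AISystem:
    name: str            # the IT system or vendor product
    vendor: str
    uses_ai: bool
    risk: RiskCategory


register = [
    AISystem("CV screening tool", "Example Vendor A", True, RiskCategory.HIGH),
    AISystem("Support chatbot", "Example Vendor B", True, RiskCategory.LIMITED),
    AISystem("Spam filter", "Example Vendor C", True, RiskCategory.MINIMAL),
]

# Prohibited systems must not be used at all; high-risk systems carry the
# heaviest documentation and risk-management obligations.
blocked = [s.name for s in register if s.risk is RiskCategory.PROHIBITED]
needs_full_documentation = [s.name for s in register if s.risk is RiskCategory.HIGH]

print("Must not be used:", blocked)
print("Requires full documentation and risk management:", needs_full_documentation)
```

A register like this makes it easy to see at a glance which systems drive the heaviest compliance obligations – the same bottom-up overview described above.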

Prepare your employees to work with AI
Wired Relations provides ready-to-use policy templates for AI usage in your organisation. These can be easily distributed to employees, allowing you to ensure policies are read and understood.

Document your AI systems
As with GDPR and information security, AI systems and processes must be documented. Wired Relations allows you to document your AI systems and link to supporting documents – enabling you to demonstrate compliance to partners and authorities.

Manage AI risks
Ongoing risk management is key when working with AI. Wired Relations includes risk assessment and management tools specifically for AI systems.

Task management – life cycle management
AI systems evolve, so managing risk and ensuring compliance is a continuous process. With the Task Manager in Wired Relations, you can manage ongoing and recurring tasks throughout the year.

AI policies
Wired Relations provides a package of templates covering the most essential AI policies.

The benefits of having an AI overview
Gaining visibility into your AI usage is not just a legal requirement – it also creates real business value:
- Risk Management: AI introduces risk. You can only manage risks you’re aware of. The overview is the first step.
- Competitive Advantage: A good understanding of your AI landscape helps you leverage it more efficiently.
- Efficient Compliance Processes: Focus your compliance efforts where they add the most value.
- Improved Cybersecurity: AI can introduce vulnerabilities. These are easier to mitigate when you have full visibility.
What is the AI Act?
The AI Act is EU legislation that governs the use of artificial intelligence. Its aim is to foster trust in AI systems by prioritising safety and the protection of fundamental rights.
What’s already in force?
From 2 February 2025, the first provisions of the AI Act must be observed:
- Organisations must not use prohibited AI systems.
- Employees must be trained in AI-related competencies.
These requirements help ensure that organisations do not develop systems that pose a risk to safety, dignity, or fundamental rights – and that staff are equipped to use AI responsibly.
What’s coming later?
- From 2 August 2025, the EU Commission will supervise large general-purpose AI models (e.g., ChatGPT, Claude, Gemini).
- From 2 August 2026, limited-risk AI systems must meet transparency requirements (e.g., disclosure of AI-generated content).
- From 2 August 2026 (and from 2 August 2027 for high-risk AI embedded in products covered by existing EU product legislation), high-risk AI systems must comply with strict documentation, risk management, monitoring, and accountability obligations.
We closely monitor updates to the legislation and continuously assess how Wired Relations can support new requirements.
What is AI classification?
Not all AI systems are alike – the AI Act categorises them into four risk levels:
- Prohibited AI systems (e.g., AI used for mass surveillance)
- High-risk AI systems (e.g., AI used in critical infrastructure, education, employment, and essential services)
- Limited-risk AI systems (e.g., chatbots, AI-generated content – text, audio, or video)
- Minimal- or no-risk AI systems (e.g., recommendation engines in streaming platforms, autocorrect, navigation tools)
Why doesn’t Wired Relations offer a separate AI product or module?
AI documentation is a natural extension of existing GDPR and information security work. A separate module isn’t required.
Moreover, only a few AI Act requirements are currently in effect, and much of the regulation is still being developed. The provisions already in force are:
- No use of prohibited AI systems
- AI-related staff training
What should we do now, since the AI Act is not fully in force?
For now, focus on:
- Determining whether a system involves AI, and to what extent
- Providing an AI usage policy for employees
Both are supported in Wired Relations.
We are monitoring developments closely and will adapt the product accordingly.