The AI Act

Get ready for the AI Act: build on your existing GRC framework

These key provisions of the AI Act are presently in effect:

  • Organisations must not use prohibited AI systems.
  • Employees must be trained in AI-related competencies.
  • The EU Commission will supervise large general-purpose AI models, such as ChatGPT, Claude, and Gemini.

By classifying your systems and using our AI policy templates, you are well equipped to meet the applicable legal requirements.

The AI Act aims to ensure that businesses and organisations across the EU use artificial intelligence responsibly. The first step is identifying where and how your organisation uses AI.

This provides visibility and enables better risk management – which is currently the core compliance requirement.

Wired Relations dashboard showing all systems with filter options, system status, labels, owner, and responsible person.

Companies all over Europe already build sustainable GRC programmes with Wired Relations

The challenge

Evolving systems and legislation

There are two main challenges in achieving AI compliance:

  • AI systems evolve rapidly, even after they’ve been integrated into your IT landscape.

  • Legislation is constantly evolving and being updated.

As a result, many organisations are trying to strike the right balance – staying compliant without over-implementing and wasting valuable time and resources.

That’s why we recommend building an overview and classifying your AI use. This ensures you're prepared as the rules gradually come into force and become more defined.

Common challenges right now

I have no idea where we’re using AI

How do I get a full overview of the types of AI we’re using?

How can I make sure we’re prepared and have the necessary insight when the rules take effect?

What risks are we taking on by using AI?

How to document AI compliance with Wired Relations

Get a complete overview of your systems

Start by building a comprehensive database of your IT systems and vendors. Many organisations have already done this as part of their data protection and information security work.

Wired Relations dashboard displaying all registered systems with filtering options.
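As a rough sketch, the system register described above can be thought of as a list of simple records. The field names here (vendor, owner, uses_ai, labels) are illustrative assumptions for the example, not Wired Relations' actual data model:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a system inventory record - the kind of
# bottom-up register the text describes. All field names are
# assumptions chosen for this example.
@dataclass
class SystemRecord:
    name: str
    vendor: str
    owner: str                              # person responsible for the system
    uses_ai: bool = False                   # marked at the system level
    labels: list[str] = field(default_factory=list)

# Build a small inventory and filter it, as a dashboard might.
inventory = [
    SystemRecord("HR portal", "Acme HR", "Jane", uses_ai=False),
    SystemRecord("Support chatbot", "BotCo", "Ali", uses_ai=True),
]
ai_systems = [s.name for s in inventory if s.uses_ai]
print(ai_systems)
```

Once every system is a record like this, "where are we using AI?" becomes a simple filter rather than a guessing game.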

Identify AI usage within the organisation

Once you have a clear view of your IT landscape, the next step is identifying which systems involve AI. In Wired Relations, this is marked directly at the system level, helping you build a bottom-up overview.

Illustration of a magnifying glass and a warning symbol representing identification of AI usage within the organisation.

Classify your AI systems

Next, determine the type of AI system in use. The AI Act defines four risk categories:

  • Prohibited AI systems
  • High-risk AI systems
  • Limited-risk AI systems
  • Minimal-risk AI systems

You can also choose alternative classifications to meet the requirements of other laws, regulations, or AI frameworks.

Classification is crucial as it dictates the compliance requirements and informs how you structure your compliance activities.
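To make that concrete, here is a minimal sketch of the four risk tiers and how a classification could map to compliance activities. The tier-to-requirement mapping is deliberately simplified and purely illustrative, not legal guidance:

```python
from enum import Enum

# The four AI Act risk tiers named in the text, as a simple enum.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Simplified, illustrative mapping from tier to compliance activities.
# Real classification requires a legal assessment of each system.
REQUIREMENTS = {
    RiskTier.PROHIBITED: ["must not be used"],
    RiskTier.HIGH: ["documentation", "risk management", "monitoring"],
    RiskTier.LIMITED: ["transparency (e.g. disclose AI-generated content)"],
    RiskTier.MINIMAL: [],
}

def obligations(tier: RiskTier) -> list[str]:
    """Look up the compliance activities a tier dictates."""
    return REQUIREMENTS[tier]

print(obligations(RiskTier.LIMITED))
```

The point of the sketch is the structure: once each system carries a tier, the tier dictates which compliance activities apply.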

A card in Wired Relations showing the type of AI system in use.

Prepare your employees to work with AI

In Wired Relations, you get access to templates for policies on the use of AI in your organisation. Adapt the template directly in the system so it fits your exact needs. Once the policy is ready, you can easily share it with your colleagues and track who has received and acknowledged it.

Illustration of balanced scale and checkmark symbolising compliance with AI policies

Document your AI systems

As with GDPR and information security, AI systems and processes must be documented. Wired Relations allows you to document your AI systems and link to supporting documents – enabling you to demonstrate compliance to partners and authorities.

System overview in Wired Relations with AI-marked systems clearly identified.

Task management – life cycle management

AI systems evolve, so managing risk and ensuring compliance is a continuous process. With the Task Manager in Wired Relations, you can manage ongoing and recurring tasks throughout the year.

Table visualising annual compliance tasks distributed across the calendar year.
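As an illustration of recurring-task scheduling, the sketch below spreads a quarterly review across a calendar year. The task name and cadence are assumptions for the example, not Wired Relations functionality:

```python
from datetime import date

# Distribute one recurring compliance task across a calendar year,
# the way a task manager might. Cadence and task name are illustrative.
def schedule(task: str, year: int, every_n_months: int) -> list[tuple[str, date]]:
    """Return (task, due date) pairs for one calendar year."""
    return [(task, date(year, month, 1))
            for month in range(1, 13, every_n_months)]

plan = schedule("Review AI system classification", 2025, every_n_months=3)
for task, due in plan:
    print(due.isoformat(), task)
```

A quarterly cadence like this yields four dated tasks per year, which is the kind of annual distribution the table above visualises.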

The benefits of having an AI overview

Gaining visibility into your AI usage is not just a legal requirement – it also creates real business value:

  • Risk Management: AI introduces risk. You can only manage risks you’re aware of. The overview is the first step.
  • Competitive Advantage: A good understanding of your AI landscape helps you leverage it more efficiently.
  • Efficient Compliance Processes: Focus your compliance efforts where they add the most value.
  • Improved Cybersecurity: AI can introduce vulnerabilities. These are easier to mitigate when you have full visibility.

Frequently Asked Questions about the AI Act and Wired Relations

What is the AI Act?

The AI Act is EU legislation that governs the use of artificial intelligence. Its aim is to foster trust in AI systems by prioritising safety and the protection of fundamental rights.

What’s already in force?

The following provisions of the AI Act are already in force and must be observed:

  • Organisations must not use prohibited AI systems.
  • Employees must be trained in AI competencies.
  • The EU Commission will supervise large general-purpose AI models (e.g., ChatGPT, Claude, Gemini).

These requirements help ensure that organisations do not develop systems that pose a risk to safety, dignity, or fundamental rights – and that staff are equipped to use AI responsibly.

What’s coming later?
  • From 2 August 2026, limited-risk AI systems must meet transparency requirements (e.g., disclosure of AI-generated content).
  • From 2 August 2026 and 2 August 2027 respectively, high-risk AI systems must comply with strict documentation, risk management, monitoring, and accountability obligations.

We closely monitor updates to the legislation and continuously assess how Wired Relations can support new requirements.

What is AI classification?

Not all AI systems are alike – the AI Act categorises them into four risk levels:

  • Prohibited AI systems (e.g., AI used for mass surveillance)
  • High-risk AI systems (e.g., AI used in critical infrastructure, education, employment, and essential services)
  • Limited-risk AI systems (e.g., chatbots, AI-generated content – text, audio, or video)
  • Minimal- or no-risk AI systems (e.g., recommendation engines in streaming platforms, autocorrect, navigation tools)

What should we do now, since the AI Act is not fully in force?

For now, focus on:

  • Determining whether a system involves AI and to what extent
  • Providing an AI usage policy for employees

Both steps are supported in Wired Relations.

We are monitoring developments closely and will adapt the product accordingly.