8 questions and answers about the AI Act

Artificial intelligence is on everyone’s lips, and keeping up with developments can feel like trying to board a high-speed train as it races past. Do you also lack an overview of how to work compliantly with AI? You’re not alone. But it doesn’t have to be difficult: build on your existing compliance efforts and meet the requirements of the AI Act.

Published: 
May 7, 2025
Gry Josefine Løvgren
Content Specialist


What is the AI Act?

The AI Act is a European regulation on the use of artificial intelligence, and the world's first comprehensive legislation in the field. Its purpose is to create trust in AI systems, with a focus on safety and respect for fundamental rights.

The AI Act entered into force in August 2024 and, as an EU regulation, applies directly in Denmark. Its requirements take effect in stages from 2025 to 2027.

What has come into force so far?

On February 2, 2025, the following requirements came into effect:

  • Organisations may not use prohibited AI systems
  • Employees must be trained in AI literacy

These rules are meant to ensure that companies and authorities don’t develop systems that pose threats to safety, citizen rights, or dignity, and that employees are properly equipped to use AI.

What comes into force later?

  • From August 2, 2025: The EU Commission supervises providers of general-purpose AI models, such as those behind ChatGPT, Claude, and Gemini.
  • From August 2, 2026: For AI systems with limited risk, specific transparency requirements (e.g., disclosure of AI-generated content) must be followed.
  • From August 2, 2026 (and August 2, 2027 for certain product-related systems): For high-risk AI systems, strict requirements apply for documentation, risk management, monitoring, and accountability.

How are AI system risks classified?

Not all AI systems are created equal, which is why you need to determine the type of AI you’re dealing with. The AI Act defines four risk categories:

  • Prohibited AI systems (e.g., AI used for mass surveillance)
  • High-risk AI systems (e.g., AI used in critical infrastructure, education, hiring, or essential public/private services)
  • Limited-risk AI systems (e.g., chatbots, AI-generated text, audio, and video)
  • Minimal or no-risk AI systems (e.g., recommendation engines on streaming platforms, autocorrect, navigation)

How do you work with the AI Act in Wired Relations? (And in general…)

Document your AI systems
You already know this from your data protection and information security work: systems and processes need to be documented. The same applies to AI systems. This is easily done in Wired Relations, where you can use labels to tag each AI system and link to more detailed documentation.

Task management – annual cycle
AI systems evolve, so managing risks and ensuring compliance is an ongoing task. In the Wired Relations Task Manager, you can manage your tasks, including recurring ones, and thereby your annual cycle.

AI policies
With Wired Relations, you get relevant templates for the most important AI policies. You can also send a policy on the use of artificial intelligence to employees to ensure that they are well prepared, and you receive a confirmation when the policy has been read.

How do you use the Danish Data Protection Agency’s DPIA for AI in Wired Relations?

The Data Protection Impact Assessment (DPIA) in Wired Relations is based on the Danish Data Protection Agency’s generic DPIA template, which differs only slightly from the agency’s template for impact assessments of AI.

In the impact assessment for AI, legality must be assessed separately for each phase (development, testing, and operation), and specific AI threats must be addressed.

Both can be done in the generic impact assessment used in Wired Relations.

What is ISO 42001?

ISO/IEC 42001 is an international standard specifying requirements for establishing, implementing, maintaining, and continuously improving an AI Management System (AIMS). It targets organisations that deliver or use AI-based products/services, ensuring responsible AI development and usage.

The standard addresses AI-specific challenges like ethics, transparency, and continuous learning. It provides a structured way for organisations to manage AI-related risks and opportunities, balancing innovation and governance. It is a strong foundation for AI compliance.

Can you work with ISO 42001 in Wired Relations?

Yes, you can. Just like other standards, you can document how your organisation meets the requirements. It’s not mandatory to use ISO 42001 for AI Act compliance — but it’s a helpful option.
