Artificial intelligence is on everyone’s lips, and keeping up with developments can feel like trying to jump onto a Japanese high-speed train as it rushes past. Do you also lack an overview of how to work with AI in a compliant way? You’re not alone. But it doesn’t have to be difficult. Build on your existing compliance efforts and meet the requirements of the AI Act.
The AI Act is a European regulation governing the use of artificial intelligence, and the world’s first comprehensive legislation in the field. Its purpose is to create trust in AI systems, with a focus on safety and respect for fundamental rights.
The AI Act entered into force in August 2024 and, as an EU regulation, applies directly in Denmark. Its requirements take effect in stages through 2027.
On February 2, 2025, the following requirements will come into effect:
- The ban on AI practices that pose an unacceptable risk.
- The requirement that organisations ensure a sufficient level of AI literacy among employees who work with AI.
These rules are meant to ensure that companies and authorities don’t develop systems that pose threats to safety, citizens’ rights, or dignity, and that employees are properly equipped to use AI.
Not all AI systems are created equal, which is why you need to determine the type of AI you’re dealing with. The AI Act defines four risk categories:
- Unacceptable risk – AI practices that are prohibited outright.
- High risk – AI systems subject to strict requirements, for example around documentation, risk management and human oversight.
- Limited risk – AI systems subject to transparency obligations, such as informing users that they are interacting with AI.
- Minimal risk – AI systems with no specific obligations under the AI Act.
Document your AI systems
You already know this from your data protection and information security work: systems and processes need to be documented. The same applies to AI systems. This is easily done in Wired Relations, where you can use labels to tag an AI system and link to more detailed documentation.
Task management – annual cycle
AI systems evolve over time, so managing risks and ensuring compliance is an ongoing task. In the Wired Relations Task Manager you can manage your tasks (including recurring ones) and thereby your annual cycle.
AI policies
With Wired Relations, you get relevant templates for the most important AI policies. You can also send a policy on the use of artificial intelligence to employees to ensure that they are well prepared, and you will receive a confirmation when the policy has been read.
The Data Protection Impact Assessment (DPIA) in Wired Relations is based on the Danish Data Protection Agency’s generic DPIA template. There are only a few differences between this generic template and the template for impact assessments of AI.
In the impact assessment for AI, legality must be assessed for each phase (development, testing and operation), and specific AI threats must be addressed.
Both can be done in the generic impact assessment used in Wired Relations.
ISO/IEC 42001 is an international standard specifying requirements for establishing, implementing, maintaining, and continuously improving an AI Management System (AIMS). It targets organisations that deliver or use AI-based products/services, ensuring responsible AI development and usage.
The standard addresses AI-specific challenges like ethics, transparency, and continuous learning. It provides a structured way for organisations to manage AI-related risks and opportunities, balancing innovation and governance. It is a strong foundation for AI compliance.
Yes, you can. Just like with other standards, you can document how your organisation meets the ISO 42001 requirements. It’s not mandatory to use ISO 42001 for AI Act compliance, but it is a helpful option.
Sign up for our monthly newsletter Sustainable compliance.