– Sofie Bøttiger Hansen, Senior Business Consultant, NNIT
The EU's regulation on artificial intelligence, the AI Act, sets a regulatory framework for a rapidly growing technological area. But what do the new rules mean in practice? Here you will find answers on how to prepare for the AI Act and gain better control of your overall approach to regulatory cyber compliance.
With the EU's recently adopted regulation on artificial intelligence (the Artificial Intelligence Act), a framework has been established for the use and development of artificial intelligence (AI) in Europe. The need for increased control and regulation has been hotly debated, but now that the legislation is a reality, it can be difficult to assess whether and how your company or organization is covered by the rules.
In this article, we guide you through the most important questions about the AI Act, so that you can continue to use AI in a safe and compliant way.
What is the AI Act?
The AI Act regulates the development, marketing, and use of AI in the EU. Its purpose is to ensure that AI is developed with a human-centric approach and that its use aligns with European values and ethics, including democracy, the rule of law, and fundamental rights.
The regulation takes a risk-based approach and aims to support the deployment of safe and reliable AI solutions without hampering efficiency or innovation.
In practice, this means that AI solutions are divided into four risk categories: