By Nicholas H. Karlsen and Sofie Bøttiger Conlan, NNIT Cybersecurity and Compliance
AI technology comes with great potential, but also considerable risk. Some say the possibilities are endless; at the very least, AI is as revolutionary in our time as electricity and the steam engine were in the 19th century. That is why we should embrace regulation rather than wait and see what happens when risks become reality.
Imagine implementing a powerful new technology without quite knowing its potential or effects, and without setting any boundaries for its use. Would you get behind the wheel of a car with no brakes and drive it in a province with no traffic regulations?
The EU’s AI Act is a first, responsible stab at setting some ground rules to guide current and future implementation of AI systems. And it is pre-emptive rather than reactive, which is quite impressive considering 27 nations had to arrive at a consensus in a short time span. Put simply, we see it as sensible versus reckless implementation.
Harmonizing implementation across the EU, as well as taking the responsible lead on acceptable use of AI technology, makes a lot of sense, and here is why:
Clear Guidelines for Acceptable Use
The AI Act, while far from perfect, sets limits on the use of AI. With its four risk categories (unacceptable, high, limited, and minimal risk), the AI Act outlines what constitutes unacceptable and high-risk behavior.
In that sense, the AI Act provides a map for navigating between right and wrong when entering AI territory, and it takes a stand on the more dubious potential uses of AI to violate human rights, including physical and emotional surveillance and manipulation. Any such use is highly regulated and comes with a whole set of requirements for detailed and documented risk reduction, data validation, activity logging, information procedures, human oversight, and more.
This allows companies and organizations to identify the relevant risk categories and design their systems to stay within the boundaries of acceptable use.
Protection of Well-established Human and Legal Rights
In many ways, the AI Act takes its cue from existing European law, such as the human rights conventions and the GDPR. In that sense, the AI Act is neither radical nor surprising: it imposes a set of regulations that to a large degree were already in place in the physical world.
The European consensus on and protection of human rights, data privacy, non-discriminatory environments and freedom of speech logically extends to AI system use with the AI Act.
Compliance Ensures Responsible Choices
While some may see the AI Act as more red tape and as hampering AI innovation, we see it as a prudent approach to implementation of new technology – an example to follow for other territories that have yet to introduce regulation.
When compared to established conventions and norms across the EU, we do not see the risk categories as controversial or restrictive. Yes, the AI Act requires companies and organizations to be mindful and take necessary steps to ensure compliance, but compliance also ensures responsible choices. Unbridled use of AI should not be the norm.
An AI Management System Based on ISO/IEC 42001 is a Good Start
We acknowledge that small and medium-sized enterprises in particular may find it difficult to get started with implementing trustworthy AI, even with the AI Act in hand. Like any legal text, it takes practice to read and apply the rules correctly. At the same time, we feel confident that this aspect will only improve with experience, and as updates are made in the years to come.
A good place to start is to consider the new ISO/IEC 42001 standard, which specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). An AIMS based on ISO/IEC 42001 ensures accountability and responsibility, improves decision-making on AI, supports continuous learning in a rapidly evolving field, and cements commitment to the design, development, and deployment of trustworthy AI.
Proceeding without Risk Management is not an Option
Finally, and in addition to the AI Act, we recommend you do not abandon all common sense and human control. AI systems may simulate human intelligence and the ability to draw logical conclusions (extrapolation), but in reality, AI systems only know what they know or have been trained to know (interpolation) – and what they know may very well be false and/or biased.
We will leave you with a link to Tech Policy Press' article on AI risk management centered on the case of Enron, and on what happens when potential and risk are not properly balanced, and company values and intentions are not practiced and upheld (with the help of official regulation): What today's AI companies can learn from the fall of Enron.
Our point is: embrace the AI Act and the ISO/IEC 42001 standard as tools to ensure responsible choices, and to save you having to develop a comprehensive AI risk management system yourself. Proceeding without is not a responsible option.
About the EU's AI Act:
The EU's AI Act was passed on June 13, 2024. Enforcement will begin in full by August 2026, by which time all actors must ensure and demonstrate compliance. Link to AI Act.