AI Act: What the New Law Will Change

The EU's AI Act has been finalised. This means that companies in the member states will soon have to fulfil new requirements regarding the provision and use of AI systems. In this article, we shed light on the requirements of the AI Act, who these regulations affect and what measures companies in particular should take.

The Beginnings Of the AI Act

With the advance of artificial intelligence, the need has arisen to regulate the provision and use of these new technologies across Europe. The new AI law aims to strike a balance. On the one hand, it seeks to ensure a sufficient degree of freedom so that the potential of AI can be used profitably. On the other hand, the regulations are intended to safeguard ethical principles so that the data of individuals is handled responsibly.

The European Commission submitted the first legislative proposal in April 2021. In December 2023, a provisional agreement was reached between the European Commission, the Council of the European Union and the European Parliament. The final text was adopted in March 2024. It is therefore clear that companies operating in the EU must prepare for extensive changes if they offer AI products, systems or services.

In this way, the AI Act is intended to promote artificial intelligence and its innovations while counteracting the associated risks. After all, new technologies also bring new dangers that need to be managed properly. The overarching goal is therefore a harmonised EU internal market for AI in which the rights of individuals are respected and investment in AI technologies is encouraged.


When Do the New Requirements Apply?

The AI Act entered into force in August 2024, and its rules apply in stages: the prohibitions on unacceptable practices apply after six months, the transparency and governance rules for general-purpose AI after twelve months, and most other obligations after 24 months, i.e. from August 2026. Only the requirements for certain high-risk systems have a transition period of 36 months.

But beware: if an AI system proves to be prohibited under the new legal regulations, it must be taken out of service no later than six months after the AI Act came into force.

Objectives Of the AI Act

The AI Act pursues various approaches to achieve a harmonised internal AI market for the EU.

Driving Innovation Forward

The EU wants to promote the development of new AI systems. Clear guidelines are meant to give companies direction without restricting them when they want to try out new things. EU-wide harmonised standards should ensure greater transparency and fair competition. This creates legal certainty for all parties involved while protecting fundamental rights.

Finding Best Practices

The classification of AI systems ensures that the risks associated with artificial intelligence become more comprehensible in the long term. The requirements for high-risk systems in terms of risk management, data governance, documentation, etc. also help to identify best practices for the use of AI.

Dealing With Risks

An important aim of the AI Act is to find a way of dealing with the risks posed by AI. This includes prohibiting unacceptable risks and avoiding violations of fundamental rights. Part of this plan is to prevent manipulative and unethical techniques that influence a person's behaviour in such a way that they harm themselves or others. Exploiting the vulnerabilities of people, for example due to age, disability or a specific social or economic situation, is also to be prevented.

For all of this to work, legal certainty is needed for everyone involved. The law aims to strengthen ethics and safety requirements in order to achieve this goal.

Tip:

New technologies always harbour risks. In the article AI and Its Dangers: The Potential For Abuse of Artificial Intelligence, we look at the dangers of AI and consider the possible consequences.

Who Is Affected?

The definition of artificial intelligence in the law is deliberately broad in order to cover as many different technologies and systems as possible. According to Article 3 of the AI Act, an AI system is a

“…machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Companies in particular are therefore affected by the AI Act to a large extent and may have to make far-reaching changes.

The new requirements also specifically affect providers who place AI systems on the market or put them into service in the EU, regardless of their location. This applies even to companies that are not based in the EU, provided that the output of these systems is used or intended for use in the EU (Article 2 of the AI Act). All product manufacturers who place products containing AI systems on the market or put them into service within the EU under their own name or brand are affected as well. And of course, the law is aimed not only at providers, but also at users of AI products within the EU.

In future, compliance with the regulations will be monitored by national supervisory authorities - centrally coordinated by the European AI Office.

The Most Important Points Of the AI Act

The AI Act defines artificial intelligence quite broadly, as AI encompasses a very wide range of technologies. The legal text therefore covers both simple technologies designed for a specific use case and more complex applications - such as deep learning or generative AI. This significantly expands the scope of the law. Exceptions apply, for example, to AI systems for military purposes and, to a limited extent, to systems that are available under a free, open-source licence.

In addition, the AI Act establishes a comprehensive risk framework with requirements for the different risk levels. AI systems are classified according to risk, depending on what data is collected and what actions are taken on the basis of the system's outputs. Depending on the respective risk category, the law imposes various obligations.

Risk Levels

These are the different risk levels in detail:

Unacceptable Risk:
AI practices that pose unacceptable risks, such as manipulative techniques or social scoring, are prohibited outright.

High Risk:
Systems in sensitive areas - for example critical infrastructure, employment or education - are permitted but subject to strict requirements.

Limited Risk:
Systems such as chatbots are subject to transparency obligations.

Minimal Risk:
All other systems, such as spam filters, remain largely unregulated.

This Applies To High-Risk AI Systems

In the case of high-risk systems, for example in the area of critical infrastructure, the requirements for users and providers are correspondingly strict. According to Chapter III, Section 2 of the AI Act, these include, among other things:

Risk Management:
If a high-risk system is provided or used, potential risks to the health, safety and fundamental rights of those affected must be identified. These risks must not only be assessed; appropriate measures must also be taken to minimise them.

Data Governance:
Data sets used must be relevant, representative, error-free and as complete as possible.

Technical Documentation:
This contains important information on the systems and processes used. It includes a description of the AI system used, its individual components and its development process. Additionally, it documents monitoring, control and functionality.

Recording Obligations:
A high-risk system must have a logging function to document events over the system's life cycle. Particularly important here are relevant changes within the system and situations that may present risks (see the sketch after this list).

Transparency and Provision Of Information For Operators:
High-risk systems must be designed in such a way that operators can use them appropriately. The systems are provided with operating instructions that contain all relevant information in a comprehensible form.

Human Supervision:
Natural persons must be able to adequately supervise AI systems. This includes interpreting the output, understanding the system's capabilities and limitations and, if necessary, interrupting its operation.

Accuracy, Robustness, Cyber Security:
High-risk systems must be as resistant as possible to faults and malfunctions, which can be achieved through technical redundancy. In addition, detailed plans must be available on how to act in the event of a disruption. Unauthorised attempts by third parties to gain access to the system must also be prevented.
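
What such a recording function looks like in practice is not prescribed by the law. Purely as an illustration, the sketch below shows one way lifecycle events could be logged in Python; the function name, file name and JSON-lines format are our own assumptions, not requirements of the AI Act.

```python
# Illustrative sketch: append-only event log for a high-risk AI system.
# Names and format are assumptions for this example, not legal requirements.
import json
from datetime import datetime, timezone

EVENTS_FILE = "ai_system_events.jsonl"  # hypothetical append-only log file

def log_event(event_type: str, detail: str, risk_flag: bool = False) -> None:
    """Record a timestamped event from the system's life cycle."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "substantial_modification", "incident"
        "detail": detail,
        "risk_flag": risk_flag,    # marks situations that may present risks
    }
    with open(EVENTS_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example entries: a relevant change to the system and a flagged risk situation
log_event("substantial_modification", "Model retrained on new applicant data")
log_event("incident", "Unusually low output confidence detected", risk_flag=True)
```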

Users have the right to lodge complaints and can request explanations of decisions made on the basis of high-risk AI systems that may restrict their rights. At the same time, they must operate their AI system within the framework of the law and in accordance with the provider's guidelines. This entails obligations in terms of purpose, use cases, data processing, supervision and monitoring.

Modern Insurance Coverage - Even For New Technologies

Artificial intelligence will continue to change our working world - across many different industries. The new regulations of the AI Act are just another reason to get to grips with this topic and make your company fit for the future. After all, regardless of whether you are already using existing systems, designing your own or even offering AI systems yourself, there are always risks involved and violations of the new law can be costly.

If you cause damage to your customers or other third parties that is based on the use of artificial intelligence, your company is protected with Professional Indemnity Insurance through exali. Thanks to open coverage, this even includes new technologies such as AI. Because we know that your business is changing and we want to offer you coverage that takes these developments into account.

Would you like to know more? Then give us a call! From Monday to Friday, 9:00 am to 6:00 pm (CET), you can reach us on +49 (0) 821 80 99 46 - 0 or via our contact form.

This Applies To General AI Systems

The current version of the law has been amended to include general-purpose AI systems (GPAI) and generative models. These models are suitable for many different tasks and can be used in both general and high-risk systems. They therefore form the basis of many AI systems in the EU and are extremely relevant.

As a result, GPAI systems must fulfil a number of transparency requirements in accordance with Article 50. This is intended to minimise systemic risks if these models are used across the board. These transparency requirements include:

- informing people that they are interacting with an AI system,
- marking AI-generated audio, image, video and text content as artificially generated or manipulated,
- disclosing the use of emotion recognition or biometric categorisation systems,
- labelling deepfakes as such.

Please note: The new law does not affect existing regulations on personal data, product safety, consumer protection, etc. Companies are therefore not released from their existing legal obligations.

Why Companies Should Be Concerned With It

It is important that companies obtain a comprehensive overview of the AI systems they use or have developed themselves. They must then classify these systems according to the risk levels defined in the law. If a risk is identified, they must assess how the associated legal requirements affect the company. All of this should be done as quickly as possible, so that an appropriate way of dealing with these effects can be found and the objectives of the AI Act can be met.
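
As a starting point, such an overview can be as simple as an internal inventory that records each system together with its risk level. The following Python sketch is purely illustrative: the categories paraphrase the Act's risk tiers and the example entries are invented.

```python
# Illustrative sketch: an internal inventory of AI systems by risk level.
# The categories paraphrase the AI Act's risk tiers; entries are invented.
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    risk_level: RiskLevel

inventory = [
    AISystemEntry("cv-screening", "pre-selecting job applicants", RiskLevel.HIGH),
    AISystemEntry("support-chatbot", "answering customer queries", RiskLevel.LIMITED),
    AISystemEntry("spam-filter", "filtering inbound e-mail", RiskLevel.MINIMAL),
]

# Systems that trigger the strict requirements for high-risk systems:
high_risk = [entry.name for entry in inventory if entry.risk_level is RiskLevel.HIGH]
print(high_risk)  # ['cv-screening']
```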

What Does This Mean For Companies?

The AI Act is not only relevant for managers with leadership responsibilities in areas such as compliance or data governance. Anyone involved in the development, implementation and use of AI technologies must also deal with the new requirements, and board and committee members may be affected as well. In concrete terms, this means:

Build Up a Wealth Of Knowledge:
Companies need to build awareness and knowledge across all hierarchical levels about which AI technologies are being used in the company and what risks are involved. They also need plans for dealing with these risks correctly, including an impact assessment for the rights of those affected.

Tip:

The use of AI tools can open up completely new opportunities for your business - if you approach the implementation of the new tool correctly. Our article will help you with this: AI In Business: How To Use Artificial Intelligence As a Freelancer.

When classifying an AI system on the basis of the legally defined categories, bear in mind that the lists of prohibited and high-risk systems can be expanded at any time. This must be taken into account in the assessment, and measures must be taken so that the company is not caught unprepared if the criteria change.

Secure Data Management:
As part of this, companies should scrutinise their data management practices. This also includes implementing processes to ensure the quality, security and protection of data.

Ensure Compliance:
It is a good idea to set up a dedicated ethics team whose members monitor whether the AI practices in a company are legally and ethically acceptable. The team also guides other departments in acting correctly in this area.

Involve All Stakeholders:
This requires a holistic approach that corresponds as closely as possible to the broad definition of AI in the law. Everyone involved, from management to employees, must work closely together to ensure that the company operates in compliance with the law. This enables the implementation of standards and practices for dealing with AI systems without losing sight of the law. These standards should cover all areas from development to implementation and maintenance. The best approach for all parties involved is ‘security by design’ - in other words, everyone takes the issue of security into account when developing AI systems.

Establish Internal Regulations:
Companies are well advised to define clear internal rules for dealing with artificial intelligence - even if it is ‘only’ a matter of experimenting a little with the possibilities of ChatGPT and the like. This means clarifying the handling of sensitive information, the use of AI-generated work results, and the competences and responsibilities of individuals. The applicable documentation requirements must also be observed at all times.

Create Expertise:
Training courses not only ensure competent use of AI tools. They also promote an open, ethical culture and stimulate innovation. This allows you to balance technological progress and social responsibility.

Seek Dialogue:
As in so many areas, the same applies here: Communication is everything. Companies must therefore seek dialogue with their stakeholders. This means extensive discussions with customers and business partners on the topic of artificial intelligence. Those responsible should communicate how they intend to comply with legal regulations and address the expectations of the individual parties.

Conclude Modern Contracts:
If companies purchase AI solutions from third parties, reliable contracts are essential. These should take the specific circumstances of AI into account; outdated standard purchase contracts have no place here.

Trust Through Transparency:
Providers of AI systems should focus on transparency in their dealings with users. This creates acceptance and makes it easier to implement the system together with clients.

Regular Audits and Updates:
Systems must be regularly updated and audited to ensure compliance and consistently provide transparency.

What Are the Penalties For Violations?

According to Article 99 of the AI Act, fines can reach up to 15 million euros or three per cent of global annual turnover - whichever amount is higher. For the use of prohibited AI systems, fines of up to 35 million euros or seven per cent of global annual turnover are possible.
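
In other words, the cap is simply the higher of a fixed amount and a share of turnover. A minimal sketch of this arithmetic, with an invented example turnover:

```python
# Sketch of the fine caps under Article 99: whichever is higher applies.
def fine_cap(turnover_eur: float, prohibited_practice: bool = False) -> float:
    fixed, share = (35_000_000, 0.07) if prohibited_practice else (15_000_000, 0.03)
    return max(fixed, share * turnover_eur)

# For a company with an annual global turnover of 2 billion euros:
print(fine_cap(2_000_000_000))        # 60000000.0  -> 3% exceeds 15 million
print(fine_cap(2_000_000_000, True))  # 140000000.0 -> 7% exceeds 35 million
```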

AI Act: New Challenges For Companies?

In addition to providers, the AI Act also places particular obligations on the companies that use AI systems. It is therefore best for them to familiarise themselves with the new requirements at an early stage. That way, every affected business will be future-proof when the regulations take full effect and can exploit the potential of artificial intelligence to the full.