Safe and ethical use of AI

The AI Act: Regulatory framework for artificial intelligence

The rapid development and spread of artificial intelligence (AI) not only offers immense opportunities, but also presents numerous challenges and risks that must be addressed.

01.10.2024 | Text: bbv

The European Union is responding to these challenges with the AI Act (Artificial Intelligence Act), a comprehensive regulatory framework intended to ensure the safe and ethical use of AI. For companies looking to implement AI models as part of their digitalisation efforts, the consequences of the AI Act are critically important. In this article, you will learn what provisions the AI Act contains and how you can implement them effectively in your company.

Why is a regulatory framework necessary for AI?

The use of artificial intelligence opens up completely new possibilities, but also harbours a number of risks. Without statutory provisions, AI systems could make wrong decisions that endanger people’s health and fundamental rights. The AI Act therefore aims to guarantee the transparency, safety and trustworthiness of AI systems.

Strict rules are particularly important in high-risk applications such as medical diagnostics or autonomous vehicles, where they prevent misuse and unforeseeable harm. A clear regulatory framework also encourages innovation, since it gives companies reliable conditions for operating within defined rules. This strengthens consumer confidence and promotes the responsible further development of technological solutions.

Into which categories does the AI Act classify AI models?

The EU AI Act, which was definitively approved and adopted by the member states in May 2024, is the world’s first comprehensive law governing the supervision of artificial intelligence. It classifies AI systems into four risk groups: unacceptable, high, limited and minimal risk. This differentiation enables targeted regulation based on the risk potential of the respective programmes.
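For a software team, this tiering can be made explicit in the codebase itself. The following minimal Python sketch is purely illustrative (the enum and its member names are our own shorthand, not prescribed by the Act):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # e.g. medical devices, credit scoring
    LIMITED = "limited"            # transparency duties, e.g. chatbots
    MINIMAL = "minimal"            # largely unregulated, e.g. spam filters
```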

The Regulation was published in the Official Journal of the EU on 12 July 2024 and came into force 20 days later on 1 August 2024. Generally speaking, the AI Act will be fully applicable 24 months after entry into force, so from August 2026.

However, there are specific exceptions:

  • The prohibitions on AI systems that pose an unacceptable risk apply six months after entry into force, meaning these provisions must be implemented from February 2025.
  • The rules on codes of practice apply nine months after entry into force, i.e. from May 2025.
  • The rules on general-purpose AI models apply after 12 months, i.e. from August 2025.
  • The obligations for certain high-risk AI systems (those in regulated products covered by Annex I) become applicable 36 months after entry into force, i.e. from August 2027.

AI solutions with unacceptable risk

AI models in this category include applications that pose a significant threat to fundamental rights and public safety. According to the AI Act, the following AI practices are prohibited:

  1. Systems used for social scoring
  2. Programmes designed to manipulate cognitive behaviour
  3. Real-time remote biometric identification systems in publicly accessible areas for law enforcement purposes, with the exception of the targeted search for victims of certain crimes
  4. Models that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or from video surveillance footage
  5. Emotion recognition technology in the workplace and in educational institutions
Webinar series on AI

In our webinars, we examine the topic of AI from different perspectives. Discover the potential for your company. Register now.

High-risk AI systems

AI applications in the high-risk category play a key role in the AI Regulation because they can influence important areas of society such as medicine, education and the judicial system. They are divided into two main categories:

  • The first category includes AI systems that are used in products requiring third-party conformity assessment under EU rules. Examples of these are medical devices and autonomous vehicles.
  • The second category relates to standalone AI applications that directly endanger fundamental rights. They are specified in Annex III of the Regulation and, among other aspects, include management of critical infrastructure, access to education, personnel decisions and credit scoring.

Companies that develop or use high-risk AI systems are obliged to conduct rigorous risk assessments. They also have to fulfil certain transparency requirements and provide proof of regular audits to ensure compliance with ethical and legal standards.

Models with limited and minimal risk

The AI Act classes applications such as chatbots and recommendation tools as limited-risk models. Such systems are subject to specific transparency requirements: for example, they must be clearly disclosed as AI technology, and their basic functions must be explained.
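How such a disclosure might look in code: the sketch below (the wording, function names and the stubbed model backend are all our own illustration) prepends a notice to every chatbot answer so users always know they are interacting with AI.

```python
AI_DISCLOSURE = "Please note: you are interacting with an AI-based assistant."

def generate_answer(user_message: str) -> str:
    """Stand-in for the actual model backend (hypothetical)."""
    return f"Echo: {user_message}"

def chatbot_reply(user_message: str) -> str:
    """Prepend the AI disclosure to every generated answer."""
    return f"{AI_DISCLOSURE}\n\n{generate_answer(user_message)}"

print(chatbot_reply("What does the AI Act regulate?"))
```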

Minimal-risk systems do not explicitly fall under the provisions of the AI Act; here, companies are encouraged to voluntarily create codes of conduct that govern the responsible use of AI. In addition, companies are generally expected to train their employees in the use of artificial intelligence in order to build a basic level of understanding and awareness.

Swiss AI Impact Report 2024: On the path to becoming an AI-native company

The Swiss AI Impact Report 2024 provides you with in-depth insights into the use of generative artificial intelligence in Swiss companies. Download it now.

What does the AI Act mean for companies?

The AI Act entails extensive obligations for companies. First of all, companies have to classify their AI models and clarify the risk category to which they belong. AI solutions that are classed as high risk require special measures such as the implementation of risk management systems and regular reviews. In addition, certain transparency obligations apply, including disclosure of the data used to train the AI as well as the technical processing and validation data sets.
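A practical first step is an internal inventory that records every AI system together with its risk tier and the duties that follow from it. The Python sketch below illustrates the idea; the obligation lists are our own abridged summary for illustration, not the Act’s wording and not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The Act's four risk tiers (our own shorthand names)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Abridged, non-exhaustive summary of duties per tier.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["must not be placed on the market"],
    RiskTier.HIGH: [
        "implement a risk management system",
        "maintain technical documentation and logging",
        "govern training, processing and validation data sets",
        "provide proof of regular audits",
    ],
    RiskTier.LIMITED: [
        "disclose the use of AI to users",
        "explain the system's basic functions",
    ],
    RiskTier.MINIMAL: ["voluntary code of conduct recommended"],
}

def print_checklist(system_name: str, tier: RiskTier) -> None:
    """Print the duties recorded for one AI system."""
    print(f"{system_name} ({tier.value} risk):")
    for duty in OBLIGATIONS[tier]:
        print(f"  - {duty}")

print_checklist("credit-scoring model", RiskTier.HIGH)
```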

For companies, this means complying with legal requirements as early as the development phase of AI solutions. This includes documenting decision-making processes, ensuring data quality and conducting extensive tests. Because the individual requirements vary greatly depending on the risk classification, we recommend seeking professional advice on AI to ensure compliance with all regulatory requirements.

Successfully implementing AI models with bbv

Do you feel uncertain about the impact of the AI Act on your company? bbv can provide you with comprehensive consultancy services to ensure your AI strategies are legally compliant and future-oriented. Contact us for a personal consultation or take part in our “Generative AI” workshop, which offers you in-depth insights into the use of artificial intelligence and its potential.

Note: We do not offer legal advice. The information contained in this article is based on our own research and is not legally binding. For binding legal advice, we recommend that you always consult a legal expert.

The expert

Stefan Häberling

Stefan Häberling is Head of Business Area AI at bbv Software Services.

