EU AI Act FAQ

Clear answers to complex regulations.
Navigate the new European Artificial Intelligence landscape with confidence.

How are AI agents addressed within the AI Act?
The term 'AI agent' is used inconsistently in public debate, due in part to a blurry and still-evolving demarcation between AI agents and other kinds of AI. Nevertheless, there is broad agreement that an AI agent must be able to receive and process input from its environment and to execute actions, based on that processing, that may interact with or affect its environment (e.g., issuing function calls). The term 'Agentic AI' is sometimes used to describe more sophisticated configurations that integrate multiple AI agents, although the precise relationship between the two terms is still evolving.
What type of systems are regulated under the AI Act?
The AI Act does not apply to all AI solutions, but only to those that fulfil the definition of an 'AI system' within the meaning of Article 3(1) AI Act.

The AI Act follows a risk-based approach, introducing rules for AI systems according to the level of risk they can pose. AI practices that pose an unacceptable risk to health, safety or the fundamental rights enshrined in the Charter of Fundamental Rights are prohibited (e.g. AI systems used to detect the emotions of employees at work, except for medical or safety reasons, and certain social scoring practices). AI systems posing a high risk to health, safety or fundamental rights must meet certain requirements to ensure they are safe and trustworthy (e.g. AI systems used in border control management, law enforcement or autonomous vehicles could be high-risk AI systems). Certain AI systems must meet transparency requirements (e.g. deep fakes will have to be labelled as AI-generated, and chatbots must inform users that they are not communicating with a human). All other AI systems remain unregulated and can be placed on the market, put into service or used in the EU without any requirements; when the proposal for the AI Act was prepared, these were estimated to account for around 85% of AI systems.
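The four risk tiers described above can be sketched as a simple classification routine. This is an illustrative teaching aid only, not a legal test: the tier labels and example triggers are assumptions drawn from the summary above and are far from exhaustive.

```python
# Hedged sketch of the AI Act's four risk tiers. Illustrative only;
# the example triggers below are drawn from the FAQ text, not the Act.

def risk_tier(practice: str) -> str:
    """Map an example AI practice to its (simplified) AI Act risk tier."""
    prohibited = {"workplace emotion recognition", "social scoring"}
    high_risk = {"border control management", "law enforcement",
                 "autonomous vehicles"}
    transparency = {"deep fake generation", "chatbot"}

    if practice in prohibited:
        return "unacceptable risk: prohibited (Article 5)"
    if practice in high_risk:
        return "high risk: safety and trustworthiness requirements apply"
    if practice in transparency:
        return "limited risk: transparency obligations apply"
    return "minimal risk: no AI Act requirements"

print(risk_tier("chatbot"))
```

The point of the sketch is the ordering: a system is tested against the strictest tier first, and anything that matches no tier falls into the unregulated remainder.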
Will the AI Act apply to the systems already used on the market?
The AI Act does not automatically apply to all AI systems placed on the market before its application date. Instead, compliance obligations are phased in depending on the category and whether the system undergoes significant modifications.

The AI Act will apply from 31 December 2030 to AI systems that are components of the large-scale IT systems established by the legal acts listed in Annex X and that have been placed on the market or put into service before 2 August 2027. If such large-scale IT systems are evaluated, or the legal acts listed in Annex X are replaced or amended, before 31 December 2030, the AI Act must be taken into account.

The AI Act will also apply to high-risk AI systems that have been placed on the market or put into service before 2 August 2026, but only if those systems are subsequently significantly modified. In addition, providers and deployers of high-risk AI systems placed on the market or put into service before 2 August 2026 that are intended to be used by public authorities must comply with the AI Act by 2 August 2030.

Lastly, the AI Act will apply from 2 August 2027 to general-purpose AI models that have been placed on the market before 2 August 2025.

However, the rules on prohibited AI practices (Article 5) apply to all AI systems, regardless of the date on which they were placed on the market.
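The phased timeline above amounts to a set of date checks. The sketch below illustrates the uncontested dates from this section (Annex X components, legacy general-purpose AI models, and the prohibitions that apply regardless of date); the category labels are mine, it simplifies heavily, and it is not legal advice.

```python
from datetime import date

# Hedged sketch of the transitional timeline summarised above.
# Illustrative simplification only; category labels are assumptions.
TRANSITIONAL_DATES = {
    # Components of Annex X large-scale IT systems placed on the
    # market before 2 August 2027:
    "annex_x_component": date(2030, 12, 31),
    # General-purpose AI models placed on the market before 2 August 2025:
    "legacy_gpai_model": date(2027, 8, 2),
}

def act_applies(category: str, on: date) -> bool:
    """Return True if the AI Act applies to this legacy category on `on`."""
    if category == "prohibited_practice":
        # Article 5 applies regardless of the market-placement date.
        return True
    return on >= TRANSITIONAL_DATES[category]

print(act_applies("legacy_gpai_model", date(2028, 1, 1)))  # True
```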
Are "logic-based" or "knowledge-based" approaches also covered by the AI system definition?
Focusing specifically on the building phase of the AI system, Recital 12 of the AI Act further clarifies that ‘[t]he techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.’ Those techniques should be understood as ‘AI techniques’.

In addition to machine learning approaches, which are commonly understood as AI techniques, the second category of AI techniques mentioned in Recital 12 of the AI Act comprises ‘logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved’. Instead of learning from data, these AI systems draw on knowledge, including rules, facts and relationships, encoded by human experts.

Based on the knowledge encoded by human experts, these systems can ‘reason’ via deductive or inductive engines or through operations such as sorting, searching, matching and chaining. By using logical inference to draw conclusions, such systems apply formal logic, predefined rules or ontologies to new situations. Logic- and knowledge-based approaches include, for instance, knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems, and search and optimisation methods.
What distinguishes an AI model from an AI system? Will guidance be issued?
The AI Act distinguishes between an AI system, defined in Article 3(1), and a general-purpose AI model, defined in Article 3(63).

The AI Act refers to AI models as those that ‘are essential components of AI systems. They do not constitute AI systems on their own. AI models require the addition of further components, such as for example a user interface, to become AI systems. AI models are typically integrated into and form part of AI systems’ (Recital 97). ‘Large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide range of distinctive tasks’ (Recital 99).

The AI Act defines an AI system as a ‘machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Article 3(1), Recital 12).