AI Literacy: what is it and how to get started?

As of 2 February 2025, the first articles of the European Artificial Intelligence Act (AI Act) have come into force (1), namely the provisions of Chapters I and II. For those few who do not yet know the AI Act by heart: these are primarily the provisions regarding AI literacy and prohibited AI systems.

This contribution attempts to present the main features of the AI literacy obligation in an understandable and practical manner. It is therefore not an in-depth (legal) analysis.

As regards prohibited AI systems, this contribution will not elaborate beyond mentioning that, as of 2 February 2025, organisations are no longer allowed to offer or use them (2).

1. AI literacy: what?

Article 4 of the AI Act, one of the shortest articles in the entire legal text, imposes an obligation on providers and deployers to ensure an adequate level of AI literacy among their staff. In doing so, they should take into account certain characteristics of the staff, such as their technical knowledge, experience, training and pre-existing level of knowledge about AI, as well as the context in which the AI system is used, such as its target group.

AI literacy should enable staff (but also affected individuals!) to make informed decisions about AI systems. For the person using the AI system, for example, this concerns the opportunities and risks, as well as the possible harm, that the system may cause (3). AI literacy is therefore not just about technical issues, but also about ethical ones, for example.

Some elements that can certainly be covered here are:

  • A basic understanding of how AI systems work;
  • Risks and limitations of AI;
  • Permitted use cases;
  • How to interpret the output of an AI system;
  • Target-group-specific topics (e.g. HR, production, marketing, …).

2. AI literacy: who?

The obligation to ensure an adequate level of AI literacy applies to both providers (i.e. whoever developed the AI system) and deployers (4) (i.e. whoever deploys/uses the AI system). Moreover, this obligation applies to all AI systems, regardless of the risk category to which they belong! Any organisation using AI systems, including popular generative AI tools such as ChatGPT or Copilot, is therefore covered by this obligation (5)! 

Given the widespread use of the ChatGPTs and Copilots of the world (and less savoury tools like DeepSeek), this will also affect many organisations that originally thought they did not fall within the scope of the AI Act.

Within the organisation, it is not just the IT people who will need training, but anyone who is or will be involved with AI in any way!

3. AI literacy: how?

The AI Act does not provide much further guidance on how exactly organisations should promote AI literacy. However, it is clear from the wording used, such as ‘as far as possible’, ‘sufficient’ and ‘having regard to’, that there is no one-size-fits-all solution as to exactly what organisations should provide.

An organisation has to decide for itself what it considers necessary and sufficient, based in part on the elements mentioned above. The training programme should be adapted to the organisation, to the different profiles within the organisation, to the context of use, to the different levels of existing knowledge within the organisation, and so on. This can make the training programme particularly complex. A predefined strategy with clear objectives is therefore essential.

Furthermore, it is clear that a single mandatory information session, during which most participants are generally not even paying attention, is absolutely not sufficient. In addition to (interactive) information sessions, an organisation should also provide specific guidelines, appropriate training, practical guidance (e.g. when creating a prompting library), the necessary supporting documentation, etc.

4. AI Literacy: Why?

Why would an organisation want to comply with yet another European legal requirement?

Firstly, the AI Act itself already provides a (financial) incentive to comply. In the worst case, non-compliance can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher (6).

Secondly, a lack of knowledge among employees about the dangers of AI can expose the organisation to breaches of other legal obligations, reputational damage, compensation claims, and so on. Just think of a data leak (cf. GDPR), inadequate levels of cybersecurity (cf. NIS2/DORA), unlawful discrimination, etc.

Thirdly, this obligation helps to maintain and strengthen customer trust in your organisation by demonstrating that you want to use the potential of AI in a responsible way.

5. How can Apogado help?

As mentioned above, the need to tailor the AI literacy training programme to the organisation (and its employees, context, etc.) can make this process very complicated. Moreover, many organisations do not have the necessary knowledge to organise such training.

Apogado’s team of experts is ready to work with you to develop a personalised approach and strategy, and to provide the necessary training.

Do not hesitate to contact us for a free exploratory meeting!

 

(1) See art. 113, a) AI Act.

(2) See art. 5 AI Act.

(3) Art. 3, 56) and recital 20 AI Act.

(4) Regarding deployers, see also recital 91 AI Act.

(5) The AI system should obviously still fall under the (not so clear) definition of art. 3, 1) AI Act.

(6) Note that the provisions on penalties do not come into force until 2 August 2025; see art. 113, b) AI Act.

 

 

Kobe Troch

📩: info@apogado.com

🌐: www.apogado.com
