
On Thursday, the Artificial Intelligence (AI) Act came into force as part of a wider package of policy measures to support the development and trustworthiness of AI.

The act is the world’s first-ever comprehensive legal framework on AI.

Its goal is to guarantee the safety and fundamental rights of people and businesses when it comes to AI, while also strengthening uptake, investment and innovation across the EU. At the same time, it ensures that AI systems on the market respect ethical principles by addressing the risks posed by powerful and impactful AI models.

This act specifically targets providers and developers of AI systems that are marketed or used within the EU.

This also includes free-to-use AI technology, and applies to providers and developers, irrespective of whether they are established in the EU or another country.

For instance, if a US-based company provides AI-based technology within the EU, it is still subject to penalties if found non-compliant.

The legal framework provides clear requirements and obligations regarding specific uses of AI. At the same time, it seeks to reduce administrative and financial burdens for businesses, particularly small and medium-sized enterprises (SMEs).

The EU has set up an AI Office, which will play a key role in implementing the act by supporting the governance bodies in Member States in their tasks.

The office aims to enforce regulations for general-purpose AI models, supported by the powers granted to the Commission by the AI Act.

These powers include conducting evaluations of general-purpose AI models, requesting information and measures from model providers, and applying sanctions.

Furthermore, the AI Office fosters an innovative ecosystem of trustworthy AI to maximise societal and economic benefits. This ensures a strategic, coherent, and effective European approach to AI on a global scale, positioning the EU as a global reference point.

Non-compliance with the act will lead to hefty penalties, ranging from €7.5 million or 1.5 per cent of global revenue up to €35 million or seven per cent of global revenue, depending on the infringement and the size of the company.

How does it work?

In general, the AI Act will:

  • Address risks specifically created by AI applications.

  • Prohibit AI practices that pose unacceptable risks.

  • Determine a list of high-risk applications.

  • Define specific obligations for deployers and providers of high-risk AI applications.

  • Require a conformity assessment before a given AI system is put into service or placed on the market.

  • Put enforcement in place after a given AI system is placed on the market.

  • Establish a governance structure at European and national level.

Based on this assessment, all AI systems considered a threat to the safety, livelihoods and rights of people are banned. Examples range from social scoring by governments to voice-assisted toys that encourage dangerous behaviour.

The act defines four levels of risk for AI systems: minimal risk, limited risk, high risk, and unacceptable risk.

High-risk AI includes, among others, technology used in:

  • Critical infrastructure, such as transport, that can put the life and health of citizens at risk.

  • Safety components of products, such as an AI application in robot-assisted surgery.

  • Employment, including CV-sorting software for recruitment procedures.

All high-risk AI systems must meet a strict set of obligations prior to being put on the market. This includes adequate risk assessment and mitigation systems, clear and adequate information to the deployer, appropriate oversight measures to minimise risk and a high level of robustness, security and accuracy, among others.

Another example of high-risk technology includes remote biometric identification systems, used in facial and fingerprint recognition. The act prohibits the use of remote biometric identification in publicly accessible spaces for law enforcement purposes.

There are some exemptions which are permitted by the law, yet are still strictly regulated. These include instances where a child is missing or when there is a specific and imminent terrorist threat that must be prevented.

Limited-risk AI, on the other hand, refers to the risks associated with a lack of transparency, so the act introduces specific obligations to ensure that users are informed when necessary.

For instance, when a company makes use of certain AI systems such as chatbots, humans must be made aware that they are interacting with a machine, fostering more trust. In addition, under this law, AI-generated text published for public interest must be labelled as AI-generated. This applies to audio-visual content, including deepfakes.

Although the act entered into force on Thursday, it will be fully applicable in two years’ time, with a few exceptions.

Prohibitions are set to take effect after six months, the governance rules and the obligations for general-purpose AI models will become applicable after 12 months, and the rules for AI systems embedded in regulated products will apply after 36 months.

To ensure a smooth transition to a fully enforceable AI Act, the European Commission launched the AI Pact.

The pact is a voluntary initiative that supports the act’s future implementation, inviting AI developers from Europe and beyond to comply with its key obligations ahead of time.
