August 27, 2025

The AI Act is the world’s first comprehensive legislative framework on artificial intelligence, adopted by the European Union. It entered into force on 1 August 2024, and its provisions take effect in stages between 2024 and 2027.

The objective of the EU Artificial Intelligence Regulation is to promote the adoption of artificial intelligence while ensuring a high level of protection of health, safety, and fundamental rights against the harmful effects of AI systems.

What is an AI system?

An AI system is a machine-based system designed to operate with varying levels of autonomy and capable of adaptability after deployment. Pursuing explicit or implicit objectives, it processes input data to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

While AI brings significant benefits, depending on its application, context, and technological maturity, it can also be misused, providing powerful tools for manipulation, exploitation, and social control. Such practices are considered harmful and abusive as they conflict with the Union’s values of human dignity, freedom, equality, democracy, the rule of law, and fundamental rights enshrined in the Charter. The Regulation addresses these risks by introducing a risk-based framework, classifying AI uses into different levels, each with specific governance and control mechanisms.

The risk-based structure identifies:

  • Prohibited practices (e.g., social scoring, subliminal manipulation)
  • High-risk applications – permitted but subject to strict requirements (testing, risk assessments)
  • Limited risk applications – permitted with minimal transparency obligations (e.g., chatbots disclosing they are AI)
  • Minimal risk applications – permitted without restrictions (e.g., AI in video games).

Current implementation timeline

According to the progressive implementation schedule, several provisions have already become applicable:

  • 2 February 2025: Chapter I (General provisions) and Chapter II (Prohibited practices – “unacceptable risk”), including subliminal manipulation, social scoring, and discriminatory profiling based on sensitive biometric data.
  • 2 August 2025: Chapter III, Section 4 (Notifying authorities and notified bodies), Chapter V (Obligations for GPAI providers), Chapter VII (Governance – AI Office, AI Board), Chapter XII (Penalties and sanctions), Article 78 (confidentiality).

At this stage, the European Artificial Intelligence Office (AI Office), operating within the European Commission, acts as the central supervisory and enforcement body, working closely with Member States and relevant structures.

The Code of Practice for General-Purpose AI Models (GPAI Code of Practice), initially expected in May 2025, was published on 10 July 2025, accompanied by a Q&A document providing practical clarifications.

On 18 July 2025, the European Commission issued Guidelines for Providers of General-Purpose AI Models, clarifying compliance obligations, definitions (e.g., “GPAI”), and conditions for open-source exceptions. These guidelines complement the Code and provide operational clarity.

Major companies such as Amazon, Google, IBM, Microsoft, OpenAI, and Fastweb, among many others, have signed the Code. Both the Commission and the AI Board confirmed that adherence to the Code represents a suitable voluntary instrument for GPAI providers to demonstrate compliance with the AI Act.

Implications for businesses in Romania

The adoption of the AI Act, the GPAI Code of Practice, and the Commission’s Guidelines has direct consequences for Romanian companies, depending on their role within the AI value chain.

  • Romanian companies developing proprietary AI models or LLMs (large language models trained on massive datasets to understand and generate natural language) will be considered providers and must comply with transparency requirements (documentation, data, energy consumption, costs); ensure copyright compliance during training; manage security and reliability risks, especially for “systemic risk” models.
  • Romanian IT firms integrating GPAI models (e.g., ChatGPT API, Claude, LLaMA) into their own applications must verify supplier compliance, document the use of such models, and inform end-clients regarding limitations and copyright issues.
  • Banks, retailers, healthcare, and telecom companies deploying generative AI must evaluate risks and assess the impact on personal data and intellectual property.

Failure to comply may result in loss of access to the EU market, investigations by the AI Office and national authorities, legal liability regarding copyright (if models are trained on unlicensed web data), and reputational risks, as using non-compliant models may undermine customer trust.

Early compliance offers peace of mind, security, and a competitive advantage. Companies adopting the Code and best practices are more likely to avoid risks and secure EU contracts and international partnerships.

Recommended steps for Romanian companies:

  • Identify all AI applications, models, and services used within the company (e.g., ChatGPT, Claude, LLaMA, internal models).
  • Assess the company’s role in each case (provider, integrator, or end-user).
  • Classify models into GPAI, high-risk, or non-critical categories.
  • Adopt the GPAI Code of Practice (voluntarily) if developing proprietary models, or request proof of compliance/Code adoption from suppliers if acting as an integrator.
  • Update internal policies on AI and data protection.
  • Appoint an AI Officer (possibly the DPO or a member of the compliance team) to monitor updates from the AI Office and national authorities – not mandatory, but advisable.
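For IT or compliance teams asked to operationalise the inventory and classification steps above, a simple structured record can help. The sketch below is purely illustrative – the class and field names (Role, RiskTier, AISystemRecord, action_items) are our own assumptions for demonstration, not terms defined by the AI Act, and the to-do items are simplified summaries of the obligations discussed in this article:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"      # develops proprietary AI/GPAI models
    INTEGRATOR = "integrator"  # embeds third-party models into own products
    END_USER = "end-user"      # deploys AI tools internally

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    role: Role
    risk: RiskTier
    supplier_compliance_evidence: bool = False  # e.g. supplier's GPAI Code adoption

def action_items(record: AISystemRecord) -> list[str]:
    """Derive a rough compliance to-do list from one inventory record."""
    items = []
    if record.risk is RiskTier.PROHIBITED:
        items.append("Discontinue use: practice banned since 2 February 2025")
    if record.risk is RiskTier.HIGH:
        items.append("Obtain technical documentation; test for bias and GDPR compliance")
    if record.role is Role.INTEGRATOR and not record.supplier_compliance_evidence:
        items.append("Request proof of compliance/Code adoption from supplier")
    if record.role is Role.PROVIDER:
        items.append("Consider adopting the GPAI Code of Practice")
    return items
```

For example, a firm integrating a chatbot API it has not vetted would record it as an INTEGRATOR/LIMITED entry with no supplier evidence, and the function would flag the missing proof of compliance. A real inventory would of course carry far more detail (data flows, DPIA references, contractual clauses); this only illustrates the classification logic.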

National authority in Romania

As of now, a competent national authority for AI Act enforcement has not been officially designated, despite the EU deadline of 2 November 2024.

The National Strategy on Artificial Intelligence (SNIA) 2024–2027 envisages the establishment of a dedicated authority, under the coordination of the Authority for Digitalisation of Romania (ADR) and the Ministry of Research, Innovation and Digitalisation (MCID).

Although national legislation and institutional structures are still under development, this article serves as an early warning for companies in Romania that rely on AI in their operations.

Sector-specific obligations

  • Recruitment/HR companies using AI for CV screening or video interviews,
  • E-commerce platforms using AI for product recommendations, dynamic pricing, or chatbots,
  • Educational institutions employing automated assessments or AI-generated learning materials –

all have obligations regarding transparency towards users, auditing, and the safeguarding of rights (e.g., non-discrimination).

Systems making decisions on employment, promotions, performance scoring, bonuses, or employee benefits are subject to the strictest compliance requirements, as these involve critical human decisions.

Such companies must:

  • Identify and classify the AI system as high-risk.
  • Request technical documentation, training/testing data, identified risks (bias, discrimination), and mitigation measures from providers.
  • Test the AI system before deployment for efficiency, fairness (bias prevention), non-discrimination, and GDPR compliance.
  • Ensure transparency towards employees and candidates: inform them they are being assessed by an AI system, explain how AI influences decisions, and grant the right to explanation and contestation.
  • Continuously monitor system outcomes and prepare a written risk management plan, including the option to withdraw the system if harmful effects on employees or candidates are detected.

Operators (HR staff, managers, etc.) working with such systems must be trained to understand AI’s limitations, know which decisions cannot be delegated to AI, and retain the ability to verify, correct, or override AI decisions.

Companies should also maintain evidence of checks on accuracy, performance, and impartiality and be able to justify any AI-influenced rejection of a candidate.

Conclusion

Although implementation is still at an early stage, Romanian institutions and companies will have to consistently monitor and apply the provisions of the EU Artificial Intelligence Regulation and its subsequent acts. RCP Legal’s team will continue to provide periodic updates in this field.

Author: Atty. Lavinia Rusu
