19 September 2025

Newsletter September 2025

✨ AI ✨

➡️ Entry into force of Chapter V of the AI Act: what you need to know about GPAI models

After the entry into force of the EU AI Act (Regulation (EU) 2024/1689) on 1 August 2024 and the ban on unacceptable-risk systems in February 2025 (e.g. social scoring systems), the provisions of Chapter V apply to general-purpose AI models (GPAI models) made available in the EU from 2 August 2025.

🔵 Important distinction:

  1. General-purpose AI model: defined as "an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of how the model is placed on the market, and that can be integrated into a variety of downstream systems or applications, with the exception of AI models used for research, development or prototyping activities before they are placed on the market" (Article 3(63) AI Act).

  2. General-purpose AI model with systemic risk: a model whose capabilities or impact are high enough to generate major risks to society, security or fundamental rights. A model is classified as such if it reaches a significant level of technical capability, or if it is designated by the European Commission on account of its equivalent potential impact (Article 51 AI Act).

🔵 Obligations of GPAI model providers (Article 53 AI Act)

  1. Technical documentation

Draw up and keep up to date comprehensive documentation of the model (training, testing, evaluation), in accordance with Annex XI.

Make this documentation available upon request to the AI Office or the national competent authorities.

  2. Information for integrators

Provide accessible documentation for AI system providers wishing to integrate the model, including at least the elements of Annex XII.

This documentation must make it possible to understand the capabilities and limitations of the model, while respecting intellectual property rights and trade secrets.

  3. Copyright compliance

Implement a policy to comply with EU copyright law, in particular to identify and respect reservations of rights, including through state-of-the-art technologies.

  4. Transparency on training data

Publish a sufficiently detailed summary of the content used to train the model, according to a template provided by the AI Office.

  5. Cooperation with the authorities

Cooperate with the European Commission and the competent authorities in the exercise of their missions.

🔵 Obligations of providers of general-purpose AI models with systemic risk (Article 55 AI Act)

In addition to the general requirements (Article 53), providers of GPAI models with systemic risk must comply with the following obligations:

  • Rigorous technical evaluation

Perform model evaluations in accordance with standardised protocols and tools reflecting the state of the art.

Conduct and document adversarial testing to identify and mitigate the systemic risks associated with the model.

  • Union-wide risk analysis

Identify potential sources of systemic risks related to the development, commercialization or use of the model.

Implement appropriate mitigation measures.

  • Monitoring and reporting incidents

Document, trace and report any serious incident or malfunction to the AI Office (and, where appropriate, the national competent authorities) without undue delay.

Provide corrective actions that are being considered or implemented.

  • Enhanced cybersecurity

Ensure adequate protection of the model and its physical infrastructure against cybersecurity risks.

🔵 Penalties for GPAI providers (Article 101 AI Act)

The Commission may impose fines of up to 3% of annual worldwide turnover or €15 million, whichever is higher, on providers of general-purpose AI models in the event of intentional or negligent non-compliance, in particular for:

  • infringement of the relevant provisions of the Regulation,

  • failure to respond to a request for information, or the supply of inaccurate information,

  • non-compliance with a measure imposed by the Commission,

  • failure to provide access to the model for evaluation purposes.

These provisions will only be applicable from 2 August 2026.
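Purely as an illustration, the fine ceiling described above can be sketched in a few lines. This is a hedged reading of Article 101: it assumes the applicable ceiling is the higher of the two amounts (3% of turnover vs. €15 million), and the function name is invented for the example.

```python
def max_gpai_fine(annual_worldwide_turnover_eur: int) -> int:
    """Illustrative ceiling for GPAI provider fines (Article 101 AI Act):
    up to 3% of annual worldwide turnover or EUR 15 million.
    Assumption: the higher of the two amounts applies.
    Computed in integer euros to avoid floating-point rounding."""
    return max(annual_worldwide_turnover_eur * 3 // 100, 15_000_000)

# A provider with EUR 2 billion turnover: the 3% ceiling dominates
print(max_gpai_fine(2_000_000_000))  # 60000000
# A provider with EUR 100 million turnover: the EUR 15M amount dominates
print(max_gpai_fine(100_000_000))    # 15000000
```

The point of the sketch is simply that, for large providers, exposure scales with turnover rather than being capped at a fixed €15 million.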

🔵 Other provisions entering into force

In addition to Chapter V, the following provisions also enter into force on 2 August 2025:

  • Chapter III, Section 4 and Chapter VII: designation by the Member States of the competent authorities; tasks of the AI Board, the Advisory Forum and the Scientific Panel,

  • Chapter XII: penalties for non-compliance with the AI Act (excluding fines for GPAI providers, which come into force on 2 August 2026)

🔗 Link to the implementation timeline: https://artificialintelligenceact.eu/fr/implementation-timeline/

➡️ Draft designation of national authorities for the regulation of artificial intelligence

On 9 September 2025, the Directorate-General for Competition, Consumer Affairs and Fraud Control (DGCCRF) and the Directorate-General for Enterprise (DGE) presented a draft designation of the authorities in charge of regulating AI under the European AI Regulation.

The scheme is based on coordination by these two directorates and prioritises the designation of existing authorities according to their sectoral skills and expertise.

In practice, a company that is already regulated in its sector will, in most cases, deal with its usual regulator for the implementation of the AI Act.

This plan must now be submitted to Parliament in the form of a bill.

🔵 Prohibited practices (Art. 5 AI Act)

  • ARCOM and DGCCRF: control of manipulative systems (subliminal techniques, exploitation of vulnerabilities related to age, disability or social situation).

  • CNIL and DGCCRF: ban on social scoring systems.

  • CNIL: control of the other prohibitions (predictive policing, mass facial recognition databases, inference of emotions at work or school, biometric categorisation, real-time biometric identification for law enforcement purposes).

🔵 High-risk systems (Annexes I and III)

  • The authorities already competent for regulated products see their prerogatives extended to integrated AI (Annex I AI Act).

  • For sensitive uses (Annex III AI Act):

    • Critical infrastructure: senior defence and security officials (Bercy, Ecological Transition).

    • Finance and insurance: ACPR.

    • Justice: Council of State, Court of Cassation, Court of Auditors.

    • Education and training: CNIL and DGCCRF.

    • Democratic processes: CNIL and ARCOM.

    • Biometrics, employment, law enforcement, migration: CNIL.

🔵 Transparency obligations (Art. 50)

  • CNIL: emotion recognition and biometric categorisation.

  • DGCCRF and ARCOM: AI interacting with the public, deepfakes and synthetic content.

  • ARCOM: manipulation or generation of texts disseminated on subjects of public interest.

🔵 Technical support and coordination

  • PEReN and ANSSI: pooling of AI and cybersecurity expertise to support the authorities.

  • DGCCRF: coordination of controls and role of single point of contact.

  • DGE: representation of France on the European AI Board.

🔗 Link to the diagram: https://www.entreprises.gouv.fr/priorites-et-actions/transition-numerique/soutenir-le-developpement-de-lia-au-service-de-0

📣 FIRSH experts: The proposed framework relies on a mosaic of authorities (CNIL, ARCOM, DGCCRF, ACPR, etc.), at the risk of creating complexity. Given the cross-cutting nature and rapid evolution of AI, a single dedicated authority could offer more coherence, clarity and efficiency by becoming the central interlocutor for public and private actors.

Faced with complex, multi-layered regulations, it is important to be assisted by experienced lawyers who take a broader view than the provisions of the regulations alone, depending on the activity concerned. FIRSH supports its clients in their AI Act compliance projects with multidisciplinary expertise in intellectual property, commercial law, personal data, contracts and litigation.

✨ PERSONAL DATA ✨

➡️ Google ordered, on appeal, to remove a "Google My Business" listing (Court of Appeal of Chambéry, 2nd chamber, 22 May 2025, No. 22/01814)

🔵 Background to the case

A dentist discovered, by typing her name into Google's search engine, that a "Google My Business" (GMB) listing displayed her surname, the address of her practice, and a star rating with reviews of her professional activity, some of them very negative.

She requested the deletion of this listing by a letter of formal notice addressed to Google France, which refused to grant her request.

Unable to obtain the amicable deletion of her GMB file, the dentist sued the companies Google France, Google LLC and Google Ireland Limited.

🔵 The Court of Appeal's answer

The Court dismissed the liability of Google France, a simple service provider with no role in the operation of the search engine, and ruled that the claims against it were inadmissible.

On the other hand, it holds that the data published on the Google My Business listing (surname, first name, profession, address, telephone number) does indeed constitute personal data. Their processing by Google, carried out without consent on the basis of data collected from Infobel, therefore falls within the scope of the GDPR.

Although the initial collection was not unlawful, Google failed to comply with its information obligation (Article 14 GDPR) and cannot rely on a balanced legitimate interest: the forced creation of a Google account, the economic exploitation of the data and the particular constraints weighing on health professionals lead the processing to be classified as unlawful.

The deletion of the GMB listing is therefore justified. Arguments based on freedom of expression or the right to information are dismissed, as reviews about a health professional do not fall within their scope.

Regarding the claims for damages, the Court excludes any liability of Google for disparaging reviews as well as for parasitism.

On the other hand, it found two faults: the creation of a Google My Business listing without consent, combined with the impossibility for the practitioner to respond freely to reviews, constitutes unlawful processing.

The court therefore ordered the deletion of the listing and awarded €10,000 in damages for the moral prejudice suffered and the proceedings imposed on the practitioner.

📣 FIRSH experts: This decision is a reminder that professional data (name, address, telephone number) are also personal data within the meaning of the GDPR insofar as they make it possible to identify a natural person. Personal data law can thus provide a remedy against potential acts of denigration and parasitism: a legal basis that practitioners should know how to use wisely.

A strong signal: platforms must balance their economic interests against the fundamental rights of individuals.

🔗 Link to the decision: https://www.courdecassation.fr/decision/68300ad793ab4231dd3e52d9

➡️ Data Act: new contracts, new reflexes

Adopted at the end of 2023, the Data Act will apply to new contracts from September 2025.

Its objective is to ensure that any user of a connected product, whether a consumer or a professional, has effective access to the data the product generates, as well as the ability to authorise its transmission to a third party of their choice.

What are the challenges and opportunities of the Data Act, which comes into force on 12 September 2025?

🔵 Facilitating data sharing between businesses and consumers in the context of the Internet of Things

The Data Act aims to facilitate access to and sharing of data generated by the use of connected objects. It makes it easier for consumers and businesses to access the data they co-produce and authorize its collection, use, and transmission.

Thus, in its objective to create a competitive and innovative data market, the Data Act establishes obligations for manufacturers of connected products and providers of associated services (digital services related to connected products), including:

  • On-demand access: users may request access to product and related-service data (Article 4 – rights and obligations of users and data holders regarding access, use and making available of product data and related service data).

  • Transfer on request: users may request that the data be transferred to a third party of their choice (Article 5 – user's right to share data with third parties).

  • Access by design: future products and services must be designed so as to allow access to data by design (Article 3 provides that such data must be "by default, accessible to the user, in an easy, secure manner, free of charge, in a complete, structured, commonly used and machine-readable format, and is, where relevant and technically feasible, directly accessible to the user"). This requirement will apply as of 12 September 2026.
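Purely as an illustration (the Data Act does not prescribe any particular format, and every field name below is invented for the example), a "structured, commonly used and machine-readable" export of connected-product data could be as simple as a JSON document:

```python
import json

# Hypothetical usage data from a connected thermostat; the schema is an
# invented example, not something mandated by the Data Act.
export = {
    "product_id": "thermostat-001",
    "user_id": "user-42",
    "readings": [
        {"timestamp": "2025-09-12T08:00:00Z", "temperature_c": 19.5},
        {"timestamp": "2025-09-12T09:00:00Z", "temperature_c": 20.1},
    ],
}

# JSON is structured, commonly used and machine-readable: any third party
# chosen by the user can parse it without proprietary tools.
payload = json.dumps(export, indent=2)
print(payload)

# A recipient can round-trip the data losslessly.
assert json.loads(payload) == export
```

The design point is interoperability: whatever format a manufacturer chooses, a third party designated by the user must be able to consume the data without reverse-engineering a proprietary representation.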

🔵 Facilitating the provision of data held by private entities to public bodies in cases of exceptional need

The provision of data may be required in the event of a public emergency, to limit its effects, or when an authority needs the data for a mission in the public interest and cannot otherwise access it (Articles 14 and 15 of the Data Act).

The data holder must then agree on "reasonable" compensation with the beneficiary public body (Art. 8). This assessment is based in particular on the costs of formatting the data and the investments needed to organise its sharing, as specified by the European Commission.

🔵 Prohibiting unfair terms

The Data Act prohibits, or presumes unfair, certain contractual terms, even between professionals (Art. 13). Terms that exclude liability in the event of gross negligence, or that give one party the unilateral power to assess the quality of the data, are null and void. Others, such as a ban on copying data at the end of the contract or manifest imbalances in user rights, are merely presumed unfair.

This approach extends consumer-law logic into B2B relationships: it calls for a review of current contractual practices, particularly in T&Cs. It is part of a broader movement initiated by the Platform-to-Business (P2B) Regulation, now extended by the Data Act: contractual freedom between companies is no longer absolute where data is at stake.

🔵 Personal data and the GDPR

When data from connected objects makes it possible to identify a person (e.g. usage data related to a home), the GDPR applies in parallel with the Data Act.

It is then necessary to:

  • identify the data controllers (user, holder or joint controllership);

  • determine an appropriate legal basis (contract or consent);

  • ensure transparency and information;

  • apply the principles of security, minimisation and limitation.

This analysis must be rigorously documented, integrating the contractual framework and effective practices, particularly in hybrid situations of secondary data exploitation.

🔵 In the short term, affected companies should:

  1. identify the data generated, used or transferred;

  2. legally qualify each actor involved in their processing or sharing;

  3. review existing contracts (T&Cs, commercial conditions, maintenance);

  4. conclude, where appropriate, new agreements or annexes governing data flows;

  5. systematically integrate the requirements of the GDPR when personal data is involved.

📣 FIRSH experts: The Data Act doesn't just create a new right of access to data: it forces companies to rethink the way they manage it. Behind the contractual obligations, a change in legal culture is taking place: greater transparency in digital and industrial ecosystems, and a renewed balance between the parties.

Faced with these new contractual obligations and the necessary articulation with the GDPR, FIRSH supports its clients in their compliance projects thanks to multidisciplinary expertise: intellectual property, commercial law, personal data, contracts and litigation.

🔗 Link to the Data Act: https://digital-strategy.ec.europa.eu/fr/policies/data-act

✨ CYBERSECURITY ✨

➡️ Critical Infrastructure Resilience and Cybersecurity Bill

The bill on the resilience of critical infrastructure and cybersecurity, examined by the National Assembly on 9, 10 and 11 September 2025, aims to transpose three major European texts into French law:

(1) the NIS 2 Directive;

(2) the CER Directive;

(3) the DORA Directive.

🔵 The NIS 2 Directive

The bill transposes the European "NIS 2" Directive (2022), which strengthens and expands the cybersecurity framework established by NIS 1 (2016). Faced with the multiplication of cyberattacks now affecting the entire economic and social fabric (SMEs, hospitals, local authorities, etc.), the scope of application increases in France from 500 to nearly 15,000 entities, and from 6 to 18 sectors (health, chemicals, research, manufacturing industries, postal services, etc.).

Entities will be classified into two categories, essential entities (EE) and important entities (IE), with obligations adapted to their criticality: regular reporting to ANSSI, incident notification, implementation of security measures.

The ANSSI is designated as the competent authority, with enhanced powers of control and sanctions. In the event of non-compliance, fines may reach €10 million or 2% of global turnover.

🔵 The CER Directive

The bill transposes the European "CER" Directive (2022), aimed at strengthening the resilience of critical entities in eleven essential sectors: energy, transport, health, banking, financial market infrastructure, drinking water, waste water, digital infrastructure, public administration, space, and the production, processing and distribution of food.

It updates the national security framework for activities of vital importance (SAIV), extended to new areas such as hydrogen, sanitation and heating networks. Operators of vital importance will have to draw up an operator resilience plan, and the 1,500 sites of vital importance a specific resilience plan.

The European Commission will be able to monitor certain strategic entities, and a new sanctions committee will be able to impose up to €10 million or 2% of global turnover in the event of non-compliance.

🔵 The DORA Directive

The bill transposes the directive of 14 December 2022 accompanying the Digital Operational Resilience Act (DORA) Regulation, applicable since 17 January 2025, which provides a stricter framework for the use of digital technologies in the financial sector.

The Directive introduces technical measures to harmonise the prevention, detection and reporting of incidents, and sets common rules for the use of digital service providers by financial entities.

🔵 Transposition

This European framework reinforces the obligations of public and private actors in terms of digital security, operational continuity and sectoral supervision.

A strategic step forward for digital sovereignty and the protection of critical systems. France is nevertheless lagging behind: under the European timetable, NIS 2 had to be transposed by 17 October 2024 at the latest.

This discrepancy exposes French players, particularly in the critical and financial sectors, to a risk of temporary non-compliance, while European cybersecurity requirements are being strengthened.

🔵 Cyber Resilience Act

In the same vein, the Cyber Resilience Act is part of the European Union's global strategy to combat cyber threats.

The Cyber Resilience Act (CRA) is a European regulation that applies to all products incorporating digital elements, whether hardware or software connected to a network, with the exception of certain sectors already covered by specific legislation such as medical devices, automotive or defense.

The CRA entered into force on 10 December 2024, and its main obligations will apply from 11 December 2027.

In the event of non-compliance, the penalties are heavy: up to €15 million or 2.5% of worldwide turnover for manufacturers, €10 million or 2% of turnover for importers and distributors, and up to €5 million or 1% of turnover in the event of providing inaccurate information.

📣 FIRSH experts: The French bill transposes NIS 2, CER and DORA, massively expanding the obligations of critical actors, with penalties of up to €10 million. The Cyber Resilience Act completes this framework, imposing security requirements on all digital products by 2027.

FIRSH supports companies in their compliance: contracts, relationships with service providers, best practices and training. More than a constraint, cybersecurity is becoming a strategic lever for governance and sovereignty.

✨ DIGITAL PLATFORMS ✨

➡️ Brussels fines Google nearly €3 billion

The European Commission announced on September 5, 2025 that it was fining Google €2.95 billion for abusive practices in the field of online advertising technologies.

Amount of the penalty: a record fine of €2.95 billion has been imposed on Google for anticompetitive practices in the online advertising technology (adtech) sector.

Nature of the infringement: the Commission alleges that Google favoured its own advertising services in the online display advertising chain, to the detriment of:

  • competing ad technology providers,
  • advertisers, forced to use its solutions,
  • publishers, deprived of equitable access to advertising revenues.

Sanctioned behaviour: this is a textbook case of self-preferencing, as Google structured its advertising ecosystem to maintain end-to-end control (demand, intermediation and supply), thereby locking in the market.

Measures imposed: the Commission requires Google to:

  • put an end to its self-preferencing practices,
  • adopt concrete remedies to eliminate the structural conflicts of interest that run across the adtech value chain.

Timetable: Google has 60 days (until the beginning of November 2025) to present the Commission with the precise terms of its compliance. Once received, the Commission will assess them thoroughly to determine whether they eliminate the conflicts of interest. Otherwise, subject to Google's right to be heard, the Commission will impose an appropriate remedy.

📣 FIRSH experts: This sanction is not a first for Google, which was already fined €4.1 billion in 2018 for abuse of a dominant position. The company appealed and has not paid this sum to date. As long as these record fines remain contested and unpaid for years, the real effectiveness of these sanction mechanisms is open to question.

🔗 Link to the press release: https://ec.europa.eu/commission/presscorner/detail/fr/ip_25_1992

✨ AI & INTELLECTUAL PROPERTY ✨

➡️ Order of June 23, 2025: California District Court rules on Anthropic's unauthorized use of millions of books

🔵 Facts: An artificial intelligence company, Anthropic PBC, is being sued by authors for downloading, without permission, millions of copyrighted books in order to build an internal central digital library and train its "Claude" AI model, which generates estimated revenues of more than $1 billion a year.

To do this, Anthropic combined two methods:

  • the massive downloading of books from pirate sites;
  • the digitization of purchased paper books (some of which duplicated the pirated works).

🔵 Legal issue: The question is whether Anthropic's use of millions of books, for the purpose of training its AI and building an internal library, can be qualified as fair use within the meaning of Section 107 of the Copyright Act, taking into account the four assessment criteria set out in that text.

🔵 Fair use: This exception is based on the assessment of the following four criteria:

  1. the purpose and nature of the use, in particular its commercial or educational purpose;

  2. the nature of the protected work (factual or expressive);

  3. the quantity and substance of the work used;

  4. the effect of the use of the copy on the market or on the value of the work.

🔵 The unauthorized use of works for AI training:

In order to assess the lawfulness of the unauthorized use of the works in the context of Anthropic's training of its AI, the Court chose not to distinguish between works from lawful and unlawful sources, and proceeded directly to the analysis of the fair use criteria.

Analysis of the fair use criteria

  • Purpose of use: the training of an AI is qualified as a transformative use. According to the Court, prohibiting this practice would amount to preventing any intellectual reuse of a work, which would be contrary to the logic of fair use. The Court compares this process to a reader who, after reading many books, becomes a writer in turn. Copyright protects the form of expression, not ideas, concepts or principles, so the assimilation of content to feed an AI model cannot be prohibited. → Criterion in favour of fair use.

  • Quantity used: the Court emphasizes that the issue is not the volume of works absorbed by the model, but what is then made available to the public. In this case, the AI does not output any direct reproduction. The mere ingestion of a large number of books is therefore not sufficient to characterize an infringement. → Criterion in favour of fair use.

  • Effect on the market: it has not been established that the training deprives authors of the economic exploitation of their works or that it substitutes for the originals. The existence of technical copies used in-house does not divert demand for the books, which retain their full commercial value. → Criterion in favour of fair use.

  • Nature of the works: the Court recognizes that the books used are expressive works, not mere factual data. This factor weighs against fair use, as the reproduction of creative works is less easily justified. → Criterion against fair use, but isolated.

Conclusion on training

  • The overall assessment of the criteria leads the Court to hold that training an AI on existing works does fall within the scope of fair use: three criteria weigh in favour and only one against, which is not enough to rule out the exception.

An important nuance: the issue of pirated copies

  • The Court expressly recalls that piracy of legally accessible copies constitutes in itself an indisputable and irremediable infringement.

  • However, Anthropic did not only use these copies for training: it kept them, even after it had stopped using them for its models, with the aim of creating a central library bringing together all the books in the world.

  • This separate use, the long-term storage of illegal copies outside of training, does not fall within the exception of fair use. It constitutes an independent infringement of copyright, capable of engaging the liability of Anthropic.

🔵 The constitution of a permanent digital library

Use of lawfully digitized books

  • The Court ruled that the conversion of legally purchased paper copies into internal digital copies was fair use.

  • It considers that the digital copy is equivalent to the paper copy, as long as there is no resale or distribution and that the use remains limited to conservation and research.

  • The four criteria of fair use are generally retained as favourable, except that of the nature of the (expressive) works.

  • However, this solution is open to criticism: a digital copy is not equivalent to a paper copy since it allows collective, simultaneous, unlimited use and free from the legal and technical restrictions imposed by publishers.

Use of pirated copies

  • Anthropic has downloaded and preserved more than seven million books from pirate databases.

  • For these copies, the Court considers that the fair use criteria all weigh against their use.

  • The case will therefore continue on the merits to judge the constitution of Anthropic’s central library from illegal content.

📣 FIRSH experts: Behind a methodical analysis of the fair use criteria, the decision appears very favorable to AI developers. The Court distinguishes between licit and illicit copies for the digital library, but not for training, thereby avoiding the key question of the use of pirated copies that are subsequently destroyed. This omission maintains major legal uncertainty.

Above all, the Court seems to overlook the massive transfer of value: Anthropic benefits from millions of protected works free of charge, without paying rightholders. Under French law, even where infringement is uncertain, such a practice could be sanctioned on the ground of parasitism.

➡️ Kadrey et al. v. Meta Platforms Inc., U.S. District Court, N.D. California, June 25, 2025

On June 25, 2025, the United States District Court for the Northern District of California issued a long-awaited decision in the multi-author litigation with Meta Platforms Inc.

🔵 Facts:

In Kadrey et al. v. Meta, several authors are suing Meta Platforms Inc., accusing it of using their copyrighted works, without authorization, to train its artificial intelligence models, including LLaMA.

The case raises a fundamental legal question: can the use of copyrighted works for the training of an artificial intelligence model be considered fair use under US law?

🔵 The authors' accusations:

Thirteen authors accused Meta of using their books without permission to train its LLaMA and LLaMA 2 language models. According to them, these works were extracted from "shadow libraries" such as Books3, Bibliotik or Library Genesis. The plaintiffs alleged copyright infringement under 17 U.S.C. § 106, accusing Meta of reproducing their works in their entirety, creating unauthorized derivative works, and exploiting them for commercial purposes.

🔵 Meta's defense: fair use

For its part, Meta sought a summary judgment on the basis of the fair use exception provided for in Section 107 of the Copyright Act. This doctrine allows, under certain conditions, the use of protected works without authorization, in particular for research, teaching or transformation purposes.

🔵 Purpose and nature of the use:

The court found that training the model on the works in question constituted a transformative use, as the goal was not to reproduce or read the texts, but to allow the AI to learn the structures of natural language. Although the use has a commercial dimension, this was not considered decisive, because its primary purpose, according to the judge, was scientific and statistical in nature.

🔵 No economic harm demonstrated:

The court rejected the authors’ arguments regarding a possible economic impact, pointing to the lack of tangible evidence of harm. No data, expert analysis or concrete evidence was provided to demonstrate a loss of revenue, substitution in the market or a decline in the value of the works.

📣 What FIRSH experts say:

The American judgment therefore favors Meta and sends a favorable signal to generative AI players.

However, the court insisted that each situation will have to be assessed on a case-by-case basis.

This approach points to the broader question of setting up a licensing market for AI training, a solution already recommended in France by the CSPLA (mission report on the remuneration of cultural content used by artificial intelligence systems) and by the Senate in several previous reports.

🔗 Link to the decision: https://law.justia.com/cases/federal/district-courts/california/candce/3:2023cv03417/415175/598/?utm_

➡️ Landmark $1.5 billion settlement agreement between authors and artificial intelligence developer Anthropic

On September 5, 2025, a settlement agreement of unprecedented magnitude was reached in the United States between several authors and the artificial intelligence developer Anthropic. At $1.5 billion, it represents the largest settlement ever recorded in copyright matters and, more broadly, in disputes relating to the use of protected works for the training of AI systems.

The dispute concerns Anthropic's massive use of literary works downloaded from freely accessible pirate databases.

The agreement provides for a compensation fund of $1.5 billion, or about $3,000 per work concerned. This amount may be revised upwards depending on the final number of claims.

Beyond its financial scale, the agreement sends a clear message to the artificial intelligence industry: the use of content from “shadow libraries” or illicit sources inevitably leads to legal and economic consequences.

However, Judge Alsup suspended approval of the $1.5 billion settlement agreement between Anthropic and the authors. He found that essential elements were missing, in particular the precise list of the works concerned and the procedures for notifying the members of the class action, so that they can choose to join or withdraw (opt in / opt out).

✨ FIRSH Expert Update: Presented as a pilot agreement for future AI and intellectual property litigation, it will still have to prove its procedural soundness before it can serve as a reference.

✨INTELLECTUAL PROPERTY✨

➑️Trademarks: Nullity of filings that are part of a “fraudulent global economic strategy”

πŸ”΅ Facts: Two individuals filed a series of seven trademarks including the term “EMPORIO” for products in classes 3, 14, 18, 21, 24 or 25 (not all trademarks cover all goods). It is apparent from the decisions that the two individuals are related, since they share the same address, the same surname and the same representative. The trademarks were filed between 2018 and 2022 and were not opposed. The company GIORGIO ARMANI, which uses the trademark “EMPORIO ARMANI”, brought an action for a declaration of invalidity on the ground of bad faith against these seven registered trademarks.

πŸ”΅ Right to bring an action for a declaration of nullity

The owners of the contested trademarks sought to have GIORGIO ARMANI’s applications declared inadmissible, invoking an abuse of the right to act (in particular because no opposition had initially been filed). The INPI rejected the argument:

  • only conduct dictated by the intent to harm could constitute an abuse;
  • the absence of opposition does not deprive the person of the right to bring an action for a declaration of nullity;
  • GIORGIO ARMANI’s request is sufficiently reasoned and substantiated.

πŸ”΅ Characterization of bad faith

Two conditions must be met:

  • Knowledge of prior use: the reputation of the trademark “EMPORIO ARMANI” gives rise to a presumption that the applicants could not have been unaware of its exploitation.
  • Parasitic intention: beyond knowledge, it is necessary to demonstrate a desire to unduly exploit the reputation of others. The INPI found this element established, noting a repeated strategy of opportunistic filings (seven trademarks using the term “EMPORIO”, others imitating, for example, “CELINE”, “TOMMY”, “OFF WHITE”, etc.).

πŸ”΅ Conclusion

The INPI concluded that these filings were part of a “fraudulent global economic strategy” and declared the trademarks invalid. This series of decisions illustrates the growing importance of the ground of bad faith before the Office, even if the sanction remains limited: cancellation of the titles and a symbolic order to pay costs. The question now arises whether the EUIPO will adopt a similar approach for filings made at European level.

πŸ“£ FIRSH experts: FIRSH regularly assists its clients in proceedings before the INPI (opposition, invalidity), in particular to protect their renowned trademarks. The INPI’s decision recalls the importance of carefully selecting the supporting documents used to characterize the applicant’s bad faith; the applicant’s conduct outside the contested filing can also, in some cases, show that this bad faith stems from a fraudulent overall economic strategy.

πŸ”— Link to the decision: INPI, 2 May 2025, NL 23-0243 to 23-0249

➑️ APPLE / OPPLE: the exceptional French fame

APPLE filed a notice of opposition to the registration of the trademark OPPLE and succeeded for a wide range of services, including advertising, organization of exhibitions and trade shows, import-export services, sales promotion, sponsorship searching, licence management and marketing.

πŸ”΅ Recognition of an exceptional reputation

The Court recalls that the APPLE trademark enjoys a worldwide reputation, in particular for computer hardware. This exceptional reputation justifies enhanced protection against any similar sign.

πŸ”΅ Confirmation of the opposition

Given the proximity of the signs and the strength of the APPLE trademark, the Court of Appeal confirmed the merits of the opposition.

πŸ“£ What FIRSH experts say:

While this extended protection is consistent for services related to advertising and communication, the application of the same reasoning to more distant services such as auctions or the rental of vending machines may seem questionable. The decision illustrates the powerful effect of a brand’s reputation: it makes it possible to block the registration of similar signs in many fields, even those far removed from the initial core business.

πŸ”— Link to the decision: Paris Court of Appeal, July 4, 2025, RG No. 24/02690

✨FIRSH ECOSYSTEM✨

➑️ Report of the Commission of Inquiry into the Psychological Effects of TikTok on Minors

The report of the National Assembly’s commission of inquiry is now available. It analyzes TikTok’s mechanisms of influence on young people and proposes concrete measures to better protect them.

Chaired by Arthur Delaporte and reported by Laure Miller, the commission looked at the attention-grabbing devices used by the platform, the risks associated with prolonged exposure to content, the comparison between TikTok and its Chinese version Douyin, the psychological effects observed in minors as well as the flaws in regulation and moderation.

The report is published in two volumes: the first devoted to analysis and observations, the second to hearings.

The findings highlight a platform designed to maximize attention and addiction, often inappropriate content, insufficient moderation under the Digital Services Act, a lack of transparency on algorithms and data management, and worrying effects on young people’s mental health.

πŸ”΅ Recommendations:

Among the key recommendations are:

  • a ban on access to social networks (excluding messaging services) for those under 15;
  • the strengthening of regulatory resources;
  • a possible revision of the legal status of platforms to that of publisher;
  • support for the development of sovereign digital tools and European alternatives;
  • the integration of digital education from primary school onwards;
  • stronger academic oversight of media and information literacy;
  • the launch of a national awareness campaign;
  • an increase in the number of health professionals in schools.

πŸ“£ Words of FIRSH experts: This report is part of a broader and necessary dynamic of reflection on:

  • Regulation of the major digital platforms

  • Protection of minors’ personal data

  • Digital resilience and technological sovereignty

  • Information and media literacy

These are subjects on which the firm, as well as its lab, works on a daily basis. This report is therefore a strong political alert, but it would benefit from being complemented by longitudinal studies, an international comparative analysis, and broader consultation with educational, social and digital actors. It nevertheless paves the way for stronger regulation and collective awareness of the mental health issues raised by minors’ digital uses.

πŸ”— Official link to the report on the National Assembly website: Report page – National Assembly

➑️New Invitalia–Bpifrance agreement: cross-financing for innovative projects between Italian start-ups and French companies

From 15 September 2025, an agreement between Invitalia and Bpifrance allows Italian start-ups to submit innovation projects with French companies (AI, blockchain, IoT, digital economy) and to obtain joint, simplified and partly non-repayable funding, under the Quirinal Treaty.

✨ FIRSH NEWS✨

➑️Interview with Claire Poirson for Le Parisien

Our partner Claire Poirson was interviewed by Le Parisien as part of an article dedicated to the death of streamer RaphaΓ«l Graven, known as “Jean Pormanove”.

She provides her legal analysis and insight into the issues raised by this case.

Link to the article: https://www.leparisien.fr/faits-divers/mort-de-jean-pormanove-derriere-le-drame-les-derives-des-trefonds-du-web-20-08-2025-J4ZIAXNFIFBYHGPMVGTZHPBMY4.php

πŸ“£ FIRSH experts : This case illustrates the urgency of an effective application of the regulation to all platforms accessible in France. The principle of human dignity, at the heart of public order, must be an absolute safeguard against the commodification of online humiliation.

➑️Second interview with Claire Poirson for Le Parisien

As an extension of her first interview, our partner Claire Poirson was interviewed again by Le Parisien as part of a second article devoted to the death of Jean Pormanove.

In this article, Claire Poirson analyzes the investigation opened by the Paris Public Prosecutor Laure Beccuau on the basis of Article 323-3-2 of the Criminal Code, which punishes the provision of an illegal online platform with the aggravating circumstance of acts committed by an organized gang.

Link to the article: https://www.leparisien.fr/high-tech/mort-de-jean-pormanove-que-risque-la-plate-forme-kick-visee-par-une-plainte-et-deux-enquetes-26-08-2025-BYJGAJRCIFHB5N4ABCQTFVITNA.php

πŸ“£ FIRSH experts: By choosing to mobilise the criminal aspect, the authorities are sending a strong signal. This approach not only anchors the action in compliance with European law, but also reminds digital platforms that they expose themselves to real legal consequences when they fail to meet their obligations.

➑️ Impact France 

Renewed commitment: FIRSH, a company with a mission, has renewed its membership of the Impact France Movement, a network supported by and for entrepreneurs with a social and ecological impact.

➑️In particular, FIRSH has assisted its clients in the context of:

πŸ”΅A dispute brought before the summary judge (trademark summary proceedings and ordinary law summary proceedings) for the defence of a well-known brand in the automotive sector;

πŸ”΅ Opposition proceedings before the INPI for the defence of a well-known trademark in the automotive sector against a fraudulent trademark registration;

πŸ”΅ A consultation aimed at determining the conditions of use of a slogan in English only on a website accessible in France and its compliance with applicable law, in particular with regard to the Toubon Law;

πŸ”΅ An advisory mission to a company on the implementation in France of Directive (EU) 2019/882 of 17 April 2019, known as the “European Accessibility Act”, on accessibility requirements applicable to products and services, and the drafting of its accessibility statement published on its website;

πŸ”΅ A consultation for a startup using web scraping to make recommendations to ensure the compliance of its technology and its use with the regulations in force.

➑️ As part of its innovation lab, FIRSH is currently drafting its 2025 White Paper on the issues raised by deepfakes.

Addressing one of the major issues facing our societies, this White Paper required more than six months of in-depth work and an extensive study conducted with numerous French and international artificial intelligence experts on the technical, sociological, economic and legal aspects of deepfakes. The legal analysis that emerges from it is based on a detailed review of legislation, doctrine and court decisions. The White Paper therefore offers legal and practical recommendations for public authorities, companies and any individual interested in the subject.

This new edition aims to stay as close as possible to current developments and to bring together new leading legal experts in their fields.

Do not hesitate to contact us at the following email address to receive a copy of the White Paper πŸ‘‰ contact@FIRSH.LAW

πŸ“’ To follow us on LinkedIn and receive our newsletter, click here: https://www.linkedin.com/company/firshlaw/

πŸ“’ There is no direct collection of your personal data and therefore no emailing by FIRSH!
