Copyright regulations for Generative AI
A number of countries have introduced legislation or guidance to regulate the use of AI:
- European Union: AI Act - this takes a risk-management approach, creating a regulatory framework with four levels of risk for AI systems: unacceptable risk, high risk, limited risk and minimal risk. AI systems posing an unacceptable risk will be banned. AI systems and models must comply with transparency requirements and EU copyright law, including providing detailed accounts of the datasets used to train them. The newly established AI Office will be key to implementing the AI Act, which comes into force on 1 August 2024, with its provisions taking effect over the following 6-36 months.
- United Kingdom: AI regulation: a pro-innovation approach (March 2023) - this white paper was produced under the Sunak Conservative government as part of the National AI Strategy. It set out the government's proposed approach to AI regulation, and the government sought the views of stakeholders such as the creative industries and AI companies with the aim of creating an effective voluntary code of conduct. Agreement could not be reached, and in February 2024 the government confirmed that the working group had been unable to agree. In its response, the government stated that the creative industries and rights holders are concerned about the large-scale use of copyright-protected works for training AI systems and want to retain control and autonomy over their own work, whilst AI developers argued that they need access to a wide range of quality datasets to train and develop AI systems in the UK. The response gives little detail about any future action.
- United States: Blueprint for an AI Bill of Rights (October 2022) - this is intended as a guide rather than an act of law. It sets out five principles to guide the implementation and deployment of AI systems in order to protect the public. In April 2024, Representative Adam Schiff of California introduced the Generative AI Copyright Disclosure Act of 2024, which would require AI companies to submit records of their training data, including any copyright-protected works, to the Register of Copyrights at least 30 days before releasing their AI system.
- China: China is developing a new regulatory approach to AI, linked to its data protection framework. Since 2017 the Chinese government has prioritised AI governance, seeking to strike a balance between protecting copyright and not stifling AI innovation. Whilst in earlier years this took the form of non-mandatory best practice for AI developers, since 2022 the Chinese government has produced mandatory regulations covering intellectual property and content moderation. China's AI governance framework is based on three pillars: 'Content Moderation', 'Data Protection' and 'Algorithmic Governance'. The first pillar, 'Content Moderation', is of chief significance to copyright because AI developers will be held accountable and must be transparent about the content used for AI training and any outputs generated by AI.
This is a constantly shifting regulatory landscape, as AI technologies evolve and countries introduce new regulations governing their use. As such, this will be closely monitored and updated.