The bill will classify AI tools by risk level and require developers of generative-AI applications to disclose any use of copyrighted material.
Controversies around artificial intelligence (AI) and its use of copyrighted material have arisen in various scenarios as the technology is increasingly used in content creation.
Legislators in the European Union responded to the growing use of AI with a vote on April 27 that pushed forward a draft of a new bill designed to keep the technology, and the companies developing it, in check.
Details of the bill will be finalized in the next round of deliberations between legislators and member states. As it currently stands, however, AI tools will soon be classified according to their risk level, ranging from minimal and limited to high and unacceptable.
According to the bill, high-risk tools will not be banned outright, but they will be subject to stricter transparency requirements. In particular, generative AI tools such as ChatGPT and Midjourney will be obliged to disclose any use of copyrighted materials in AI training.
Svenja Hahn, a member of the European Parliament, commented that in its current form the bill strikes a middle ground between too much surveillance and over-regulation, protecting citizens while aiming to “foster innovation and boost the economy.”
The bill, which is part of the EU’s Artificial Intelligence Act, was initially proposed as draft rules nearly two years ago.
In the same week, the European think tank Eurofi, which brings together enterprises from the public and private sectors, released the latest edition of its magazine, which included an entire section on AI and machine-learning applications in finance in the EU.
The section included five mini-essays on AI innovation and regulation within the EU, particularly for use in the financial industry, all of which touched on the upcoming Artificial Intelligence Act.
One author, Georgina Bulkeley, the director for EMEA financial services solutions at Google Cloud, said in reference to the legislation:
“AI is too important not to regulate. And, it’s too important not to regulate well.”
These developments come shortly after the EU’s data watchdog voiced concern about the potential trouble AI companies in the United States could run into if they fall out of line with the bloc’s General Data Protection Regulation.