European Union lawmakers are set to give final approval to the 27-nation bloc’s artificial intelligence law Wednesday, putting the world-leading rules on track to take effect later this year.
Lawmakers in the European Parliament are poised to vote in favor of the Artificial Intelligence Act, five years after the rules were first proposed. The AI Act is expected to act as a global signpost for other governments grappling with how to regulate the fast-developing technology.
“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” said Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law.
Big tech companies generally have supported the need to regulate AI while lobbying to ensure any rules work in their favor. OpenAI CEO Sam Altman caused a minor stir last year when he suggested the ChatGPT maker could pull out of Europe if it can’t comply with the AI Act — before backtracking to say there were no plans to leave.
No AI surveillance, AI scoring, or AI targeted at children. Law enforcement may use AI tools only to filter already-collected data, and only for serious crimes. Generative AI must be labeled and copyrights must be respected. The European Commission reserves the right to review any high-risk uses of AI.
Developers of general purpose AI models — from European startups to OpenAI and Google — will have to provide a detailed summary of the text, pictures, video and other data on the internet that is used to train the systems as well as follow EU copyright law.
Which makes me think that it'll be used to require models to truly open their "source"
The FOSS community really needs to come up with a better definition and licensing model for LLMs and other neural networks, though. I've seen multiple times where people refer to freely provided pre-trained models as "open source"
AIs aren't truly open source unless their training code and the training data is fully provided. Anything else is at most semi-obfuscated and definitely not "open"
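The commenter's distinction can be made concrete. This is an illustrative sketch (not from the article, and not any official taxonomy): it classifies a model release by which artifacts are actually published, following the definition above that a model is only "open source" if the training code and training data are provided along with the weights.

```python
from dataclasses import dataclass

@dataclass
class ModelRelease:
    weights: bool        # pre-trained weights are downloadable
    training_code: bool  # full training pipeline is published
    training_data: bool  # training corpus (or an exact recipe for it) is published

def openness(release: ModelRelease) -> str:
    """Classify a release per the commenter's definition."""
    if release.weights and release.training_code and release.training_data:
        return "open source"   # reproducible end to end
    if release.weights:
        return "open weights"  # usable and fine-tunable, but not reproducible
    return "closed"

# A typical "free pre-trained model" download ships weights only:
print(openness(ModelRelease(weights=True, training_code=False, training_data=False)))
# → open weights
```

Under this scheme, most of the freely provided models people call "open source" today would land in the "open weights" bucket, since the data and training pipeline stay private.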
> Which makes me think that it’ll be used to require models to truly open their “source”
That's unlikely. It only requires a "summary", which will be of limited use for reverse-engineering the big models. It does, however, provide a club with which to beat small developers.
I don't think many people who publish finetunes on Hugging Face (think GitHub for AI models) will bother with this. I'm not sure what that would mean for the legality of HF as a whole.
Damn, it's actually helping FOSS. Another good one by the EU. Yeah, people calling the Llama models FOSS is just plain wrong and gives the Zucc more credit than he deserves.
Why do you need the training data? To me, if you can use it and modify it as you wish then it's open source. If you need a copy of the training data then that's a problem, even outside the EU.
Many (all?) of the so-called open source models have "ethical" restrictions on use, so technically not open. It's close enough to me, for now. In the future, such clauses will become an issue. Imagine if printing presses came with restrictions on what you can and can't print.
It's pretty bad for everyone. Don't throw out your fax machine. But it won't outright kill FOSS. There are exceptions to many of the rules, which will allow it to survive. How much FOSS gets hampered will depend on how regulators interpret and enforce those exceptions.