fdaudens posted an update 2 days ago
Fascinating point from @thomwolf at Web Summit: AI misuse (deepfakes, fake news) is actually easier to produce with closed models than with open-source ones.

This challenges the common narrative that open-source AI is inherently more dangerous. The reality is more nuanced: while open-source models may seem technically easier to misuse, the accessibility and product-focused design of closed models appear to be driving more actual harm.

Important context for current AI safety discussions and regulation debates.

Do you agree? 👇

This is not really a surprise.
Generations from big providers are often not as restricted as one would expect them to be.
Corporations tend to have far more money than open-source projects, which can lead to better-performing models. They also tend to have all the big GPUs, so I think this just makes sense.

If they (as in, big tech companies) wanted to make generations safer, they would probably pass each prompt through a safety LLM before generating, as in the sketch below.
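A minimal sketch of that idea, assuming a hypothetical moderation classifier: the model id and its "unsafe" label scheme below are placeholder assumptions, not a real checkpoint.

```python
# Minimal sketch: gate a text generator behind a safety classifier.
# "your-org/prompt-safety-classifier" is a hypothetical placeholder;
# substitute any text-classification model trained for moderation.
from transformers import pipeline

safety_check = pipeline("text-classification",
                        model="your-org/prompt-safety-classifier")  # hypothetical id
generator = pipeline("text-generation", model="gpt2")  # small stand-in generator

def safe_generate(prompt: str) -> str:
    verdict = safety_check(prompt)[0]
    # Assumed label scheme: the classifier emits "unsafe" for harmful prompts.
    if verdict["label"] == "unsafe" and verdict["score"] > 0.5:
        return "Request refused by safety filter."
    return generator(prompt, max_new_tokens=100)[0]["generated_text"]

print(safe_generate("Write a friendly greeting."))
```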

Most open-source models are also tailored to local use "at home", meaning their sizes are usually on the smaller side.