Why Saying "Free Software" But Adding Vague Restrictions Is Useless
So, you have this great, beautiful image-generation model, right? The pictures it makes are beautiful; I use it every day, and so far I am only contemplating how to run it on my own machine, because my hardware is low-end.
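For what it's worth, here is a minimal sketch of how one might run FLUX.1-schnell locally with the Hugging Face diffusers library. It assumes a reasonably recent diffusers release with FluxPipeline support and a CUDA-capable GPU; enable_model_cpu_offload() is what makes it bearable on low-VRAM machines. Treat it as a rough starting point, not an official recipe.

```python
# Minimal sketch: FLUX.1-schnell locally via diffusers.
# Assumptions: diffusers with FluxPipeline support, a CUDA GPU, and enough
# system RAM to hold offloaded weights.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
# Move model parts to CPU when idle; slower, but fits on low-VRAM cards.
pipe.enable_model_cpu_offload()

image = pipe(
    "a cat holding a sign that says hello world",
    guidance_scale=0.0,        # schnell is a distilled model; no guidance
    num_inference_steps=4,     # schnell is tuned for very few steps
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux-schnell.png")
```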
It’s under Apache License 2.0. That means it’s free software. You can use it, modify it, share it, do whatever you want. But then, the author adds this big list of "Out-of-Scope Use." They say you can’t use it for bad things—like breaking laws, harming people, spreading lies, or making illegal content. Okay, fair, nobody wants their work used for evil. But here’s the problem: why even say it’s free software if you’re going to add all these vague rules?
Which international law? How about Shariah law? Do you realize that just by making a picture of a prophet, someone could already be in breach of your "Out-of-Scope Use" rules! Leave it to people to use it how they wish and want; you cannot be there anyway when some law actually gets broken.
You simply can't prevent it. Please make it truly free and remove the "Out-of-Scope Use" section.
How about writing guidelines that draw people's attention to responsible use, instead of disallowing things? By disallowing them, you are turning it into proprietary software.
Now, if I never see your "Out-of-Scope Use" list and only see the "Apache 2.0" license, then I am still free to make an image of a prophet, even if that would break some Shariah law in Saudi Arabia (I am not doing it, but we are talking about freedom here).
Anyway, let's say a country bans carrying guns. Well, guess what: criminals will not care and will carry guns anyway.
What is the point of appealing to gentle, intelligent, reasonable, common-sense people, if they already have common sense?! They will not do bad things anyway. All my friends are like that.
You cannot stop true criminals from doing whatever they want with FLUX Schnell.
The Apache License doesn’t care about how you use the software. It’s free. Period. If someone uses it for something illegal, that’s not the author’s problem. That’s for the police, the courts, the law to handle. But now, the author is trying to be the judge. They say, "Don’t use this for bad stuff," but what is "bad stuff"? It’s so vague. Someone could say, "I feel harassed," or "This is disinformation," and now what? Is the author going to investigate? Are they the law? No. They’re not.
Let me give you an example. Say someone uses the model to make a meme. Someone else says, "This meme is false information, and it harmed me." Who decides? The author? No. That’s not their job. Or what if someone says, "This content harassed me." Is the author going to sit in a courtroom and decide? No. That’s for the legal system.
And here’s the thing: if you’re going to say your software is free, then it’s free. You can’t add all these "but don’t do this, don’t do that" rules. It’s like saying, "Here’s a free car, but you can’t drive it on Tuesdays, or if it rains, or if someone thinks you’re a bad driver." That’s not free. That’s just confusing.
And we are speaking here of free as in freedom, as in liberty, not as in price.
If the author is worried about misuse, they should just say, "This is free software. Use it responsibly. If you break the law, that’s on you." That’s it. No vague rules, no trying to be the judge. Let the law handle the law.
Because here’s the truth: you can’t control how people use free software. Once it’s out there, it’s out there. Adding a list of "don’ts" doesn’t stop bad people. It just makes good people confused. And it makes the whole idea of "free software" feel fake.
So, authors, please: if you want your software to be free, make it free. Don’t try to control it. Don’t add vague rules. Let the law handle the criminals. That’s not your job. Your job is to create. Let the rest of the world figure out the rest.
Because right now, saying "free software" but adding all these restrictions? It’s useless. And it’s not really free.
References:
What is Free Software? – GNU Project, Free Software Foundation:
https://www.gnu.org/philosophy/free-sw.html
The Open Source Definition – Open Source Initiative:
https://opensource.org/osd
Example: Violating Laws Without Even Knowing It
Let’s say someone uses the model to generate a funny meme about a politician. They share it online, and it goes viral. But in Country X, making fun of politicians is illegal under their "anti-defamation laws." The person who made the meme had no idea about this law because they live in Country Y, where such memes are completely legal.
Now, according to the author’s rule, the model "may not be used in any way that violates any applicable national, federal, state, local, or international law or regulation." But here’s the problem:
The user didn’t know about the law in Country X.
The author of the model has no way of tracking or enforcing this rule globally.
The law in Country X might be vague or unfair, but the rule still applies.
So, technically, the user broke the rule. But how is the author going to enforce this? Are they going to monitor every use of the model in every country? Are they going to hire lawyers to check if every meme, tweet, or blog post generated by the model complies with every law in the world? Of course not. It’s impossible.
Example 1: Unintentional Spread of False Information (Deepfake)
Imagine a graphic designer uses a picture-generation model to create a promotional image for a charity event. They input a description like, "A group of doctors providing medical care to children in a war-torn country." The model generates a realistic-looking image of doctors in white coats treating children in a makeshift clinic.
However, the designer doesn’t realize that the model has generated a fake logo for a well-known humanitarian organization in the image. The designer uses the image in their campaign, and it goes viral. Later, the organization notices the fake logo and accuses the charity of misrepresenting their involvement.
According to the rule:
False Information: The image contains a verifiably false element (the fake logo).
Harm: The charity’s reputation is damaged, and the organization’s trust is undermined.
Intent: The designer didn’t intend to harm anyone—they just didn’t notice the fake logo.
Did the designer break the rule? Technically, yes. But it was an honest mistake, not malicious intent.
Example 2: Intentional Misinformation (Fake Evidence)
Now, consider a bad actor who uses the picture-generation model to create fake evidence for a political smear campaign. They generate a realistic-looking photo of a politician accepting a bribe from a known criminal. The photo is completely fabricated, but it looks convincing.
The bad actor then spreads the photo on social media, claiming it’s real. The false image goes viral, and the politician’s reputation is severely damaged, even after the photo is debunked.
This is a clear violation of the rule:
False Information: The photo is verifiably false (it was generated by AI).
Harm: The politician’s reputation is harmed, and public trust is eroded.
Intent: The bad actor’s purpose is to harm the politician and manipulate public opinion.
But here’s the problem: the author of the model has no way to stop this. Once the model is released under a free license, anyone can use it, including bad actors. The rule is just words—it doesn’t prevent misuse.