It is time to start holding social media sites liable for posting AI deceptions. FB is absolutely rife with them.
YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that’s nothing but badly-written lies. Same with reports of Trump legal defeats that haven’t actually happened. They are flooding the zone with shit, and poisoning search results with slop.
Disagree. Without Section 230 (or equivalent laws in their respective jurisdictions), your Fediverse instance would be forced to moderate even harder for fear of legal action. I mean, who even decides what “AI deception” is? Your average lemmy.world mod, an unpaid volunteer?
It’s a threat to free speech.
Just make the law so it only affects platforms with some minimum number of millions of users, or some minimum percentage of the population. You could even have regulation tiers tied to the number of active users, so those over the billion mark, like Facebook, are regulated the strictest.
That’ll leave smaller networks, forums, and businesses alone while finally imposing some sorely needed regulation on the large corporations messing with things.
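To make the tier idea concrete, here’s a minimal sketch of what a user-count threshold scheme could look like. The cutoffs and tier names are invented for illustration, not from any actual law or proposal:

```python
# Hypothetical regulation tiers keyed to monthly active users.
# All thresholds are made up for the sake of the example.
def regulation_tier(monthly_active_users: int) -> str:
    if monthly_active_users >= 1_000_000_000:
        return "strictest"   # Facebook-scale platforms
    if monthly_active_users >= 100_000_000:
        return "strict"
    if monthly_active_users >= 10_000_000:
        return "baseline"
    return "exempt"          # small forums, Fediverse instances

print(regulation_tier(3_000_000_000))  # strictest
print(regulation_tier(50_000))         # exempt
```

The point of the bottom tier is exactly the one made above: a volunteer-run instance never crosses the threshold, so the compliance burden lands only on the giants.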
How high is your proposed number?
Why is Big = Bad?
Proton have over 100 million users.
Do we fine Proton AG for a bunch of shitheads abusing their platform and sending malicious email? How do they detect it if it’s encrypted? Force them to backdoor the encryption?
Proton isn’t social media.
If you can’t understand why big = bad in terms of the dissemination of misinformation, then clearly we’re already at an impasse, and there’s no point discussing possible numbers, statistics, or other variables for determining potential regulations.
Proton is not a social medium. As to “how high”, the lawmakers have to decide on that, hopefully after some research and public consultations. It’s not an unprecedented problem.
Another criterion might be revenue. If a company monetises users’ attention and makes above a certain amount, put extra moderation requirements on it.
Yeah, say I work for your biggest social media competitor: why wouldn’t I just go post slop all over your platform with the intent of getting you fined?
Also, it would be trivial for big tech to flood every Fediverse instance with deceptive content and get us all shut down.
I think just the people need to be held accountable. While I am no fan of Meta, it is not their responsibility to hold people legally accountable for what they choose to post. What we really need is zero-knowledge-proof tech to verify a person is real without having to share their personal information, but that breaks Meta’s and others’ free business model, so here we are.
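For the curious, the “prove something about yourself without revealing it” primitive the parent comment gestures at is a zero-knowledge proof of knowledge. A toy sketch of the classic Schnorr identification protocol (tiny demo parameters, nothing any real platform deploys; production systems use 256-bit+ groups or elliptic curves):

```python
import secrets

# Toy Schnorr identification: the prover convinces a verifier it knows
# a secret x (with public key y = g^x mod p) without ever revealing x.
p = 1019   # safe prime: p = 2*q + 1
q = 509    # prime order of the subgroup generated by g
g = 4      # generator of the order-q subgroup (a quadratic residue)

# Prover's long-term secret and public key
x = secrets.randbelow(q - 1) + 1
y = pow(g, x, p)

def prove_and_verify() -> bool:
    # 1. Prover commits to a fresh random nonce k
    k = secrets.randbelow(q - 1) + 1
    t = pow(g, k, p)
    # 2. Verifier sends a random challenge c
    c = secrets.randbelow(q)
    # 3. Prover responds; without k, s reveals nothing about x
    s = (k + c * x) % q
    # 4. Verifier checks g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

print(prove_and_verify())  # True for an honest prover
```

An identity scheme built on this would let you prove “I hold a credential issued to one real person” without disclosing which person, which is exactly what’s incompatible with an ad-targeting business model.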
Sites AND the people that post them. The age of consequence-less action needs to end.
Or more like, just the people that post them.