Does anyone get the feeling that we're going to see a huge censoring of the internet under the guise of "preventing violent extremism"?
I feel like the TikTok ban is only the start.
The US is pissed that it couldn't 100% control the narrative on Israel genociding Palestine and sees the internet as the reason why. They've already put a lot of effort into homogenising and controlling the narrative on most big social media sites. I wouldn't be surprised if they started cracking down more under the guise of "stopping misinformation"
If you don't move, you don't notice your chains. Being censored directly on reddit was extremely radicalizing for a lot of people here. Once you've noticed the chains, it's almost impossible to unsee them. Once you've had physical violence committed against you at a peaceful protest, you can't forget just how thin the veneer of civility is. They're creating an entire generation of people like us by actively censoring and overreacting. The illusion is shattered permanently for more people every day.
Yes. For years now, when I’ve engaged with libs on the topic of free speech (usually w/r/t China), I point out that the amount of free speech the people of a country have is directly related to how much of a threat that speech is perceived to be by ruling powers. China has relatively free speech but it’s still a socialist country living in a capitalist world that wants it dead, so it’s not totally unfettered.
Libs love to tout how Americans have totally free speech (debatable, but still). But up until recently, free speech hasn't been a threat to power in the US. So sure, let the peasants have free speech, it won't actually change anything.
Well now it seems that the ruling class do perceive a threat. They thought they could control speech in the internet age by making sure the biggest social media outlets are firmly under their thumb. They have Facebook, Google, and Twitter. But TikTok changed the game. The fact that it’s from China is a happy coincidence for them - if it was instead from a vassal state or some relatively powerless state outside their orbit, they would have muscled their way in.
Not being able to control the narrative is a threat, so that speech needs to be restricted.
I can't wait for america to make its own great firewall due to malding over losing the narrative, posts will need to be smuggled in on USB drives dropped by balloons over the border
the us already has what's in effect a great firewall/iron curtain. Most of the internet touches google/cloudflare, and certain phrases are already censored at the infrastructure level, as we saw with akamai, along with certain Iranian site names blocked at the hosting level. on the platform level, the NSA, FBI, CIA, DOD, and State dept already control the algorithms; we've known this since the twitter files.
there's a difference between them doing something "legally" and already doing stuff
I feel like we've already been seeing it with deplatforming and shadow banning, but the real kicker has been the social cooling due to surveillance and individual social anxiety. More to come is expected imo
I feel like the constant warnings about misinformation are a way to manufacture consent for this. And like all good propaganda, there is truth behind it.
Oh yeah 100%. Liberals yelling about "Disinformatsiya" and other shit to try to make lying sound like a Russian plot. In practice "Misinformation" is anything they don't like, or that they disagree with, or that challenges their narrative. Doesn't matter if it's true or not, well documented or not. It's a thought-terminating cliche.
yes. and they will mainly be labeling and silencing us; the surveillance is already built for this. they already treat fascists in a much more lenient way online.
even irl, take the charlottesville protests: imagine how much police brutality there would have been if they were MLs instead of nazis.
did the fascist counterprotesters at the university protests get the same violent treatment from the police as the antiwar people?
e: let's organize and move to open platforms, folks. those are more resilient to censorship and surveillance.
We've been in a huge censoring wave for the last 8 years. Since Trump, violent and peaceful resistance to his actions gets you delisted, shadow banned, etc. They make some gestures towards banning people on the other side, but society is so awash with a miasma of white supremacy that you cannot really ban it. You can just ban those that get a little too mean.
Additionally, all of these platforms have spent humongous amounts of money basically banning the words that more accurately explain the world: killing, murder, suicide, genocide, occupation, resistance, etc. We've found ways of circumventing it with dollar signs, blacked-out words, emojis, etc. But it makes that information harder and harder to find.
Add in the extensive self-censorship people are doing (unalived) because they don't know what will get them in trouble with the platforms. Now that I think about it I'm kind of surprised I don't recall hearing "Chilling effect" recently.
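To make the workaround dynamic concrete, here's a minimal, purely hypothetical sketch of the kind of naive keyword filter being described and why the dollar-sign/blackout/algospeak substitutions slip right past it (the word list and logic are illustrative assumptions, not any platform's actual system):

```python
# Hypothetical sketch of a naive banned-word filter. Real platforms use far
# more sophisticated classifiers, but the cat-and-mouse dynamic is the same.
BANNED = {"suicide", "genocide", "killing", "occupation"}

def is_flagged(post: str) -> bool:
    """Flag a post if any banned word appears as plain text."""
    words = (w.strip(".,!?\"'") for w in post.lower().split())
    return any(w in BANNED for w in words)

print(is_flagged("reporting on the genocide"))   # True  -> suppressed
print(is_flagged("reporting on the g*nocide"))   # False -> slips through
print(is_flagged("he was unalived last week"))   # False -> slips through
```

The substitutions people reach for are exactly the ones a crude exact-match filter can't see, which is why the platforms keep escalating and users keep inventing new spellings.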
it has long seemed to me that the censoring of information in the west is done through distraction and entertainment. there is so much media to consume and the most easily consumed has historically been the media that serves the interests of the powerful. this is still true, though the market concentration of legacy media ownership reached a crescendo just as the internet started to proliferate.
capital has obviously inserted itself into the internet's largest platforms, which all benefit from network effects. the effect that social media, like facebook and twitter, have had on the dissemination of news is hard to overstate. of course, the legacy platforms try to differentiate themselves as being somehow more legitimate, but that distinction falls apart outside of a few obvious examples. the real difference is that the level of interactivity in legacy media is nonexistent.
legacy media has only ever been interested in creating one-way outputs: articles, videos, etc., where the forum of an engaged audience is presumed to exist and agree with the outputs. the web 2.0 phenomenon has completely blown this up. nowhere is this more obvious and absurd than their curated "Town Hall" events, where a handpicked Joe Blow is brought in to ask an approved question from a note card, and this is meant to represent the public square.
in any event, more to the question of censoring the internet, i think what we're seeing is the attempt to bring the "public square" under some level of control. we all know that people arguing in the comments section is often more interesting and engaging than probably 90% of media outputs. when that is taken away, people go elsewhere to do it. communities are still trying to find the level of moderation they desire for that kind of interaction. all the while, the established power structure is seeking to insert itself into that conversation within the largest communities. and yes, i think "preventing violent extremism" is the tactic that gives them the most leeway and power. "national security" implications give the most latitude in avoiding courts and issuing gag orders. "stopping misinformation" is probably going to be the framing that is used more broadly when some censorship becomes public. for example, though the laws around the banning of TikTok are all weird national security legalese, the way it's being framed by proponents of the ban is as a source of disinformation. i think this is because the national security argument has a better shot in legal interpretation than "people are lying on my internet program, ban the internet program".
a key piece of censoring the public square is to make sure the censorship itself doesn't invite much attention or scrutiny.
Yeah, I came here to say this. Censorship in the west works through changing emphasis or floods of nonsense. Average people don't want to sift through hours of footage or go to obscure forums. They want immediate information, or the first thing they find that sounds right.
Many discussions about social media governance and trust and safety are focused on a small number of centralized, corporate-owned platforms that currently dominate the social media landscape: Meta’s Facebook and Instagram, YouTube, Twitter, Reddit, and a handful of others. The emergence and growth in popularity of federated social media services, like Mastodon and Bluesky, introduces new opportunities, but also significant new risks and complications. This annex offers an assessment of the trust and safety (T&S) capabilities of federated platforms—with a particular focus on their ability to address collective security risks like coordinated manipulation and disinformation.
Centralized and decentralized platforms share a common set of threats from motivated malicious users—and require a common set of investments to ensure trustworthy, user-focused outcomes. Emergent distributed and federated social media platforms offer the promise of alternative governance structures that empower consumers and can help rebuild social media on a foundation of trust. Their decentralized nature enables users to act as hosts or moderators of their own instances, increasing user agency and ownership, and platform interoperability ensures users can engage freely with a wide array of product alternatives without having to sacrifice their content or networks. Unfortunately, they also have many of the same propensities for harmful misuse by malign actors as mainstream platforms, while possessing few, if any, of the hard-won detection and moderation capabilities necessary to stop them. More troublingly, substantial technological, governance, and financial obstacles hinder efforts to develop these necessary functions.
As consumers explore alternatives to mainstream social media platforms, malign actors will migrate along with them—a form of cross-platform regulatory arbitrage that seeks to find and exploit weak links in our collective information ecosystem. Further research and capability building are necessary to avoid the further proliferation of these threats.
The removal of political content on Instagram is basically that: it affects leftist accounts but means practically nothing to right wingers, because they've never used political arguments or things in the news to get their point across, they just post racist memes.
I've already gotten some 30 accounts perm-banned on social media for sending death threats to settlers.
However, there were no consequences for threatening to DDoS pro-Israeli groups, or even for making good on those threats by sharing IP addresses and posting screenshots, like when I took down the ACDP and SAFI websites, lmfao.
While also listening to bangers from Ansar Allah/Houthis, PFLP and the Al-Qassam/Al-Aqsa Brigades
I mean, more than what's already been happening? I feel like as time goes on, and they gain new insights, awareness, and the technical ability to censor, they will jump on those opportunities immediately. In other words, we're not just getting by right now on the platforms we enjoy because of their willingness to play fair, or their benevolence; once they have the chance, they'll go for it.
Yes. Can't say I could have imagined just a few years ago that, living in Europe, an entire country's web traffic would be location-banned from me without any actual consent of the people. And there was never even any discussion about it, they just did it because "misinformation".
I don't think so, but for the "wrong" reasons. Social media is completely opaque and closed-source. I feel like people could be censored without even knowing it, and they would have no way of finding out.
The nature of censorship could take on a whole new form given the tools available. Shadowbanning gives the user the illusion that they are still engaging with the community, without alerting them to the reality that they are not. The net effect is the average user never noticing, and ultimately never trying to circumvent or appeal, a ban. One clue, however, is the lack of engagement with the user. I could see places like Reddit or Twitter creating systems within their shadowban functionality that also create shadow-engagement, in an attempt to dispel any notion that their posts or comments are never seen by anyone.
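The mechanics are simple enough that a rough sketch fits in a few lines. This is just a hypothetical illustration of the feed-filtering idea (the names and logic are my own assumptions, not Reddit's or Twitter's actual code):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Hypothetical shadowban list maintained by the platform.
SHADOWBANNED = {"user_b"}

def visible_feed(all_posts: list[Post], viewer: str) -> list[Post]:
    """Hide a shadowbanned user's posts from everyone except the author,
    so the ban is invisible to the person it targets."""
    return [
        p for p in all_posts
        if p.author not in SHADOWBANNED or p.author == viewer
    ]

posts = [Post("user_a", "hello"), Post("user_b", "can anyone see this?")]
print([p.text for p in visible_feed(posts, viewer="user_b")])  # both posts: looks normal to them
print([p.text for p in visible_feed(posts, viewer="user_c")])  # only "hello": nobody else ever sees it
# The "shadow-engagement" idea above would go a step further: injecting fake
# likes or replies that only the banned user can see, to hide the telltale silence.
```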
Yeah, it's doable. Weaponized dead internet where you get a trickle of ChatGPT posts.
I am somewhat suspicious of how prevalent shadowbanning is, as I almost always hear the term being used by aggrieved right wing people who think they're not getting the engagement they deserve. But it'd also be very hard to evaluate, wouldn't it?
The problem isn't that censorship itself is bad, the problem is that the censors are a bunch of libs.
Seriously, we shouldn't have war footage and beheading videos and right wing propaganda freely disseminated on the internet. However, it's not in the left's favor to clamp down right now on this material that is clearly detrimental to society, because we don't hold any institutional power.
The harm being done by this stuff is outweighed by the freedom of information the left and black and gay and trans and indigenous and other marginalized communities need to spread our ideas and exert a presence on this nonphysical space we call the internet.
The internet should be a highly controlled, sanitized space used solely for, like, organizing workers and shit. This ain't about giving people a wild west to scream into the void; only we can do that.
I think a lot of people are ok with their platforms and work around the censorship without moving. The number of times I've seen "unalive", "secs", "oui'd" on tiktok. The most realistic outlook is that people will keep filming the crimes on these platforms and tagging them "geezer" and "isn't real".