• Republicans would be banned if Twitter used algorithms against white supremacy
    30 replies, posted
https://motherboard.vice.com/en_us/article/a3xgq5/why-wont-twitter-treat-white-supremacy-like-isis-because-it-would-mean-banning-some-republican-politicians-too?utm_source=reddit.com&utm_source=reddit.com

At a Twitter all-hands meeting on March 22, an employee asked a blunt question: Twitter has largely eradicated Islamic State propaganda from its platform. Why can't it do the same for white supremacist content?

An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues went up to the mic to add some context. With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts, such as Arabic-language broadcasters. Society, in general, accepts the benefit of banning ISIS as worth inconveniencing some others, he said.

In separate discussions verified by Motherboard, that employee said Twitter hasn't taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians. The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Banning politicians wouldn't be accepted by society as a trade-off for flagging all of the white supremacist propaganda, he argued.

There is no indication that this position is an official policy of Twitter, and the company told Motherboard that this "is not [an] accurate characterization of our policies or enforcement—on any level." Any move that could be perceived as anti-Republican is likely to stir backlash against the company, which has been criticized by President Trump and other prominent Republicans for having an "anti-conservative bias."
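The tradeoff the employee describes can be sketched as a simple score-threshold problem (a toy illustration with made-up names and numbers, not Twitter's actual system): the more aggressively you set the threshold to catch propaganda, the more innocent but topically similar accounts get swept up with it.

```python
def flag(accounts, threshold):
    """Flag every account whose classifier score meets the threshold."""
    return [name for name, score in accounts if score >= threshold]

# Hypothetical classifier scores: propaganda accounts score high, but an
# innocent account covering the same topic (e.g. a broadcaster) scores in
# the same range because it uses similar vocabulary.
accounts = [
    ("propaganda_account_1", 0.95),
    ("propaganda_account_2", 0.80),
    ("news_broadcaster", 0.75),  # innocent, but topically similar
    ("ordinary_user", 0.10),
]

print(flag(accounts, 0.90))  # conservative: misses propaganda_account_2
print(flag(accounts, 0.70))  # aggressive: catches both, plus the broadcaster
```

Neither threshold is "correct"; the choice is a policy decision about which errors society will tolerate, which is exactly the point being argued at the meeting.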
That sounds like a personal problem for the white supremacist politicians then.
"When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts as well, such as Arabic language broadcasters."

Sounds more like algorithms are just a hell-technology that we're gonna look back on in a few decades and wonder why we ever trusted them so strongly with so few reservations.
The robot wars will start with us creating AI to moderate these people.
Who cares? Goblins living in the 1800s who haven't gotten with the times can be left behind for all I give a shit. Bigotry has no place in civilized society, and everyone has to learn this sooner or later.
Twitter is just a platform of the GOP, just like Fox; they will not police one side to the standard they police all other sides.
Yeah, this isn't actually as bad as it sounds. It's not so much that they can't use it because Republicans are all turboracists who'd get banned; they can't because it's still an iffy technology that has trouble distinguishing Arabic writing from terrorist propaganda.
Hate to tell you, but "algorithms" are driving every piece of technology you've ever used; that term is so overloaded at this point. If you're referring to using AI and machine learning, that's literally the future, of significance on par with the discovery of electricity and the creation of the internet. It isn't going anywhere.
I'm not saying we'll stop using them, I'm saying that the way we're using them right now is probably going to end up like leaded gas after a while.
And that's the thing with algorithms: you can continuously improve them until they're literally perfect at predicting shit.
Eating ten pounds of sugar a day because it was "healthy" in foods with no fat was a thing from the 50s to the 2000s.
"You can't teach an old dog new tricks" seems to be applicable here, one of the few times it is.
It really sucks that these platforms, which are near utilities in their scale of use, are so consolidated in how they're owned and moderated. YouTube, Facebook, and Twitter have been used to rally the production of systemic violence. They're too big to truly die, yet they're clearly in the hands of people who will continue to allow them to be weaponized against vulnerable people. I think it would be best to break up these nigh-monopolistic social media companies.
Because America just has that many white supremacists. AKA, too many people would be banned/erased, and it would create way too much noise that would draw attention to Twitter. Twitter would essentially risk being taken down if they went after white supremacists. There are that many.
I'd laugh so hard if it considered Trump a white supremacist. And then cry, because he's our president.
White supremacy really is a lifestyle choice the Republicans are so keen to attack people for.
Of all the social media companies, Twitter is the least malevolent IMO. They have the most open and accepting stance of them all, including Reddit, denying very little in the name of free speech. Which can be a double-edged sword obviously, but I think the intent is noble.
You really can't claim "nobility" when you're allowing white supremacists.
I never said their entire platform is noble, and making unsourced claims to the contrary doesn't really help things. I'm only speaking relative to the other current social media platforms.
Elaborate, please. I don't understand what this means.
I don't like it very much, but I'm part of society, Twitter. And I encourage you to turn the algorithms up to 11.
Well it must be pretty close to time for ol' fido to go live on that farm upstate.
Sounds great, when can we start running the algorithms?
Man even robots think that Republicans are the party of hate
How fucking surprising.
Compared to their competition, I view them in a higher light.
Just because they potentially aren't as shit as their "competition" (let's be honest, social media sites don't "compete" with each other; they co-exist and thrive off of each other) doesn't mean they deserve any kind of "praise" for it. Like... they're all still going below the bar. They don't deserve any commendation for not going as low; they ain't noble.
Only if that shit is predictable. Every great model is only good for a given set of bounds; once you cross them, things can get dicey quickly. Predicting human speech or analyzing things like euphemisms will never be 100% perfect, or even 90% perfect, because people will start using different words to mean the same thing and code their language.
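The coded-language point can be shown with a toy keyword filter (hypothetical word list, not any real moderation system): the moment users swap in a spelling the filter was never built for, it silently stops working, even though nothing about the filter itself changed.

```python
# Hypothetical blocklist for illustration only.
BLOCKLIST = {"badword"}

def is_flagged(text):
    """Flag text containing any blocklisted word (naive exact match)."""
    return any(word in BLOCKLIST for word in text.lower().split())

print(is_flagged("this post contains badword"))  # True: in-distribution input
print(is_flagged("this post contains b4dword"))  # False: trivially coded, filter misses it
```

Real systems use learned classifiers rather than literal word lists, but the failure mode is the same: the model is only reliable inside the distribution it was trained on, and adversarial users deliberately move outside it.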
Nope. Manual moderation of services with exponential growth and billions of submissions per day literally isn't possible.