• Cal Students Develop Way To Expose Fake News Accounts, Bots On Twitter
[QUOTE=Article] BERKELEY (KPIX) — Two Bay Area college students may have found a way to expose fake accounts on Twitter known as bots. From their apartment just off the Cal campus, Ash Bhat and Rohan Phadte of RoBhat Labs believe they have cracked the code that’s been bedeviling social media companies and Congress — fake news and bots. “There is a lot of these political propaganda accounts that are spreading fake news,” says Phadte. Like the famous alt-right account “Jenna Abrams” – which was revealed on Friday to be a Russian bot account. They’ve created a website called Botcheck.me – as well as a Google Chrome extension that can tell, based on a set of characteristics, whether the account you’re looking at is flesh and blood or ones and zeros. “We essentially train on these characteristics – get a good understanding of what a propaganda bot looks like and what a human looks like and we then predict whether the account is a bot or not based on that,” says Bhat. The project took these 20-year-olds just eight weeks to complete. They already tackled identifying fake news and propaganda on Facebook — so Twitter was next. “Looking further into these accounts, we realized some of these accounts aren’t really human behavior,” says Phadte. “They’re constantly retweeting every minute throughout the 24 hours, they have thousands of followers – but they created their account really only a month ago.” Botcheck.me and the Google Chrome extension already have tens of thousands of users and an impressive accuracy rate in separating real people from digital. [/QUOTE] Source: [URL]http://sanfrancisco.cbslocal.com/2017/11/04/cal-students-expose-fake-news-accounts-bots-twitter/[/URL]
This will arguably make them develop more systems that are harder to trace; hopefully we develop a foolproof method of identifying them as time goes on.
[QUOTE=ZombieDawgs;52859480]This will arguably make them develop more systems that are harder to trace; hopefully we develop a foolproof method of identifying them as time goes on.[/QUOTE] All it would take would be for the bot creators to slow down their post rate to that of a more normal rhythm, i.e. it only posts within a 12 hour period, a varying number of times an hour or something, and then make it so those bots "ramp up" popularity slower.
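A schedule like the one described above — posts confined to a single 12-hour "awake" window, with a varying number of posts each hour — is trivial for a botmaker to generate. A rough Python sketch (the window and per-hour cap are illustrative numbers, not anything from the thread):

```python
import random

def human_like_schedule(active_start=8, active_hours=12, max_per_hour=4):
    """Generate posting times (minutes since midnight) confined to a single
    12-hour 'awake' window, with a varying number of posts per hour — the
    evasion strategy described above, rather than tweeting once a minute
    around the clock."""
    times = []
    for hour in range(active_start, active_start + active_hours):
        # 0..max_per_hour posts this hour, at random minutes
        for _ in range(random.randint(0, max_per_hour)):
            times.append(hour * 60 + random.randint(0, 59))
    return sorted(times)

schedule = human_like_schedule()
```

Which is exactly why a classifier that leans only on raw posting rate would be easy to duck; the "ramp up slower" part just costs the operator time.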
[QUOTE=ViralHatred;52859615]All it would take would be for the bot creators to slow down their post rate to that of a more normal rhythm, i.e. it only posts within a 12 hour period, a varying number of times an hour or something, and then make it so those bots "ramp up" popularity slower.[/QUOTE] That still impacts their effectiveness though. It's not like email spam, where they can just spam from billions of different sources, because the effectiveness of this spam is reliant upon follower counts. 1000 bots with 20 followers apiece is not the same as 1 bot with 20,000.
The faster the bot pumps out tweets, the more visibility it achieves due to how many people are on their phones at any given time, looking at their Twitter feeds. Spamming tweets every few seconds guarantees that multiple people will see multiple tweets by the same user whenever they look at their feed.
I'm super interested in the use of (what I would imagine to be) some kind of neural network to detect fake news and bot Twitter accounts, so I went and actually watched the news segment to see if they expanded on what bits of information they collected to train their models.

[t]https://i.imgur.com/6kvxhM0.jpg[/t]

"Hey, we need this segment to look all math-y and science-y, go put a whiteboard in the background"
"Sure boss, what do you want me to put on it?"
"uhhhhhhhhhhhhhhhhhh, fuck it, just throw the quadratic formula on there or something, that's math-y"

[URL="https://medium.com/@robhat/identifying-propaganda-bots-on-twitter-5240e7cb81a9"]They have a pretty good article detailing the basic ideas behind how they made the classifier.[/URL]

The neat thing about machine learning is that because the final determination of "does this account look like a bot?" is the end result of potentially thousands of different bits of info about the account, there's (hopefully!) no truly easy way for botmakers to sidestep the classifier for long before the model catches onto them, short of spending the massive amount of resources it'd take to set up believable human accounts rather than churning out endless trash via the shotgun approach. And, as noted above, making them spend more resources per account to duck detection may not fully get rid of the problem, but I'd say it would help mitigate it.
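To make the "train on many bits of info" idea concrete, here's a toy stand-in: logistic regression over a few hand-picked account features. This is NOT RoBhat Labs' actual model (their Medium post describes their real approach and feature set); the feature names, scaling, and training data below are all invented for illustration, with features pre-scaled to roughly [0, 1].

```python
import math

# Invented features, pre-scaled to roughly [0, 1] for stable training:
# tweets_per_day / 1000, account_age_days / 3650, retweet_ratio as-is.
FEATURES = ["tweets_per_day", "account_age_days", "retweet_ratio"]

def predict(weights, bias, account):
    """Probability the account is a bot, via the logistic function."""
    z = bias + sum(w * account[f] for w, f in zip(weights, FEATURES))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, labels, lr=0.1, epochs=2000):
    """Plain gradient descent on log-loss."""
    weights, bias = [0.0] * len(FEATURES), 0.0
    for _ in range(epochs):
        for account, y in zip(data, labels):
            err = predict(weights, bias, account) - y  # dLoss/dz
            bias -= lr * err
            weights = [w - lr * err * account[f]
                       for w, f in zip(weights, FEATURES)]
    return weights, bias

# Made-up training data: bots tweet constantly from young accounts
# and mostly retweet; humans don't.
data = [
    {"tweets_per_day": 0.90, "account_age_days": 0.01, "retweet_ratio": 0.95},
    {"tweets_per_day": 0.01, "account_age_days": 0.66, "retweet_ratio": 0.30},
    {"tweets_per_day": 0.70, "account_age_days": 0.01, "retweet_ratio": 0.90},
    {"tweets_per_day": 0.02, "account_age_days": 0.25, "retweet_ratio": 0.40},
]
labels = [1, 0, 1, 0]  # 1 = bot, 0 = human

weights, bias = train(data, labels)
```

The point of the sketch: once the verdict depends on the *combination* of many weighted features rather than one threshold, a botmaker has to look human along every axis at once to slip past, which is where the per-account cost comes from.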
but how does this deal with actual human shitfarms like they had in Tunisia?
What if instead of detecting fake accounts we just develop our own botnets that fill social networks with so much benign garbage that they all shut down and life reverts to sanity?
[QUOTE=zakedodead;52860172]What if instead of detecting fake accounts we just develop our own botnets that fill social networks with so much benign garbage that they all shut down and life reverts to sanity?[/QUOTE] but that wouldn't change anything, because they're already filled with benign garbage created by human users
[QUOTE=ZombieDawgs;52859480]This will arguably make them develop more systems that are harder to trace; hopefully we develop a foolproof method of identifying them as time goes on.[/QUOTE] So eventually [IMG]https://imgs.xkcd.com/comics/constructive.png[/IMG]
They just look to see if the poster is posting an absurd amount of content per day, doesn't seem that impressive.
People who're inclined to listen to these bots probably aren't going to care whether the shit they're eating comes from a bot or a person. The real genius of Russian information warfare isn't in the volume (though that's certainly a factor in their success), it's that they latch onto and warp existing societal conversations, amplify controversy and deepen divisions. They don't spew overt propaganda, their goal isn't to convince anyone outside of Russia that Putin is Number One. For the United States and Europe, their goal is nothing more than to undermine and divide. Large segments of the public are hooked, and I don't see an easy way of getting them unhooked.
[QUOTE=nox;52860987]They just look to see if the poster is posting an absurd amount of content per day, doesn't seem that impressive.[/QUOTE] Would be better to look at the first friends they made, how regularly their content matches content from confirmed bots, how regular their activity start and end times are, etc. [editline]6th November 2017[/editline] [QUOTE=Psychokitten;52861027]People who're inclined to listen to these bots probably aren't going to care whether the shit they're eating comes from a bot or a person. The real genius of Russian information warfare isn't in the volume (though that's certainly a factor in their success), it's that they latch onto and warp existing societal conversations, amplify controversy and deepen divisions. They don't spew overt propaganda, their goal isn't to convince anyone outside of Russia that Putin is Number One. For the United States and Europe, their goal is nothing more than to undermine and divide. Large segments of the public are hooked, and I don't see an easy way of getting them unhooked.[/QUOTE] Not that warped; these bots cover a wide range of social issues. This is not all neo-Nazi edgelord poop
[QUOTE=mdeceiver79;52861265]Would be better to look at the first friends they made, how regularly their content matches content from confirmed bots, how regular their activity start and end times are etc.[/QUOTE] Or when you notice that the user you're looking at is posting news & politics 24/7, you know something's wrong. Although I wouldn't be surprised if some of the bots try to follow something of a sleep cycle in order to appear more real lol. A Jenna Abrams posting 24/7 about divisive social issues and other news etc. should seem a tad bit suspicious
[QUOTE=Sableye;52860157]but how does this deal with actual human shitfarms like they had in tunesia?[/QUOTE] Those things still follow predictable patterns to some extent. The user will only post in the hours of x to y, they only seem to post about z, and they post on average n times a day where n is greater than some heuristic threshold (so the threshold adjusts as the average Twitter user changes for example). It's all pretty much the same deal, just a slightly more varied data set to work with and a greater need to understand the data thoroughly to remove false positive and false negative matches. [editline]6th November 2017[/editline] [QUOTE=nox;52860987]They just look to see if the poster is posting an absurd amount of content per day, doesn't seem that impressive.[/QUOTE] It's not technically astounding. The logic isn't brand spanking new or mind blowing. But it's an interesting data challenge. These bots spit out a lot of shit, working out the identifiers based on the few we've caught so far and working with that data still requires some work.
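The heuristics just described — posting only in the hours of x to y, and posting more than an adaptive threshold n that shifts with the average Twitter user — are straightforward to sketch. Every field name and constant below is invented for illustration; this isn't anyone's actual detector:

```python
import statistics

def adaptive_threshold(population_daily_counts, k=3.0):
    """A threshold that moves with the population: mean + k standard
    deviations of everyone's posts-per-day, so it adjusts as the average
    Twitter user's behavior changes."""
    return (statistics.mean(population_daily_counts)
            + k * statistics.stdev(population_daily_counts))

def looks_like_shitfarm(account, population_daily_counts,
                        shift_start=9, shift_end=17):
    """Crude version of the heuristics above: the account posts only inside
    a fixed x-to-y work shift AND far more often than the adaptive
    population threshold. (Field names are hypothetical.)"""
    in_shift_only = all(shift_start <= h < shift_end
                        for h in account["post_hours"])
    over_threshold = (account["posts_per_day"]
                      > adaptive_threshold(population_daily_counts))
    return in_shift_only and over_threshold

# Toy data: a small population of normal daily post counts, and one
# suspect that posts 140 times a day, only during office hours.
population = [10, 14, 8, 22, 5, 17, 12, 9]
suspect = {"post_hours": [9, 10, 13, 16], "posts_per_day": 140}
```

The same shape of check covers human troll farms too: a paid shift worker still clocks in and out, which is exactly the regularity the heuristic keys on — the hard part, as said above, is tuning it on real data to keep false positives and false negatives down.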
[QUOTE=nox;52860987]They just look to see if the poster is posting an absurd amount of content per day, doesn't seem that impressive.[/QUOTE] [url]https://medium.com/@robhat/identifying-propaganda-bots-on-twitter-5240e7cb81a9[/url] Saying stuff like this makes you sound dumb.