• The future of fake news: Can you tell a real video from a deepfake?
Can you tell a fake video from a real one? Artificial intelligence is emerging as the next frontier in fake news, and it just might make you second-guess everything you see. Aviv Ovadya is a technologist who sounded an early warning about the rise of fake news, well before the phrase became a White House favourite. In early 2016 he warned how vulnerable our digital world was to misinformation and manipulation, outlining a scenario that at the time sounded pretty far-fetched: social media had created an explosive breeding ground for misinformation that would have real-world consequences. One of the scenarios Ovadya is most worried about doesn't even require the technology for creating deepfakes to be 100 per cent convincing all the time. Instead, he's concerned that the technology's very existence could create a situation where people feel they can't trust what they see, and that might allow those in power to use deepfake technology as a cover for their wrongdoing.
http://www.abc.net.au/news/2018-09-27/fake-news-part-one/10308638
This is an interactive article with examples, so for the love of god actually click the link and read the article.
I was wrong about every single one. Holy shit. We are headed to a very dangerous place.
Dunno if I'm bad at this, but I only got a few right. If this spirals out of control, it could be a huge pain for everyone. I wonder if they'll need to make an AI to detect deepfakes like this.
But what if the AI lies as well?
If there's no AI to detect it, it could be worse than a "huge pain". It could destroy the entire concept of video evidence from both a legal and societal viewpoint. Imagine not being able to believe anything you didn't see happen in person.
I only got 2 wrong, nice. Part of it is knowing how they move.
Got all of them right; the artifacting is pretty obvious. That being said, they all would hold up if they weren't being put under scrutiny. I'm not too afraid. There have already been ways for unscrupulous journalists to manipulate footage for a while now; this is just taking it up a notch. Take everything with a grain of salt.
4K and 10-bit color make dithering and blending stand out pretty hard.
Photoshop has existed for a couple of decades now.
I got most of them right but it was just by guessing. I'd be interested in seeing how well this sort of thing performs on subjects that aren't just a face looking straight into a camera with perfect lighting.
I got 2 wrong. Personally, some of them give off an uncanny feeling and that's what tipped me off, sorta like how the faces in L.A. Noire don't move naturally with the body.
Yeah, but Photoshop isn't perfect. It's done by humans, so you can almost always figure out whether it's fake, and even if you couldn't, photos are not considered the top tier of evidence; videos are. If you can't trust videos, you can't trust anything.
They're only going to get better, so while they may be noticeable now, down the line we might have huge problems. Things like Adobe Voco really make me wonder what benefits they think they're bringing by developing these things. I can only really see downsides.
AI watchmen. You're totally right. Frankly my casual language really does undersell the potential issue (at worst, it's probably a few ticks past way uncool). I wonder how convincing a computer generated voice could be when paired with this tech.
I want off this ride
I only got the last one wrong, nice
Fake news aside, think of the porn!
Because someone is dumb enough to think a lot of celebrities, including A-list women, would decide to do professional porn. Some look legit, but not always.
the porn already exists
I guessed all of them. The way they struggle to track fast head movement, and the weird artifacts (especially around the ears), make them stand out. Although I'm not too keen on the article bamboozling you by forcing you to pick one or the other in the first choice when they're both fakes; that's a bit disingenuous.
Welp, I got all of them wrong. Maybe we'll have to start sending or signalling some kind of unique code on videos just to make sure they're legit.
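A minimal sketch of what that "unique code" could look like in practice, assuming the publisher signs a hash of the original file: this uses Python's standard hashlib plus the third-party cryptography package, and the file name speech.mp4 is just a made-up placeholder.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path, chunk_size=1 << 20):
    # SHA-256 over the raw video bytes, read in 1 MiB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the digest of the original footage.
private_key = Ed25519PrivateKey.generate()
digest = file_digest("speech.mp4")  # placeholder file name
signature = private_key.sign(digest)

# Viewer side: verify against the publisher's public key.
# Any change to the video bytes makes this raise InvalidSignature.
private_key.public_key().verify(signature, digest)

Of course this only proves the file hasn't been altered since it was signed; it says nothing about whether the footage was genuine in the first place.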
You also need to consider how quickly this technology has progressed in such a short amount of time, especially now that machine learning is driving it. How long until that artifacting is fixed? A year? Two years? Six months? Then it becomes available to a wider audience, and then it goes public under more innocuous things like the face-swap app. I mean, come on, what red-blooded straight man wouldn't consider, for just a minute, using a program that can produce a realistic-looking sex tape of you (a fit you) boning Jourdan Dunn? And voice synthesis isn't far behind, so it'll sound like you too. I have a strong feeling that the rise of deepfake AI will set the way we deal with the internet back to the pre-social-media days, and people will either clamp down tight on their privacy settings or just stop engaging altogether and cease posting pictures and videos of themselves. Why would you? So a crazy ex can later commission some deepfake of you for the purpose of extortion? Because, oh no, I have this very realistic-looking video of you having sex with an unconscious ten-year-old, your neighbor; maybe I'll show their parents the footage? So 4chan or some other online group can decide to target you "for the lulz" and stick your face on bestiality porn? It will breed paranoia: everyone will be a possible target and everyone else a potential threat.
I got them all right, except that I said Obama for the first one, and then they tell you at the end that it's also fake. Fuck you, article, I knew it! Parts of his ears and hair move weirdly. Don't you gaslight me. So I'm curious whether you could train a CNN or something on labeled known fake videos and real videos and reliably detect deepfakes.
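For what it's worth, the CNN idea looks roughly like this as a minimal sketch: a small binary classifier over individual frames labeled real (0) or fake (1). This assumes PyTorch and uses random tensors as a stand-in for an actual labeled frame dataset, which is the hard part to get.

import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit: probability the frame is fake

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = FrameClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in batch: 8 RGB frames at 128x128 with random real/fake labels.
frames = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()

for step in range(3):  # a few dummy training steps
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()

Getting enough labeled fake footage, and keeping a detector like this ahead of the generators, is the real problem.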
Wasn't it banned on most sites tho
Neural Networks can't even tell if there are cars, cats or bridges in a picture with 100% certainty. I think analyzing whether something is a deepfake through a neural network is something that will always be way, way behind creating the actual deepfake.
Could also go the other way. Because everyone and everything can be faked, no one believes anything is real and you can get away with doing things for real and just say it was deepfaked. Blackmail becomes impossible because no one believes the person being blackmailed actually did the thing.
It's really hard to tell how convincing these look when the compression is so bad. Hair is also the biggest giveaway; it's always super grainy and doesn't reflect light properly. On the Theresa May one you can see the stray strands of her hair not lining up properly in places, and when it moves it looks misaligned.
Hair and ears seem to instantly set off "FAKE!!!!" in my brain if they look/move wrong. Only got the "Both are real videos!" wrong because I was hardwired to think one of them MUST be wrong.
Keep in mind, the examples in the article are cherry-picked to be as easy as possible. You add some angle combined with movement, and the complexity skyrockets. That's not to say that a lot of footage isn't like this. Especially stuff like them giving speeches.
Pay attention to the clothes and collars too, they move with the fake's head rather than in opposition, like a badly rigged game model.