• The future of fake news: Can you tell a real video from a deepfake?
Why are people saying they got nothing wrong? You instantly get it wrong the moment you begin.
Some people can tell right away, though.
Obama's hair looks like static, kind of like if you did a Photoshop auto select. It really threw me off on the first one, because the other one was clearly fake.
I think the point of it was to show that if you're not looking for the deepfakes, you'll probably think it's real.
There's just something off about seeing a fake. Don't know what it is, but it's just obvious when watching it move. Feels unnatural to look at.
Ahhh fuck I got most of them wrong. Guess we have to start recognizing other parts of the human body beyond the face.
Ha, got them all right, fuck you AI. Trick is not to look directly at the face so much as the perimeter of it and the ears and hair until something jumps out as fucky.
The videos in the article are cherry-picked to be as difficult as possible, so that the point of the article (that you need to watch out for fake videos) comes across more clearly. Good luck adding much complexity to a scene with current tech. Give it 5-10 years before you see anything truly compelling from more than a straight-on shot of a face.
It's really not. The real footage all has a jitter and morphing artifact in it, leading people to assume it's fake. They either need to stop doctoring the "real" ones or get better camera equipment, because the experiment is moot when it's that misleading. It doesn't matter whether it's deliberate or not; it sullies the results.
The BLATANTLY fake video of the Assange interview.
By "easy as possible" I meant to make it as easy as possible to perform the faking. But yeah, it's a good scenario to do this so that it's obvious for people that they need to watch out.
I think that's likely a harder problem, though. If it can spot telltale artifacting from a fake, it could probably do a pretty good job of classifying until the fakes improve significantly.
I actually just started working on a project using methods similar to the ones in the article. We're tasked with generating images of malignant cells, because actual images often cost money to obtain and there are privacy issues as well. So what you have is actually two networks: a generator that generates the images, and a discriminator that tells you whether an image is real or fake. You pit them against each other, forming two feedback loops, so they improve based on each other's outputs. I think that's tricky, because you already have a CNN (the discriminator) inside this "generator network" whose sole purpose is to tell the generator how to avoid being labeled as a fake image. Plus, it may be hard for a CNN to discern the differences between real and fake videos, since they're so extremely similar mathematically.
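Roughly, the training loop looks like this (a minimal PyTorch sketch of the generator/discriminator setup; the layer sizes and names are just illustrative, not what our project actually uses):

```python
# Minimal GAN sketch: a generator and a discriminator pitted against each
# other, each improving against the other's output, as described above.
import torch
import torch.nn as nn

LATENT = 64     # size of the noise vector fed to the generator (assumed)
IMG = 28 * 28   # flattened image size, e.g. a small grayscale cell image

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),          # fake image scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                       # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to label real images 1 and generated images 0.
    fake = generator(torch.randn(batch, LATENT)).detach()  # no generator grads here
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into labeling fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# e.g. train_step(torch.randn(32, IMG)) with batches of normalized real images
```

Those two opposing losses are the two feedback loops: the discriminator's loss pushes it to catch fakes, and the generator's loss pushes it to produce images the discriminator can't catch.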
I got all of them except the last one. There's this weird shifting going on in his hair in the one that's supposed to be real that I don't see in the "fake" one, and most people thought the "real" one was fake too, wtf.
I got every single one right (including the fact that Obama was fake at the start, though there wasn't an option for that, so I went with Trump). That doesn't matter, though; the article is right. Devoting that much effort to second-guessing whether you're watching a real person isn't viable every single time you see a video. Give the technology a little more time and I'm sure it will be indistinguishable from the real thing, which is terrifying.
As an AI researcher, I can tell you that neural networks have an uncanny ability to spot each other's work. It's going to take some significant advancements before AI-generated images stop being recognizable as such under scrutiny, but it's definitely a possible future, just not now.
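In practice, "spotting each other's work" mostly just means training a classifier on real vs. generated frames. A minimal sketch of what that looks like (PyTorch assumed; the architecture and sizes here are illustrative, not any specific published detector):

```python
# Sketch of a fake-image detector: a small CNN trained as a binary
# classifier to separate real frames from generated ones.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),   # real/fake logit for a 64x64 RGB input
)
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

def train_step(frames, labels):   # labels: 1.0 = real, 0.0 = generated
    loss = loss_fn(detector(frames), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# e.g. train_step(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8, 1)).float())
```

A detector like this tends to key on the subtle generator artifacts people in this thread are describing (hair, face perimeter, ears), which is also why it stops working once the fakes improve.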