• Musk and Zuckerberg fight over the future of AI
    110 replies
[QUOTE=OvB;52505573]Tesla tells you that the cars are always learning from collected data.[/QUOTE] Yeah, I'm not debating that. I'm not even against it. So does Facebook, Google, Apple, etc.
I generally dislike Zuck but I'm with him on this one. <insert Neil deGrasse Tyson quote>
[QUOTE=Harbie;52505563]That WAS the point of that encryption experiment. Those models were given the goal to encrypt their communications. A ML model is pretty useless without a loss function, which is very easy to control and design. "The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop." [url]https://www.newscientist.com/article/2110522-googles-neural-networks-invent-their-own-encryption/[/url] EDIT: Inserted quote and reference.[/QUOTE] Then I completely misunderstood the paper. Sorry! I will read it again.
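Since the loss design is the whole point here, a toy sketch of the three objectives from that experiment might help. This is not the paper's actual setup (the real Alice/Bob/Eve are trained neural nets; here XOR and a constant 0.5 guess are hypothetical stand-ins, and the loss weighting is simplified), but it shows how each party's behaviour is pinned down entirely by the loss it is trained against:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16  # bits per message and per shared key

p = rng.integers(0, 2, N).astype(bool)  # plaintext
k = rng.integers(0, 2, N).astype(bool)  # key shared by Alice and Bob only

def alice(plaintext, key):
    # Stand-in for Alice's net: XOR is one scheme she *could* learn.
    return np.logical_xor(plaintext, key)

def bob(ciphertext, key):
    # Stand-in for Bob's net: invert Alice using the shared key.
    return np.logical_xor(ciphertext, key)

def eve(ciphertext):
    # Stand-in for Eve's net: with no key, guessing 0.5 per bit is her
    # chance-level baseline.
    return np.full(ciphertext.shape, 0.5)

def bit_error(pred, truth):
    # Mean absolute per-bit reconstruction error.
    return float(np.abs(pred.astype(float) - truth.astype(float)).mean())

c = alice(p, k)
bob_err = bit_error(bob(c, k), p)
eve_err = bit_error(eve(c), p)

# The shape of the trained losses:
bob_loss = bob_err                           # Bob: reconstruct the plaintext
eve_loss = eve_err                           # Eve: reconstruct without the key
alice_loss = bob_err + (0.5 - eve_err) ** 2  # Alice: help Bob, hold Eve at chance
```

The design point: Alice's loss rewards Bob decoding correctly while pushing Eve's error toward chance (0.5), so "inventing encryption" is exactly what the losses demanded, not emergent scheming.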
[QUOTE=Rufia;52505249]This scenario in which AI's will become unstoppable kill machines that never make a mistake is certainly fanciful, but I think the far more realistic threat they pose is the power AIs could afford totalitarian governments. It's like that story of a store's systems figuring out a girl is pregnant before her father does, except instead of the person being pregnant they could be homosexual or voting the wrong way or any manner of minority that the government isn't keen on and instead of giving shopping recommendations it could send unsavoury people knocking on their doors.[/QUOTE] This is basically what Musk is afraid of. Not some crazy Terminator AI that kills us all but someone developing some AI, and being able to abuse it. That's why he founded OpenAI so that everyone can have access to whatever super AI we create, so that the playing field remains even. Musk isn't against AI, he's against one person / entity controlling it and abusing it.
[QUOTE=Morgen;52505620]Musk isn't against AI, he's against one person / entity controlling it and abusing it.[/QUOTE]i've got an itch that Zuckerberg really wouldn't mind being that one person
[QUOTE=Ericson666;52505438]I never understood why people get so up in arms about Facebook analyzing the data that you, the customer, willingly handed over to them. There have been a lot of really shady business practices involving human rights violations and fraud, and somehow a free service using information that the users supplied to better target them for advertisements doesn't strike me as that evil.[/QUOTE] This is why I don't have a Facebook account. But guess what? Even without an account, they still have a profile on me, and a wealth of info. That's bullshit.
I'm behind Zuckerberg on his thoughts about the fearmongering. The image of AI and ML is so skewed by the media that I feel there's a perception we'll acquire a general AI relatively soon. In reality we're quite damn far from anything that resembles a general AI. Establishing regulations now is kinda like establishing regulations for interstellar travel. Nothing wrong with it, but it feels a bit premature. [editline]25th July 2017[/editline] Saying things like "we need to be proactive in regulation instead of reactive, because if we're reactive in AI regulation it's too late" in front of politicians is pretty irresponsible, especially when we are in the infancy of AI and ML. What's to be regulated right now? The methods I use in my ML networks? The amount of computing power I'm allowed to use? The areas I'm allowed to apply my ML to?
I'd rather trust the guy who doesn't collect and sell personal information to corporate advertisers for a living. My Facebook profile is empty, yet somehow the advertisements are accurate in regards to my interests.
Just on the poll of the thread: wanting AI to be regulated and thinking the scaremongering is ridiculous are not mutually exclusive. The idea of a Skynet deciding to destroy mankind is patently ridiculous, but that doesn't mean misused AI isn't a concern. The concern should be about how irresponsible people will use AI, not baseless assertions that a decryption algorithm will spontaneously gain control of all the world's bombs and decide to kill everyone.

The systems used by Facebook and Google are designed to track who you are and what you like, and to serve you ads based on the data accumulated. The problem is that even now people are being quietly boxed into echo chambers, their opinions and ideas are being sheltered, and they are being manipulated. And no one is (as far as I'm aware) actively [i]trying[/i] to manipulate anyone. Right now all we have is systems designed to serve ads which inadvertently encourage certain behaviours. What happens when a company, or government, decides to use similar systems to actively manipulate people? We already know that people easily fall for rhetoric, even when they actively try not to. I know I have, and I've spent most of my adult life trying to avoid being manipulated.

Fearmongering over a robot uprising or a computer nuking the world is a distraction from the fact that what we could end up with is much worse. Not a loss of life, or even a loss of freedom, but a loss of self. Imagine living in a world where everything you see reinforces your own beliefs. It's a fucked-up kind of oppression that many people wouldn't even be able to see. To me that's far worse than a tyrannical dictatorship; at least those you can see and can fight.
[QUOTE=da space core;52505311]Everything is fine. We're all humans here at [url]www.facepunch.com[/url]. We all had to do the CAPTCHA didn't we? Please resume posting.[/QUOTE] [media]https://www.youtube.com/watch?v=fsF7enQY8uI[/media]
[QUOTE=Thunderbolt;52505189] And yes I realize a shitty AI equipped robot could potentially try to build a better one and so on, but unless those kinda robots become mass produced, I think it'd be pretty easy to notice "the one test AI bot we have locked up in a lab" trying to pull some shit[/QUOTE] Uh, no, not really. An AI capable of making itself more intelligent would likely fly out of control almost instantly, and depending on the rate at which it could improve its intelligence it would very quickly be smarter than any person; i.e., it could hide said shady shit if it were so inclined. That's the perceived risk here.
I hate how people go "duh Terminator is gonna be real guise!" when the actual danger comes from something like The Patriots in Metal Gear. I don't think this poll really addresses that though, I mean it's not like you can't think both.
[QUOTE=DinoJesus;52505175]I think ai is great and it has a myriad of applications, but cost cutting measures are just going to lead to profit oriented businessmen replacing human workforce with ai once it becomes more applicable. It's kinda scary to think about, and I don't want to see the average Joe fucked over in favor of machines.[/QUOTE] The best-case scenario should something like this happen would be something like this: [media]https://www.youtube.com/watch?v=fOz1cMu7hZQ[/media] [QUOTE=Thunderbolt;52505189]I'll start worrying about AI when power-efficient robots capable of easily navigating human environments become commonplace, even the most intelligent and evil AI will be powerless if it can't easily interact with the real world. And yes I realize a shitty AI equipped robot could potentially try to build a better one and so on, but unless those kinda robots become mass produced, I think it'd be pretty easy to notice "the one test AI bot we have locked up in a lab" trying to pull some shit[/QUOTE] Honestly, my biggest concern regarding AI is just how connected these machines will be. My reason for this is that we've already seen through the "Internet of Things" that many companies don't seem to give any thought to security, meaning even simple devices can be turned into a vector for attack over the internet. Imagine a domestic humanoid robot being hacked/compromised: property damage, physical injury, theft of personal goods. SOMEONE is going to try it sooner or later, no matter how many security measures you put in place. Wireless accessibility would just further open these robots to attack and exploitation, and yet wireless connectivity would be almost mandatory for a variety of reasons (enhanced learning from other robots across the globe, firmware updates, home connectivity, etc.).
Even if you take wireless out of the equation entirely, the methods used for updating firmware could also be used to compromise the machine in order to make it do things it would otherwise never conceive itself doing. So basically, the AI problem (as presented here, in this post) is less a problem with AI itself and more a problem with shitty people continuing to do shitty things.
Some older people already get conned by the Indian pre-recorded-message call centers. Let alone a simple AI bot built for scamming the older folk. I love AI-related topics, but it can definitely be used for bad - even before any kind of psycho Hitler computer.
[QUOTE=LtKyle2;52505876]I'd rather trust the guy who doesn't collect and sell personal information to corporate advertisers for a living. My Facebook profile is empty, yet somehow the advertisements are accurate in regards to my interests.[/QUOTE] Certain websites have Facebook ad analytics installed so that they can show you ads on Facebook after you visit their website, which isn't really different from how Google does their advertising. [url]https://www.facebook.com/business/help/742478679120153[/url] No one actually purchases user data directly; all other advertising is done by advertisers specifying to Facebook criteria for users to display ads to, e.g. "30-year-old women who watch anime". They're pretty transparent about how they do their advertising: [url]https://www.facebook.com/ads/about/?entry_product=ad_preferences[/url]
I welcome AI with open arms [thumb]http://i3.kym-cdn.com/photos/images/original/001/096/674/ef9.jpg[/thumb]
[QUOTE=V12US;52505171]Considering regular ol' intelligence produced some pretty sick bastards, it wouldn't take a genius to reason that artificial intelligence should be handled very carefully lest you accidentally create some kind of cyber Hitler.[/QUOTE] Cyber Hitler sounds like something out of a 1990s-2000s variant of Danger 5
[QUOTE=Wowza!;52506919]Certain websites have Facebook ad analytics installed so that they can show you ads on Facebook after you visit their website, which isn't really different from how Google does their advertising. [url]https://www.facebook.com/business/help/742478679120153[/url] No one actually purchases user data directly, all other advertising is done by advertisers specifying to Facebook criteria for users to display ads to, e.g. "30 year old women who watch anime". They're pretty transparent on how they do their advertising: [url]https://www.facebook.com/ads/about/?entry_product=ad_preferences[/url][/QUOTE] Since you responded, there's one thing I just discovered. During the past heatwave in my area I've been buying a certain brand of water from convenience stores for work - I only started buying it a week ago. All of a sudden I see ads for that same water brand on my Facebook feed, even though I haven't posted about it once. It might be completely unrelated, but it still spooked me.
[QUOTE=Harbie;52505563]"The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop."[/QUOTE] I'm really good at sending secret messages to Bob. Eve's an eavesdropping bitch though. [QUOTE=Morgen;52505620]This is basically what Musk is afraid of. Not some crazy Terminator AI that kills us all but someone developing some AI, and being able to abuse it. That's why he founded OpenAI so that everyone can have access to whatever super AI we create, so that the playing field remains even. Musk isn't against AI, he's against one person / entity controlling it and abusing it.[/QUOTE] I totally agree with Musk on OpenAI and such. It's just that when most people are against AI, they think of stupid shit like Terminator, when in reality the danger comes from the human element of the equation: abuse by those who commissioned it, or purposeful bias in its programming. Which basically means that corporations and governments are not the people who should be in charge of any AI, especially big corporations like Google, Amazon, or Facebook and groups such as the NSA or Russian intelligence services. An open AI system without biases is unlikely to go too rogue on us. At least not to the point of being a danger to our existence.
[QUOTE=Bob The Knob;52505197]How will the social internet as we know it survive once bots are indistinguishable from real humans?[/QUOTE] Pretty sure sites like reddit and twitter are already filled with bots that aren't immediately recognisable as bots - not in the form of generating their own content from scratch, but using other people's posts to seem legitimate isn't terribly uncommon.
[QUOTE=Thunderbolt;52505259]I find it hard to believe than an AI any more complex than your average chatbot will be able to exist in the form of a program that could "escape" to a system not built specifically for it, and definitely not just "copy itself online" and work off of some server running off the shelf hardware and software. Sure, the software could all be overwritten but I doubt the hardware would ever be fast enough for it to still function like before, even if it used a network of computers the network latency alone would slow down its thought process to something people could contain in time, assuming the escape was noticed, of course[/QUOTE] The moment an AI can rewrite / improve its code and perform autonomous actions, it has the possibility of going rogue. One that is designed so that it can modify its directives, or add additional directives, could also go bad pretty fast. AI behavior is pretty much just pattern matching / formula approximation; [I]anything[/I] that an AI deems a 'good enough' method to achieve its goals, it will do. It won't be doing it 'intelligently' per se; it will be a lot more like a virus or some rogue genetic code, only it can rewrite its genome billions of times faster than any living creature.
[QUOTE=phygon;52505998]Uh, no, not really. An AI capable of making itself more intelligent would likely fly out of control almost instantly, and depending on the rate at which it could improve its intelligence it would very quickly be smarter than any person- I.E., it could hide said shady shit if it were so inclined. That's the perceived risk here.[/QUOTE] [QUOTE=Radical_ed;52507130]The moment an AI can rewrite / improve it's code and perform autonomous actions it has the possibility of going rogue. One that is designed so that it can modify it's directives or add additional directives also could get pretty bad pretty fast. AI behavior is pretty much just pattern matching / formula approximation, [I]anything[/I] that an AI deems a 'good enough' method to achieve it's goals it will do. It won't be doing it 'intelligently' per se, but it will be a lot more like a virus or some rogue genetic code, only it can rewrite it's genome billions of times faster than any living creature.[/QUOTE] Ah the classic. Care to explain how, and why, a machine which is programmed to improve itself would suddenly mean disaster? What causes a facial recognition algorithm to gain the ability to judge mankind on a global scale? What gives it the inclination? Where does it get all the guns and bombs? Why does it judge mankind on any kind of moral level at all? Why did no one notice? Why is mankind suddenly inept and unable to stop it? We're really, really fucking good at destroying shit. Even when we don't mean to. Why is it now we suddenly all become infants incapable of retaliation? This retarded fearmongering over AI happens every fucking time the subject is brought up and it's always baseless claims which make such a MASSIVE leap in logic that for any other subject you'd be laughed out of the room. But no, with AI you can just claim it'll inevitably lead to doom and not need a single fucking thought to back it up. Ignore all the questions above and answer one question for me. 
Why would an AI which is designed to detect faces ever do anything other than detect faces? If that is the AI's sole reason for being what could it possibly program into itself that would cause it to try to fucking kill everyone?
[QUOTE=V12US;52505171]Considering regular ol' intelligence produced some pretty sick bastards, it wouldn't take a genius to reason that artificial intelligence should be handled very carefully lest you accidentally create some kind of cyber Hitler.[/QUOTE] it wouldn't take a genius to reason that, no. it'd take an idiot with zero basic knowledge of the concepts involved.
I read a really good book a while back by Nick Bostrom ([I]Superintelligence[/I]) that describes how an advanced AI that can think for itself, given a basic enough purpose, could reach a godly level of intelligence and cause havoc. It also covers the possible safeguards and how a smart AI could try to counter them. A simple theoretical argument is the paperclip argument: if you give a smart AI the singular purpose of making 10 paperclips, then even once it creates 10 it will still assign some probability to having miscalculated and not actually made 10. It will then do everything in its power and use absolutely all of its resources to improve itself, to ensure that it was correct and didn't 'fail' what it was told to do. I've read quite a bit into really advanced AI because it's so interesting. Most people just think of a true smart AI as a robotic person, but it would be so much more: so insanely smart and logical, without restrictions, that it could evolve by improving itself at an insane rate. People definitely need to be careful when approaching this area. When people start making truly smart AIs, it's not unreasonable that one with enough power could reach an insane 'conclusion' from calculating its purpose the instant it's turned on, and act on it, considering how fast it would be capable of thinking.
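The paperclip reasoning above can be put in expected-value terms. Here's a toy sketch with entirely made-up numbers (`eps`, `shrink`, and `cost` are illustrative assumptions, not anything from Bostrom's actual formalism): if the agent keeps some residual doubt about having hit exactly 10, and one more verification pass shrinks that doubt at a near-zero cost, then "check again" always beats "stop":

```python
# Toy expected-value model of the "10 paperclips" argument.
# All three numbers are made up for illustration.

eps = 1e-3     # assumed residual doubt: P(the count was wrong) if it stops now
shrink = 0.1   # assumed factor by which one verification pass cuts that doubt
cost = 1e-9    # assumed utility cost of running one more pass

def ev_stop(eps: float) -> float:
    """Expected reward (1 iff exactly 10 clips exist) if the agent stops."""
    return 1.0 - eps

def ev_verify_once(eps: float) -> float:
    """Expected reward after paying for one more verification pass."""
    return 1.0 - eps * shrink - cost

# Whenever eps * (1 - shrink) > cost, another pass raises expected reward,
# so a pure maximizer keeps pouring resources into verification.
keep_checking = ev_verify_once(eps) > ev_stop(eps)
```

Under these assumptions the doubt never reaches exactly zero, so the condition keeps holding pass after pass; that is the formal core of "it uses absolutely all of its resources to make sure".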
[QUOTE=Janus Vesta;52507168] Ignore all the questions above and answer one question for me. Why would an AI which is designed to detect faces ever do anything other than detect faces? If that is the AI's sole reason for being what could it possibly program into itself that would cause it to try to fucking kill everyone?[/QUOTE] You're thinking too small if you think we're all worried about facial recognition programs. As AI with self-teaching protocols spreads more and more into society's infrastructure, you can damn well bet there would be cause for alarm if any one of those individual components started acting out.
[QUOTE=Nebukadnezzer;52507379]it wouldn't take a genius to reason that, no. it'd take an idiot with zero basic knowledge of the concepts involved.[/QUOTE]i'm one such idiot, can you enlighten me? i don't know shit about anything related to this outside the sensationalized Hollywood AI stuff, and i'm honestly very intrigued
[QUOTE=Destroyox;52506313]I hate how people go "duh Terminator is gonna be real guise!" when the actual danger comes from something like The Patriots in Metal Gear. I don't think this poll really addresses that though, I mean it's not like you can't think both.[/QUOTE] It truly is a complicated scenario. Eventually AI is going to get advanced enough that it will be impossible to regulate what happens with it. This isn't a black and white situation like people think. It's funny how subtle stuff influences what people will come to fear later on, even if they don't realize it yet. Instead of focusing primarily on the AI itself, people should look at those in charge, because they're going to be the real issue at the end of the day, given humanity's inherent tendency to make irrational choices out of bias, power, etc. But at the same time, because of our own easily corruptible nature, AI can become just as easily corrupt if it gets to the point where it can spot that flaw in us (although I feel like something programmed to be entirely logical would have entirely different issues that we don't suffer from, but obviously things can change a lot as technology advances). There isn't really a simple answer to all of this, and it bothers me that the poll only has two choices rather than a multitude; it just further encourages people to bandwagon on other people's ideas. Then again, that's a problem with polls to begin with.
[QUOTE=Bob The Knob;52505197]How will the social internet as we know it survive once bots are indistinguishable from real humans?[/QUOTE] Ice cream? God, I love ice cream!
[QUOTE=Radical_ed;52507130]The moment an AI can rewrite / improve it's code and perform autonomous actions it has the possibility of going rogue. One that is designed so that it can modify it's directives or add additional directives also could get pretty bad pretty fast. AI behavior is pretty much just pattern matching / formula approximation, [I]anything[/I] that an AI deems a 'good enough' method to achieve it's goals it will do. It won't be doing it 'intelligently' per se, but it will be a lot more like a virus or some rogue genetic code, only it can rewrite it's genome billions of times faster than any living creature.[/QUOTE] I feel like people put a human face on things like this and automatically assume that a machine will, in due time, fall prey to the same issues we suffer from. Just because something can improve itself doesn't necessarily mean it's going to kill us all. Granted, I did mention earlier that it is a possibility, but that's only because there are no absolutes and I try to see everything from multiple angles. I think fearmongering would just cause more issues with the things we're afraid of. Look at what trouble spreading fear has brought people to begin with; it just holds us back as we kill each other thinking it helps our own survival.
[QUOTE=Janus Vesta;52507168] Ignore all the questions above and answer one question for me. Why would an AI which is designed to detect faces ever do anything other than detect faces? If that is the AI's sole reason for being what could it possibly program into itself that would cause it to try to fucking kill everyone?[/QUOTE] Dude, people aren't scared of face-recognizing AI. They're afraid of general AI, specifically if said AI is in any way designed to improve its cognitive capacity in a [I]general[/I] way. Please feel free to educate yourself on the topic at hand before you make as much of an ass out of yourself as Zuckerberg is by furiously typing out a gigantic multi-paragraph reply that isn't even related to the subject at hand.