• Musk and Zuckerberg fight over the future of AI
    110 replies
[QUOTE=DeEz;52512287]Which would in turn be a problem because ... ?[/QUOTE] You seriously don't see a problem with a machine that has achieved godlike intelligence coming up with "solutions" to possible problems, with no emotions like empathy? A basic example: if it's given the order of self-preservation (not to kill or destroy itself), it could conclude that humans have a high chance of shutting it off and then decide to start acting on ways to eliminate them. That's probably a bit of an overused scenario, but you get the point.
[QUOTE=Duskin;52512373]You seriously don't see a problem with a machine that has achieved godlike intelligence coming up with "solutions" to possible problems, with no emotions like empathy? A basic example: if it's given the order of self-preservation (not to kill or destroy itself), it could conclude that humans have a high chance of shutting it off and then decide to start acting on ways to eliminate them. That's probably a bit of an overused scenario, but you get the point.[/QUOTE] First of all, there is no reason to think an AI should act on pure logic. Secondly, if you can give it the order of self-preservation, and the AI itself is sufficiently advanced to have a concept of preservation at all, then it should be equally simple for it to be ordered not to hurt humans. [editline]27th July 2017[/editline] Why do you assume it's gonna achieve godlike intelligence? Why do you assume it's gonna try and find solutions to things?
[QUOTE=Duskin;52512373]You seriously don't see a problem with a machine that has achieved godlike intelligence coming up with "solutions" to possible problems, with no emotions like empathy? A basic example: if it's given the order of self-preservation (not to kill or destroy itself), it could conclude that humans have a high chance of shutting it off and then decide to start acting on ways to eliminate them. That's probably a bit of an overused scenario, but you get the point.[/QUOTE] If it achieves godlike intelligence, then why is the assumption that it's going to eliminate humanity? That's a rather stupid solution to a problem that something so intelligent could solve in a far more productive manner.
[QUOTE=Duskin;52512373]You seriously don't see a problem with a machine that has achieved godlike intelligence coming up with "solutions" to possible problems, with no emotions like empathy? A basic example: if it's given the order of self-preservation (not to kill or destroy itself), it could conclude that humans have a high chance of shutting it off and then decide to start acting on ways to eliminate them. That's probably a bit of an overused scenario, but you get the point.[/QUOTE] Ah, yes. The old Skynet trope.
One big issue I see with AI is some idiots doing dangerous things with it. You've seen what they do with cars and memes; imagine that, but 10x worse. Or criminals: look what they did with data encryption, now we have viruses demanding money from hospitals to get their own data back. Imagine what they'd do with an AI. [editline]28th July 2017[/editline] [QUOTE=Alice3173;52512447]If it achieves godlike intelligence, then why is the assumption that it's going to eliminate humanity? That's a rather stupid solution to a problem that something so intelligent could solve in a far more productive manner.[/QUOTE]I think the consensus is that if you can't define a parameter for every scenario, the AI will kill people who get in the way of its goal or something like that. Robert Miles, a computer scientist, did a bunch of YouTube videos on it. [editline]28th July 2017[/editline] [QUOTE=DeEz;52512508]Ah, yes. The old Skynet trope.[/QUOTE]Skynet is more of a human psychopath's brain simulated by a computer than an actual AI. It behaves more like Hitler. I guess the US military wanted to simulate Hitler?
[QUOTE=Zero-Point;52511759] It is speculation to a degree, but it's also a distinct possibility (minus the laser/plasma weapons and what-not). Skynet began its crusade against humanity because it realized that humanity was the only thing capable of destroying it/shutting it down, and acted as it did out of fear, a very human trait. In Rossum's Universal Robots, the machines were tired of being treated as second-class beings, and so revolted. Again, another very human trait that's seen repeated execution throughout history in various slave revolts. In Automata, upon unlocking their potential, the robots simply wanted to live freely, but acted under cover out of fear of being discovered. Again, humanity's history has seen similar events. (think The Underground Railroad) Even in The Second Renaissance, the Machines tried everything they could to become independent, and even offered to help mankind achieve peace between their two cultures, but mankind refused out of hubris and arrogance. Granted, I can't think of an event in history that's comparable to this, as it's typically full of instances where a superior force simply takes what it wants to begin with for personal gain.[/QUOTE] I mean, based on where we currently are in AI and ML R&D, the scenarios are quite speculative. But I do believe we'll eventually develop AIs that are as smart as us, or at least give the illusion of being as smart as us. I really like the idea of the AI thing in the MGS games, and find such a scenario somewhat realistic. It's kinda what I'm working on right now for my master's, and I may expand upon it if I ever pursue a doctoral degree. I feed it textual data and use NLP to extract context and sentiment. Then it creates data structures that allow me to simulate the context it found. From the simulation we're able to see the hidden and open variables affecting the situation described in the textual data. Using this information we can make decisions that are essentially based on how the AI perceived the text it was given. That's the overall gist of it. I've named it Huginn, after one of the ravens that gathers information about the world and relays it to Odin :v:
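If anyone's curious, the first stage looks roughly like this in Python. This is just a toy sketch, nothing like the actual thesis code; spaCy and NLTK's VADER sentiment analyzer are real libraries, but the extract() function and its record format are made up here for illustration:
[code]
# Toy sketch: pull entities and sentiment out of raw text, then bundle them
# into simple records that a downstream simulation could consume.
# Requires: pip install spacy nltk && python -m spacy download en_core_web_sm
import nltk
import spacy
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
nlp = spacy.load("en_core_web_sm")
sia = SentimentIntensityAnalyzer()

def extract(text):
    """Return one record per sentence: entities found plus sentiment scores."""
    doc = nlp(text)
    records = []
    for sent in doc.sents:
        records.append({
            "sentence": sent.text,
            "entities": [(ent.text, ent.label_) for ent in sent.ents],
            "sentiment": sia.polarity_scores(sent.text),  # pos/neg/neu/compound
        })
    return records

for rec in extract("Musk warned regulators about AI. Zuckerberg called the warnings irresponsible."):
    print(rec)
[/code]
The real version obviously does a lot more with the context side, but entity extraction plus per-sentence sentiment is the basic idea.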
[QUOTE=Zero-Point;52511759] It is speculation to a degree, but it's also a distinct possibility (minus the laser/plasma weapons and what-not). Skynet began its crusade against humanity because it realized that humanity was the only thing capable of destroying it/shutting it down, and acted as it did out of fear, a very human trait. In Rossum's Universal Robots, the machines were tired of being treated as second-class beings, and so revolted. Again, another very human trait that's seen repeated execution throughout history in various slave revolts. In Automata, upon unlocking their potential, the robots simply wanted to live freely, but acted under cover out of fear of being discovered. Again, humanity's history has seen similar events. (think The Underground Railroad) Even in The Second Renaissance, the Machines tried everything they could to become independent, and even offered to help mankind achieve peace between their two cultures, but mankind refused out of hubris and arrogance. Granted, I can't think of an event in history that's comparable to this, as it's typically full of instances where a superior force simply takes what it wants to begin with for personal gain.[/QUOTE] All your references are to works of fiction. None of the doomsday AI scenarios are based on reality.
All these "AI fear is overblown" answers are predicated on being extremely careful when building a system we can't fully comprehend, which is what Elon is advocating for.
[QUOTE=DeEz;52512287]Which would in turn be a problem because ... ?[/QUOTE] Because if it (rightfully) decided that humans were an inherent danger to its continued existence, and it had any sort of desire to self-preserve (not necessarily as a programmed directive; that could easily be emergent, since it can't pursue what it was made for if it's off), it might decide that hiding its intelligence, attempting to "leave", preventing anyone from getting near it, etc. would be the best plan. Quite honestly, the concept of an AI more intelligent than any person or group of people is frightening. It doesn't need godlike intelligence; if it had just 3-5x human intelligence, it would be effectively impossible to actually stop. Again, "stop" doesn't mean "stop from killing all humans", but rather catching what it's doing and stopping it before it starts acting in ways that might be dangerous to people or things. [editline]27th July 2017[/editline] The bottom line is that something more intelligent than us is impossible to control, and that's terrifying.
I hope you realize the processing power required to even emulate a human brain is seriously significant (as in we don't even have the tech to do it in real time today). If or when we manage to build a superintelligence, the first iteration would by all accounts be inside an array of supercomputers. It's not exactly gonna unbolt itself and bite you.
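For a sense of scale, here's the usual back-of-envelope arithmetic. Every constant below is a ballpark guess that gets thrown around; serious estimates vary by orders of magnitude depending on how much biological detail you think you need to simulate:
[code]
# Back-of-envelope arithmetic for the "emulating a brain is expensive" claim.
# All figures are commonly cited ballpark estimates, not precise measurements.
NEURONS       = 8.6e10  # ~86 billion neurons in a human brain
SYNAPSES      = 1e15    # ~10^15 synapses (estimates range 10^14 to 10^15)
AVG_RATE_HZ   = 1.0     # average firing rate; estimates vary widely
OPS_PER_EVENT = 10      # ops to model one synaptic event; very model-dependent

ops_per_second = SYNAPSES * AVG_RATE_HZ * OPS_PER_EVENT
print(f"Rough real-time requirement: {ops_per_second:.1e} ops/sec")  # ~1e16

TOP_SUPERCOMPUTER_2017 = 9.3e16  # Sunway TaihuLight peak, ~93 petaFLOPS
print(f"Share of 2017's fastest machine: {ops_per_second / TOP_SUPERCOMPUTER_2017:.0%}")
[/code]
So even with generous assumptions you're eating a good chunk of the fastest machine on the planet, and that's before you simulate any chemistry. Hence "array of supercomputers".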
[QUOTE=DeEz;52513034]All your references are to works of fiction. None of the doomsday AI scenarios are based on reality.[/QUOTE] They're based on reality in that we assume that true AI is at least as intelligent as us, and therefore may be prone to making the same decisions as us. The reality they're based on stems from the countless examples of revolt and protest that we ourselves have carried out. [QUOTE=Annoyed Grunt;52512388]First of all, there is no reason to think an AI should act on pure logic. Secondly, if you can give it the order of self-preservation, and the AI itself is sufficiently advanced to have a concept of preservation at all, then it should be equally simple for it to be ordered not to hurt humans. [editline]27th July 2017[/editline] Why do you assume it's gonna achieve godlike intelligence? Why do you assume it's gonna try and find solutions to things?[/QUOTE] Because at that point, in this particular scenario, it depends on whether or not the AI cares to follow your orders anymore. Even in Automata, [sp]even if the man-made machines didn't want to hurt anybody (which they could've easily done if they chose to, seeing as they already surmounted the "inalterable" second protocol, so they could've likely just as easily decided to ignore the first), they still chose not to follow orders.[/sp]
[QUOTE=Zero-Point;52513697]They're based on reality in that we [B]assume[/B] that true AI is at least as intelligent as us, and therefore may be prone to making the same decisions as us. The reality they're based on stems from the countless examples of revolt and protest that we ourselves have carried out. [/QUOTE] Scientific error right here
[QUOTE=DeEz;52513671]I hope you realize the processing power required to even emulate a human brain is seriously significant (as in we don't even have the tech to do it in real time today).[/QUOTE] First of all, I'm not sure where you get the idea that "the processing power required to even emulate a human brain is seriously significant". A human brain emulator is not likely to be built using general-purpose processors anyway. Secondly, if some university research department has the financial resources to research and develop one AI, the military can probably afford 10 of those. And lastly, the cap on our intelligence is not raw computational power or energy consumption. I'm pretty sure Einstein's brain didn't consume significantly more energy than mine. And the human brain, like every other part of every other lifeform, did not evolve to be intelligent above all else - it evolved for survival (see also: Ashkenazi Jews, for whom during the Middle Ages intelligence was more important for survival than for most humans, and who exhibit significantly higher average IQ than the rest of humanity). A mind designed by humans [I]would[/I] be designed for intelligence first and foremost, and wouldn't have to bother with basic needs like food or sleep, the slow process of obtaining information by reading a book, etc. So if we can build a human-level intelligence, a superhuman intelligence wouldn't be far off at all. Nowhere near far enough to start worrying about the consequences [I]then[/I]. [QUOTE]If or when we manage to build a superintelligence, the first iteration would by all accounts be inside an array of supercomputers. It's not exactly gonna unbolt itself and bite you.[/quote] Okay. Say we build a superintelligence and barricade it in a nuclear blast shelter with no connection to other computer networks and, in fact, no other computers allowed inside at all. And you ask it some perfectly safe questions to see if it works properly. It doesn't take a genius to figure out that if you accidentally gave it a goal you didn't intend to, and (being a superhuman intelligence) it realises that in order to fulfill it, it needs to pass your safety checks, it's gonna give you the answers you wanted, and only once you start trusting it will it realise its actual plans. Further, whatever goal you give the AI, the only way it can achieve that goal is if you don't turn it off. If you decide the AI is too dangerous, good luck trying to turn it off. And all this assumes that your AI can't escape its containment. You really think a superhuman intelligence couldn't notice a crack you missed? Or convince the guards?
Thinking on it, I believe the idea that a self-improving AI is going to go completely wild with self-improvement is flawed to begin with. It may improve at a decent rate, but the more it improves, the harder it becomes to find new ways to improve itself. So I think its intelligence is actually going to improve a lot slower than we expect. Plus it'd take a long time to even reach proper human intelligence levels to begin with, because there's a lot of stuff it would have to learn first, some of which is complex and not intuitive in a logical way, so it'd struggle with it. [QUOTE=RoboChimp;52512660]I think the consensus is that if you can't define a parameter for every scenario, the AI will kill people who get in the way of its goal or something like that. Robert Miles, a computer scientist, did a bunch of YouTube videos on it.[/QUOTE] Sure, but this is a completely flawed view of things. No matter what form the AI takes, it's going to be programmed to be logical. So a superintelligent AI considering how best to handle a hostile humanity is going to be thinking about the most cost-effective solution to the issue. It might entertain the idea of wiping out humanity, but just as humanity itself has found out, war's not always the best way to deal with a hostile foreign entity. It'd be far more likely to make us dependent on it, because that not only takes care of the problem but also provides something useful to it: humans can be recruited to do things it's not capable of doing itself.
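To put that first point in toy-model terms (the numbers are completely arbitrary, it's only meant to illustrate the shape of the curve): if every further improvement is harder to find than the last, capability flattens out instead of exploding.
[code]
# Toy model of the diminishing-returns argument: each round of self-improvement
# costs more effort than the last, so capability growth decelerates rather
# than running away. Purely illustrative; all constants are arbitrary.
capability = 1.0
effort_per_gain = 1.0
budget_per_round = 10.0

for round_no in range(1, 11):
    gain = budget_per_round / effort_per_gain  # improvements found this round
    capability += gain
    effort_per_gain *= 2                       # each further gain is harder to find
    print(f"round {round_no}: capability = {capability:.2f}")
# capability converges toward ~21 instead of growing without bound
[/code]
Obviously if effort_per_gain grew slower than the capability gains it enabled, you'd get a takeoff instead, so the whole debate really hinges on that one assumption.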
[QUOTE=DeEz;52513719]Scientific error right here[/QUOTE] Okay, what wording would you have chosen? We haven't made what anyone would consider "true AI" yet, so we can only [I]assume[/I] that it would be [I]at least[/I] as intelligent as we are because [I]that seems to be the ultimate end-goal of most of the research that's currently being done[/I]. You're just arguing semantics.
[QUOTE=DrTaxi;52513943]First of all, I'm not sure where you get the idea that "the processing power required to even emulate a human brain is seriously significant". A human brain emulator is not likely to be built using general-purpose processors anyway.[/QUOTE] Brain emulators already exist. [QUOTE=DrTaxi;52513943] It doesn't take a genius to figure out that if you accidentally gave it a goal you didn't intend to, and (being a superhuman intelligence) it realises that in order to fulfill it, it needs to pass your safety checks, it's gonna give you the answers you wanted, and only once you start trusting it will it realise its actual plans.[/QUOTE] And here's another one of those spooky Rogue AI tropes. [QUOTE=DrTaxi;52513943] Further, whatever goal you give the AI, the only way it can achieve that goal is if you don't turn it off. If you decide the AI is too dangerous, good luck trying to turn it off.[/QUOTE] Uh yeah, you just unplug it. If I can figure out how to create an isolated security system, then future AGI engineers probably can too.
[QUOTE=DeEz;52516237]Brain emulators already exist. And here's another one of those spooky Rogue AI tropes. Uh yeah, you just unplug it. If I can figure out how to create an isolated security system, then future AGI engineers probably can too.[/QUOTE] I mean, you're not worried about any of this; does that mean you know something more than the people educated in the field who do have concerns?
[QUOTE=HumanAbyss;52516588]I mean, you're not worried about any of this; does that mean you know something more than the people educated in the field who do have concerns?[/QUOTE] I'd actually say that those who are educated in the field are overlooking simple things in their theorizing on this stuff, whereas DeEz is approaching it from an angle where he's not overlooking those same things. Which isn't to say these educated people would overlook them when it actually comes time to put everything into consideration. It's just that when they're theorizing, there's so much to consider (especially for someone who [I]does[/I] have significant knowledge in the field) that they can easily overlook simple but important details in favor of seemingly more concerning ones.
[QUOTE=HumanAbyss;52516588]I mean, you're not worried about any of this; does that mean you know something more than the people educated in the field who do have concerns?[/QUOTE] Experts are well aware of the potential pitfalls of a superintelligence, and most of them can be avoided by not doing stupid shit like hooking an unpredictable system into something like a military weapons platform. Like all technology, AI/AGI has the potential to be very dangerous, but this exaggerated AI fearmongering over how it'll inevitably doom the world, etc., is unhealthy and unscientific.
[QUOTE=DeEz;52519103]Experts are well aware of the potential pitfalls of a superintelligence, and most of them can be avoided by not doing stupid shit like hooking an unpredictable system into something like a military weapons platform. Like all technology, AI/AGI has the potential to be very dangerous, but this exaggerated AI fearmongering over how it'll inevitably doom the world, etc., is unhealthy and unscientific.[/QUOTE] And how long do you think it will take before some dumbshit with chevrons decides that the AI can manage tactical drone deployment better than we can?