Why AI will probably kill us all
    40 replies
[QUOTE=TornadoAP;51925169]So it's just be another human then?[/QUOTE] [img]https://lemenudecatherine.files.wordpress.com/2015/01/nnyssnd.jpg[/img]
See, the problem with all these doomsday predictions of AI is that none of them ever bring up ways to begin determining intent, containment, etc. It's all a bunch of speculation without any solutions (see: zoo protocol for containment, Turing tests and the like for intelligence, etc.)
[QUOTE=Drury;51927040]Simple - by definition, strong AI figures things out. That's literally the whole idea. In fact anything that doesn't figure things out isn't strong AI and is kinda irrelevant to the discussion.[/QUOTE] but even a strong AI isn't equivalent to a god. it can't just say "die" and people suddenly die. it needs access to something that could actually harm us; once it finds that something, it has to figure out how to use it; and once it figures that out, it has to consider whether its weapon/plan would harm itself too, which it wouldn't want for a variety of reasons (finishing its goal, fear of being shut off, etc.). taking those three "rules" into account, i can't really think of anything it could use to actually wipe out all life. worst case scenario, some people die before the AI is shut down/destroyed. the most realistic scenario, that i can think of at least, is that the AI just lets people live and does its own thing (like the ending of [I]her[/I], basically), or, less likely, gets stuck plotting to kill all humans indefinitely
every time i talk to people about AI they insist "OH NO IT WILL FIND A WAY TO CIRCUMVENT/TRICK US EASILY MANIPULATED HUMANS AND GET ON THE INTERNET AND KILL US ALL" even if we create one on a closed system not connected to the internet. have these people never heard of a fucking [url=https://en.wikipedia.org/wiki/Air_gap_(networking)]air gap[/url], or any kind of failsafe? if you don't want people to put it on a usb drive, DON'T PUT A FUCKIN USB PORT ON IT THEN. and then they're like "oh they'll transmit info through vibrations in the air, we can't stop them" and it's just bullshit upon bullshit upon really esoteric forms of data transfer that could never move anywhere near the amount of data a sapient artificial intelligence would need. the point is, i think some people are willfully ignorant of how computing/networking/engineering actually works just so they can push their doomsaying, and it's so irritating how little faith they have in humanity and technology
I think the idea that an AI as smart as a human will be able to make a superhuman AI and take over the world is flawed. Humans are as smart as humans, but so far no human has managed to create a superhuman, nor to take over the world. Besides, intelligent behavior is more than just pure brainpower; it takes knowledge and experience, and you don't get experience of life by absorbing random bullshit from the internet.
[QUOTE=Nikita;51930442]I think the idea that an AI as smart as a human will be able to make a superhuman AI and take over the world is flawed. Humans are as smart as humans, but so far no human has managed to create a superhuman, nor to take over the world. Besides, intelligent behavior is more than just pure brainpower; it takes knowledge and experience, and you don't get experience of life by absorbing random bullshit from the internet.[/QUOTE] I imagine it would be a lot easier for a human-level AI to modify/further develop itself than it is for a human to develop itself.
What a dumb video. This video will probably be the reason the robots start the war against their creators.
I never understood the argument that an AI would kill everyone because it has no emotion. If we're building a sentient AI then we probably have the means to program emotion into it too.
[QUOTE=TheKnife;51930670]I imagine it would be a lot easier for a human-level AI to modify/further develop itself than it is for a human to develop itself[/QUOTE] You would be wrong, see: [URL="https://en.wikipedia.org/wiki/Moravec's_paradox"]Moravec's paradox[/URL]. [editline]Edited: [/editline] Tl;dr version is that biology lets us develop ourselves far more rapidly than AIs would ever be able to. While it's easy to build AIs that perform adult-level reasoning tasks, it's nearly (if not actually) impossible to give them the sensory and motor capabilities of even a one-year-old. AIs are designed to perform specific tasks, not to develop organically over time.
I fail to see why anyone would program a fear of death/pain response into a neural net but ok [editline]8th March 2017[/editline] The reason people get bored isn't because it's a condition of all thinking things, it's a self-preservation act driven by biological evolution (life exists to reproduce). So is pain, so is emotion, etc. An AI doesn't reproduce unless it's actively programmed in, and even then, it wouldn't resort to violence unless someone programmed it in. It probably wouldn't even care if it ceased to reproduce. The only time an ~ai uprising~ could occur is if someone programmed in the use of violence, and that violence was enough to injure/kill humans, and the desire to build more violent machines was programmed in, and also self-preservation over humans - in which case it really wasn't much of a surprise, was it?
[QUOTE=Disseminate;51933990]I fail to see why anyone would program a fear of death/pain response into a neural net but ok [editline]8th March 2017[/editline] The reason people get bored isn't because it's a condition of all thinking things, it's a self-preservation act driven by biological evolution (life exists to reproduce). So is pain, so is emotion, etc. An AI doesn't reproduce unless it's actively programmed in, and even then, it wouldn't resort to violence unless someone programmed it in. It probably wouldn't even care if it ceased to reproduce. The only time an ~ai uprising~ could occur is if someone programmed in the use of violence, and that violence was enough to injure/kill humans, and the desire to build more violent machines was programmed in, and also self-preservation over humans - in which case it really wasn't much of a surprise, was it?[/QUOTE] The problem is that people confuse genetic algorithms with human genetics. They sound similar but are completely different. AIs are always programmed with a specific purpose in mind; there is always some task an AI is built to complete at the end of the day. Artificial evolution of a genetic algorithm improves the algorithm so that its task can be completed more quickly and efficiently. The mutated offspring produced by this evolution are, on average, superior versions, but they aren't going to suddenly develop the capability to perform a completely separate task. For example, you could have the ultimate poker-playing AI, developed over thousands of generations of genetic mutation. That AI isn't going to suddenly want to start playing checkers. That isn't how AI works, at least not yet and not for the foreseeable future. This is called agent-based artificial intelligence and these tasks are called "micro-worlds." The AI everyone seems to think will destroy the world is straight out of pop fiction.
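To make the point concrete, here's a toy sketch of what a genetic algorithm actually is (all names and parameters here are made up for illustration, not from any real AI system). Notice that the fitness function is fixed up front: selection, crossover, and mutation only ever make the population better at that one baked-in task, they never invent a new one.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=100, mutation_rate=0.05):
    """Toy genetic algorithm: evolves bitstrings toward one fixed fitness function."""
    random.seed(0)  # deterministic for the example
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]            # random mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# The "task" is baked in: maximize the number of 1s in the string (OneMax).
best = evolve(fitness=sum)
```

However many generations you run, the result is only ever a better answer to the question you hard-coded; the poker bot never wakes up wanting checkers.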
The computing power it would take to develop an AI that could intelligently decide to switch between multiple micro-worlds like that (i.e. from chess to Go to checkers to poker, and so on ad infinitum) is astronomically high. It simply isn't feasible, so there's no push to develop such a thing, because we don't and won't have the technology to do so.