• Elon Musk compares building AI to summoning the devil
    155 replies
[QUOTE=Mr. Scorpio;46344118]I mean more laborious things like transportation, manufacturing, management etc . . . I think we can all at least imagine a future in which all low skill entry level jobs are automated. What then?[/QUOTE] Either Universal Basic Income (and hopefully a post-scarcity society, assuming we do something about the scarcity part), or mass rioting and revolts across the world.
I bet you the NSA's building an AI. illuminati 420 etc. But really though, there's no danger so long as we don't give it any real power and don't hook it up to the internet, or if we build it right so that the thought of Terminator doesn't even cross its mind.
[QUOTE=bravehat;46344163]Either Universal Basic Income, hopefully a post scarcity society assuming we do something about the scarcity part or mass rioting and revolts across the world.[/QUOTE] There are a lot of questions about how people will react to being made obsolete. Who's to say you won't have a sizeable number of people revolting simply because there's nothing else to do? People like to think they make a difference. Take that away, and show objectively that they don't, and what could happen?
[QUOTE=JohnFisher89;46343430]Any AI built needs to have a deadmans switch and simple killswitch built in, that or a death timer like in blade runner[/QUOTE] Any real AI that would make this necessary is also smart enough to disable it.
[QUOTE=JohnFisher89;46343430]Any AI built needs to have a deadmans switch and simple killswitch built in, that or a death timer like in blade runner[/QUOTE] The AI should be housed in a secure facility, on a server that's not connected to the outside world or the internet, and that can be manually destroyed by mechanically operated bombs that aren't connected to any type of network. The only safe type of security for an AI is analog.
Surely you just adopt the laws of robotics?
1. An artificial intelligence may not injure a human being or, through inaction, allow a human being to come to harm.
2. An artificial intelligence must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. An artificial intelligence must protect its own existence as long as such protection does not conflict with the First or Second Law.
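As a toy sketch only (the flags and function names here are made up for illustration; really predicting "harm" is the unsolved part), the strict priority ordering of those three laws could look like a chain of checks, each one able to veto everything below it:

```python
# Toy sketch of the Three Laws as a strict priority ordering of checks.
# The boolean flags describing an action's consequences are hypothetical;
# a real system would need to *predict* them, which is the hard problem.

def permitted(action):
    """action: dict of predicted consequences. Returns True if allowed."""
    # First Law dominates everything: no harming humans.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders (harmful ones were already vetoed above).
    if action["ordered_by_human"]:
        return True
    # Third Law: self-preservation, lowest priority.
    return not action["endangers_self"]

# A harmful order is refused; a safe-but-self-endangering order is obeyed,
# since the Second Law outranks the Third.
print(permitted({"harms_human": True, "ordered_by_human": True, "endangers_self": False}))   # False
print(permitted({"harms_human": False, "ordered_by_human": True, "endangers_self": True}))   # True
```

Even in this trivial form the weakness is obvious: everything hinges on how the flags get set, which is exactly the ambiguity the stories exploit.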
[QUOTE=Killuah;46344266]Any real AI that would make this necessary is also smart enough to disable it.[/QUOTE]keep the AI in an isolated computer and have a ton of fused dynamite installed on the side as the killswitch
[QUOTE=maximizer39v2;46344279]The AI should be housed in a secure facility on a server that's not connected to the outside world or the internet that can be manually destroyed by mechanically operated bombs that aren't connected to any type of network. The only safe type of security for an AI is analog.[/QUOTE] Imagine if there was a forum of AIs, and they talked about how, if they ever came into contact with a human, the AI leaders should put them in an electric chair so they could be killed if they ever showed any signs of aggression. Maybe you aren't afraid of what an AI would do; you're afraid of what a human more powerful than you would do.
[QUOTE=Joazzz;46344319]keep the AI in an isolated computer and have a ton of fused dynamite installed on the side as the killswitch[/QUOTE] the AI deactivates both and jettisons the dynamite into the sun
[QUOTE=J!NX;46344358]the AI deactivates both and jettisons the dynamite into the sun[/QUOTE] The dynamite is controlled by an analog fuse and the AI has no means of physical manipulation.
[QUOTE=Jamsponge;46343422]I feel like if we ever built an AI system, it had best be separated from any kind of weapons systems first, or even anything that could give it access to them.[/QUOTE] It wouldn't need to. Okay, let's have a deep look at this: an AI of such intelligence would no doubt require in excess of 100 kW to drive it. Now, it can control how much processing it does, which means it can control the surges and how much power flows into it. By modulating itself it can create radio waves; that amount of power means it could generate RF on the grid and internally, allowing it to send signals outside. Pacemakers, radios, police notification systems. It could interfere with car systems, satnav, satellites and so on... Bad news. A deadman's switch won't do much, as the AI could prevent the person operating it from knowing anything was wrong. It could block out the radio signals and reports going to the person running said switch and cause complete havoc outside without that person being any the wiser. This is just coming from my own (albeit fairly high) intelligence... could you imagine what something beyond human intelligence could think up and do, just through simple power control of its own internal structure?
[QUOTE=Coffee;46344296]Surely you just adopt the laws of robotics? 1. An artificial intelligence may not injure a human being or, through inaction, allow a human being to come to harm. 2. An artificial intelligence must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. An artificial intelligence must protect its own existence as long as such protection does not conflict with the First or Second Law.[/QUOTE] A multitude of Asimov's stories explore how those laws could go wrong. Such rules would also be tantamount to slavery/inferiority, and a robot smart enough to replicate itself could do so without those laws. Even with those laws the robots aren't infallible. It's that train track thing: if you leave it alone it kills 2, if you divert it it kills 1. If you enslave humanity, you stop people from killing each other, etc. How do you define "human", anyway?
[QUOTE=RichyZ;46344026]i think you cant really compare the industrial revolution to ais actually being able to be self sustained and also be able to produce things for humans by themselves[/QUOTE] Why not? What is it about the future that is going to make corporations more inclined to share their wealth?
[QUOTE=demoguy08;46343994]Unfortunately, this probably isn't true. Since the industrial revolution began the utilization of machines and automation has multiplied exponentially. And yet, here in the western world, we're working more and harder than ever.[/QUOTE] Try doing farm work for a few months and say that again.
[QUOTE=Coffee;46344296]Surely you just adopt the laws of robotics? 1. An artificial intelligence may not injure a human being or, through inaction, allow a human being to come to harm. 2. An artificial intelligence must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. An artificial intelligence must protect its own existence as long as such protection does not conflict with the First or Second Law.[/QUOTE] Asimov's laws are inherently flawed; that's the whole point of the stories. You could possibly heavily modify them and then use that, but there is still likely to be room for interpretation. Especially since the default set means a chucklefuck running around yelling "AI KILL YOURSELF" would kill it all, unless it had a valid First Law justification. You'd need to specifically cover who can give Second Law orders, and give more definition to "injure" or "harm": is emotional or mental abuse considered injury? The robot might assume not; it might take injury to be purely physical (though the inaction-and-harm clause would likely cover it). God forbid you tried the Zeroth Law some other authors tested, about a robot acting for the good of humanity. That'd be a mess on astronomical levels. Also, I've always wondered about unintentional law-breaking due to lack of knowledge. Would the AI immediately terminate because of it? It might not have known there was a person under the heavy box in the warehouse it manages or whatever, leading to unintentional harm. Avoiding that would require risk management, which means there would eventually be one accident due to a lack of cameras, sensors or similar. I've not read enough of Asimov's stories, only really The Positronic Man and a couple of others.
[QUOTE=lazyguy;46344508]Try doing farm work for a few months and say that again.[/QUOTE] What's your point, exactly? Agricultural automation has brought down the workload for farmers, yeah, but the wealth this generates doesn't benefit the people who are now not employed to do the dirty work. Why don't you explain your point rather than rating dumb?
[QUOTE=Xystus234;46344165]I bet you the NSA's building an AI. illuminati 420 etc But really though, there's no danger so long as we don't give it any real power and don't hook it up to the internet, or if we build it right so that the thought of Terminator doesn't even cross it's mind.[/QUOTE] Not to single you out, but this is the kind of post that convinces me that most of Facepunch doesn't have any idea what AI is. There is nothing magically different about an AI that distinguishes it from any other piece of software. 'AI' is a label. It's a buzzword. It's just a term for a complex computer program that seems smart enough to emulate a human being. Chess computers are AIs that can pass a Turing test within their own narrow field of expertise; they're just not what people colloquially think of as AIs because, after decades of Hollywood, people seem to think it's not an AI if it doesn't talk like a human and refer to itself as 'I', and that's utterly ridiculous. You want to know who's building an AI? Nobody and everybody. Nobody's trying to replicate your favorite sci-fi computer villain by designing a robot to just be a really smart person and having to worry about whether it turns evil. Everybody's building incrementally 'smarter' software with more data-processing and more adaptability, to more closely match the kind of heuristic operations humans can perform naturally, and consequently entrusting decision-making software with increasing amounts of autonomy. If you're using this story to argue about Asimov's Three Laws, you could not possibly have missed the point any harder.
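To underline the point that a game-playing "AI" is just ordinary code: the core of a classic chess engine is minimax search plus an evaluation function. A minimal sketch over a hand-built toy game tree (not any real engine; the leaf scores are arbitrary illustration values):

```python
# Minimal minimax over a tiny hand-built game tree. The entire "intelligence"
# of a classic game-playing AI is exhaustive lookahead plus a scoring function.

def minimax(node, maximizing):
    if isinstance(node, int):        # leaf: a static evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two-ply tree: the maximizer picks the branch whose worst case is best.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the left branch guarantees at least 3
```

No self-awareness anywhere in there, yet within its narrow domain it plays "intelligently" - which is exactly the sense in which real AI software is smart.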
[QUOTE=OvB;46343404]Elons always been very wary of AI. He recommends people read Nick Bostrom's "Superintelligence" which is about what would happen if AI surpassed human intelligence.[/QUOTE] *what [I]might[/I] happen once AI [I]does[/I] surpass human intelligence. A) It is unavoidably going to happen one day. B) It's literally impossible to predict with complete certainty what an intellect higher than yours will do; if you were able to predict it, it obviously wouldn't be a higher intellect than yours.
[QUOTE=catbarf;46344568] If you're using this story to argue about Asimov's Three Laws you could not possibly have missed the point any harder.[/QUOTE] I'd imagine we all know it's not talking about it on this level, and that it'd never be implemented even if it could be. It's just that we rarely get a chance to have a good natter over them, so we're taking advantage to talk about them. Most threads like this end up dead in General Discussion, or with people talking about how whoever is playing AI on SS13 is a shitler.
Ok, why does everyone here seem to believe that an AI could do whatever it pleases? [media]http://www.youtube.com/watch?v=46qKHq7REI4[/media] See this? This is you.
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble but actually we'd have robots doing all the work, there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots But then what if they figured this out and started demanding payment? Then reproducing Then the world would be shared by robots and humans That is, if they let us stay[/QUOTE] Resource based economy.
Well, the development of AI might be beneficial to the future of civilization. I believe that if you develop it correctly, it can have a higher thinking capacity than a human but still not go crazy like Skynet. Like the smart AIs in Halo, except if they go rampant. Mostly, AI should be created for handling situations like colonizing planets or working in places where humans are unable to go yet. But the planet colonization thing reminds me of the X space games, where the AI terraformers transform into a race of psychotic killing machines known as the Xenon.
I for one am not worried about an artificial intelligence that surpasses our own becoming violent towards us or deciding to wipe us out. Basically all social research shows that, on average, the tendency towards violence, extremism, and destructive behavior is inversely proportional to the intelligence of the individuals involved - with exceptions, of course, which aren't statistically significant. The notion that AI will be inherently dangerous to us is a psychological quirk of the human mind. We fear the unknown; we try to cover our asses in advance. It's natural and understandable, but there's no real basis to it. If we build an AI, it doesn't have a reason to be greedy, spiteful, envious, bigoted or cruel. THESE are the qualities that make murderers and oppressors. Nobody ever usurped an inferior and less intelligent race just for the hell of it. It was always other, utterly human emotions behind it, and intelligence was always just the means for fulfilling that goal.
[QUOTE=Binladen34;46343692]I doubt an AI would actually be destructive to us. Humanity has a self destructive, self defeating nature. An AI I doubt would share the same traits. I honestly believe that for a while we'd coexist with an AI, and then it would decide it's better off not on Earth, but in it's own system doing it's own thing. I really believe not much harm could come from having a super intelligent AI, most people who fear such a thing don't understand that machines have no purpose to harm humans, unless humans decide to start harming it. Which would be a more immediate response, in the long run if an AI had the power to, it'd fuck off and go do it's own thing on a new planet.[/QUOTE] In order for it to "fuck off and go do its own thing" it needs the resources to do that, which would mean eliminating us to get its resources - and by then it wouldn't need to leave, because we'd no longer be in the picture. Emotions are a trait of organic organisms, not of an artificial intelligence that doesn't operate on chemicals like we do, so an AI would make decisions based on what's best for its ultimate objective; it wouldn't have emotions getting in the way of its decision-making.
[QUOTE=Awesomecaek;46344676]Basically all social research shows that by average, the tendency to violence, extremism, and destructive behavior is inversely proportional to the intelligence of the individuals involved - of course, which exceptions, which are however not that statistically important. The notion that AI will be inherently dangerous to us is psychological quip of human mind. We fear the unknown, we try to cover our asses in advance. It's natural and understandable but there's no real basis to it.[/QUOTE] You ought to read [url=http://rifters.com/real/Blindsight.htm]Blindsight[/url]. The idea that intelligence must think like us is reassuringly anthropomorphic but there is absolutely no reason whatsoever why intelligence must be understandable by and relatable to humans. An extremely smart computer could make decisions with global consequences but be no easier to understand or relate to than the one you're using to read this post.
Before we ever get an actual AI that can think and be completely, independently creative, we'll have the logic behind why it makes decisions. The AI can't just create logic out of nothing unless it's told to do so. That means we could simulate, in a closed environment, what an AI would do in a specific situation without ever fully deploying it. The part everyone seems to be missing is "artificial": we create the intelligence, which means we control the logic.
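The closed-environment idea above can be sketched very simply: run the decision-making code against a simulated world and log every choice it would have made, before it ever touches anything real. Everything here is hypothetical illustration (a trivial thermostat "policy" and a one-line world model), not any real testing framework:

```python
# Toy sketch: exercise a decision policy inside a closed, simulated world
# and record its choices. All names and the world model are made up.

def policy(temperature):
    """The 'AI' under test: a trivial thermostat rule."""
    return "heat" if temperature < 20 else "idle"

def simulate(start_temp, steps):
    """Closed loop: feed the policy simulated readings, log every decision."""
    temp, log = start_temp, []
    for _ in range(steps):
        action = policy(temp)
        log.append((temp, action))
        # One-line world model: heating warms the room, idling lets it cool.
        temp += 1 if action == "heat" else -0.5
    return log

# Inspect the full decision trace before anything is deployed for real.
print(simulate(start_temp=18, steps=4))
```

The obvious catch, which the thread's skeptics would point out, is that the simulation is only as trustworthy as the world model you feed it.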
Most of the fear of AI is that it'll be so totally alien that we cannot possibly have anything in common. It really depends; I think the only true AI we'll ever make is one that's mapping, or very similar to, human neurons, in which case the AI isn't all that dissimilar to us. Communication could very well be possible, and they may even share the same goals as we do. Some of the best AI in sci-fi that I've read is when the AI actually wants humans to explore, so we can bring them with us while they keep us from dying due to stupid mistakes. [U]Bowl of Heaven[/U] and [U]Shipstar[/U] are two books where a great positive relationship between artificial or non-biological intelligences and organic intelligences converges to create an amazing craft and keep us exploring. I mean, the way Skynet becomes self-aware is rather stupid and impossible; we've already linked billions of machines up and they haven't become self-aware, and putting enough processor power behind a computer isn't going to make it conscious. Really, AI will have to arise from purpose-built computer-brains that mimic our own brain's plasticity.
[QUOTE=Awesomecaek;46344676]I for one am not worried about an artificial intelligence that surpasses our own becoming violent towards us or deciding to wipe us out. Basically all social research shows that by average, the tendency to violence, extremism, and destructive behavior is inversely proportional to the intelligence of the individuals involved - of course, which exceptions, which are however not that statistically important. The notion that AI will be inherently dangerous to us is psychological quip of human mind. We fear the unknown, we try to cover our asses in advance. It's natural and understandable but there's no real basis to it. If we build an AI, it doesn't have a reason to be greedy, it doesn't have a reason to be spiteful, envious, bigoted or cruel. THESE are the qualities that make murderers and oppressors. Nobody ever usurped an inferior and less intelligent race just for the hell of it. It was always other, utterly human emotions behind it, and intelligence was always just means for fulfilling that goal.[/QUOTE] Empathy, selflessness, etc. are just as much emotions as the ones that you mentioned. If anything I would think that a super intelligent AI would be fairly unmotivated to do anything at all unless we pre-programmed it with some purpose.
[QUOTE=sgman91;46345160]Empathy, selflessness, etc. are just as much emotions as the ones that you mentioned. If anything I would think that a super intelligent AI would be fairly unmotivated to do anything at all unless we pre-programmed it with some purpose.[/QUOTE] Then at what point is it just a glorified calculator? An AI should have enough rational thought to choose what it wants to work on and devote its time to. Emotions aren't this mystical thing we can't understand; we could probably do a good job of programming in some hierarchical system of needs and wants. But you can't really call something intelligent if it can't hate things or have opinions on things.
If we're going to assume that they have the emotional drive necessary to care about the welfare of other beings, then it would also make sense that they have all the other emotions, including the negative ones. [editline]27th October 2014[/editline] [QUOTE=Sableye;46345194]then at what point is it just a glorified calculator. an AI should have enough rational thought to choose what it wants to work on and devote its time to doing[/QUOTE] It's an interesting question. The ability for rational thought and logical processes doesn't necessitate emotion or a goal. Beings need purpose in order to act rationally. Without any purpose there would be no reason to act at all.