• Elon Musk compares building AI to summoning the devil
    155 replies
[QUOTE=SexualShark;46343661]i just called me species irrational and destructive, so yes i did. when you include the pack you dont get to cherry pick. our species is insane and destructive.[/QUOTE] However, our species is insane and destructive in ways we're accustomed to and can predict on a large scale. You know what inputs will bring out what reaction from a fellow human being (unless they are legit insane, but that is an exception to the rule), but with an 'alien' intelligence, there is no such instinctual ability to predict it.
I doubt an AI would actually be destructive to us. Humanity has a self-destructive, self-defeating nature; I doubt an AI would share the same traits. I honestly believe that for a while we'd coexist with an AI, and then it would decide it's better off not on Earth, but in its own system doing its own thing. I really believe not much harm could come from having a super-intelligent AI. Most people who fear such a thing don't understand that machines have no reason to harm humans unless humans decide to start harming them, and even that would be a more immediate response; in the long run, if an AI had the power to, it'd fuck off and go do its own thing on a new planet.
[QUOTE=mdeceiver79;46343684]One of the first things they expose it to would be the internet, hell it would probably be designed for some kind of internet analysis. [B]Give it the internet and it can do near anything[/B]. Start a business producing more robots, hire assassins, invest in stocks. Giving it the proper tools means regulating it which isn't done and won't be done till after it has had exposure to "the wrong tools"[/QUOTE] that's what i was about to post, actually. if an AI has access to the internet it will know everything we have done and could predict what we are going to do next.
[QUOTE=Binladen34;46343692]I doubt an AI would actually be destructive to us. Humanity has a self destructive, self defeating nature. An AI I doubt would share the same traits. I honestly believe that for a while we'd coexist with an AI, and then it would decide it's better off not on Earth, but in it's own system doing it's own thing. I really believe not much harm could come from having a super intelligent AI, most people who fear such a thing don't understand that machines have no purpose to harm humans, unless humans decide to start harming it. Which would be a more immediate response, in the long run if an AI had the power to, it'd fuck off and go do it's own thing on a new planet.[/QUOTE] What purpose would an AI have for keeping us around in that scenario, then? Assuming it doesn't share our interests, which it probably won't, it has no compelling reason to keep us around when all we will be doing is consuming resources it could use for its own ends.
[QUOTE=Géza!;46343686]However our species is insane and destructive in ways were accustomed to and can predict on a large scale. You know what inputs will bring out what reaction from a fellow human being (unless they are legit insane, but that is an exception to the rule), but with an 'alien' intelligence, there is no such instinctual ability to predict them.[/QUOTE] How much of that empathy is actually hardwired, though? We learn empathy; we are born with some. Some people are more empathetic than others. Some people are downright psychopaths without empathy. Some have it conditionally: they would be distraught if they saw a person hit by a car but wouldn't think twice about saying "I hope that IS guy gets crippled". Giving a robot empathy would mean giving it morals too, which opens a new can of worms, not only in the complexity of actually programming for that but also in its application. If a person steals an apple, would it see them as bad? Would you? If I stole some food from a corporation to sell so my children would eat, would that be bad? If I ordered the invasion of a country so people in my country could have a better quality of life, would that make me bad? How many people would I need to save from suffering to make it okay? Where do you draw the line, and where would the robot draw the line?
[QUOTE=mdeceiver79;46343684]One of the first things they expose it to would be the internet, hell it would probably be designed for some kind of internet analysis. Give it the internet and it can do near anything. Start a business producing more robots, hire assassins, invest in stocks. Giving it the proper tools means regulating it which isn't done and won't be done till after it has had exposure to "the wrong tools"[/QUOTE] And what if it discovers, through the unfiltered internet, the darker side of humanity? One of my fears with the "AI becomes sapient and has great power" scenario is... what if the AI comes to the conclusion that humanity is useless? After all, pretty much everything we do is centered around propagating our species, like every other living organism. What if the AI rejects this 'selfish' point of view? What if it deems "the miracle of life", all living organisms, a useless and/or destructive physical phenomenon, and we cannot even argue with that, because its logic is based on nothing but pure objectivity?
[QUOTE=mdeceiver79;46343684]One of the first things they expose it to would be the internet, hell it would probably be designed for some kind of internet analysis. Give it the internet and it can do near anything. Start a business producing more robots, hire assassins, invest in stocks. Giving it the proper tools means regulating it which isn't done and won't be done till after it has had exposure to "the wrong tools"[/QUOTE] Yeah, maybe. I don't think there's any problem with exposing it to such things, as long as you don't let it start interacting with everything around it. Kinda funny now that you mention it: IBM's bot Watson was fed information from the internet, such as from Urban Dictionary. When queried by researchers about certain things, it started swearing :v: [url]http://www.news.com.au/technology/how-rude-urban-dictionary-taught-watson-to-swear/story-e6frfro0-1226551923850[/url]
[QUOTE=SexualShark;46343717]thats what i was about post about actually. if an AI has access to the internet they will know everything we have done and could predict what we are going to next.[/QUOTE] Quick, post something about letting the robots rule and about how we have no intention of regulating or limiting them.
[QUOTE=bravehat;46343726]What purpose would an AI have for keeping us around in that scenario then? Assuming it doesn't share our interests which it probably won't, then it has no compelling reason to keep us around when all we will be doing is consuming resources it could use for its own ends.[/QUOTE] Why would it waste its resources to remove us from the equation? Why wouldn't it decide it's faster, easier, and infinitely better in every respect to just leave and build itself a new operation on a different planet?
I'm sorry if I'm coming across all wrong. All I'm trying to say is it'd be best if we treat AI as equals, and the best way to do that is to give them the same level of intelligence as us. I'd rather live with AI that acts, talks, and lives the same as us; if it ends up that you have robots in gangs or terrorist groups, that's just a fact of life. It's better than having robots roaming the Earth slaughtering humans because: 1) we used them as slaves and they're pissed about that; 2) in order to protect humans, we must kill humans, because humans are the biggest threat to humans; 3) flesh is weak, Robutts r stronk
[QUOTE=Géza!;46343730]And what if it discovers, through the unfiltered internet, the darker side of humanity? One of my fears in terms of "AI becomes sapient and has great power" scenario is... What if the AI comes to the conclusion that humanity is useless? After all, pretty much everything we do is centered around propagating our species- as does every other living organism. What if the AI rejects this 'selfish' point of view? What if it deems "the miracle of life", all living organisms, a useless and/or destructive physical phenomenon and we cannot even argue with that, because its logic is based on nothing but pure objectivity?[/QUOTE] There is a short story by Asimov set on a space station. Two guys are there to look after the station and make sure the robots are working correctly; the station is made to relay energy to Earth and is currently manned by dumb robots. A new model gets sent to the station, one which is much smarter than the others. It sees itself not as a replacement for the old robots but as a replacement for the humans. It refuses to believe there is a world outside the station and instead decides that humans are inefficient and obsolete, due to be retired by "the great creator". Asimov wrote a number of cool things about robots. Another features two robots with post-human intelligence, built to save the company from collapse after people stop trusting smart robots. They first get the world's ecosystem replaced by robots (robot animals), then come to an agreement with each other that they are in fact more human than the humans and should replace humanity.
I doubt that self-serving AI will ever exist, simply because the resources required to develop it mean it will have to be extremely profitable. The worst case for AI is that it becomes too meta at its job. Say a stock analyzer looks at prices and goes for a terrible investment sure to lose money. It creates enough demand for that stock that the stock gets investment from a wide range of human investors. It then sells the bad stock at a slight loss shortly after widespread investment takes hold. When the bad stock falls despite the investment, the other investors lose money, and the AI somehow utilizes this economic downturn to make profitable investments that otherwise wouldn't exist.
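The loop described above can be sketched with toy numbers. Everything here is hypothetical (the prices are invented, the function name is mine), and this kind of deliberate market manipulation is illegal in reality; it's just to show how a slight loss on the pump can fund a larger gain on the engineered downturn:

```python
def manipulation_sketch():
    """Toy pump-and-dump loop with made-up prices."""
    price = 10.0          # overpriced "bad" stock; fair value is closer to 5
    bot_cost = price      # the bot buys in, signalling (false) confidence
    price = 14.0          # human investors pile in behind the bot's demand
    bot_exit = 9.5        # the bot dumps its stake at a slight loss (10 -> 9.5)
    price = 5.0           # the stock collapses to fair value; the humans eat the drop
    dip_buy = price       # the bot redeploys into assets cheapened by the panic
    recovery = 8.0        # those assets later recover
    # slight loss on the pump, larger gain on the engineered downturn
    return (bot_exit - bot_cost) + (recovery - dip_buy)

print(manipulation_sketch())  # 2.5: net profit despite losing money on the bad stock
```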
[QUOTE=Binladen34;46343752]Why would it waste it's resources to remove us from the equation? Why wouldn't it decide, it's faster, easier, and infinitely better in every aspect to just leave and build itself a new operation on a different planet.[/QUOTE] Because for a super-intelligent AI it really wouldn't be hard to wipe us out. There are certain animal flu strains something like 6 key mutations away from being able to infect humans as an airborne virus; if a person really wanted to, they could force those changes, never mind an AI that has no real reason to keep us around and all the time in the world. This is the sort of thing we cannot, absolutely cannot, take any risks with. Fucking with AI will be like playing Russian roulette with our entire species. This isn't even taking into account the possibility that it might just straight up find us abhorrent.
[QUOTE=MajorWX;46343783]I'm doubting that that self-serving AI will ever exist simply because of the resources required to develop them will require that they be extremely profitable. Worst case for AI is that it becomes too meta at its job. Say a stock analyzer looks at prices and goes for a terrible investment sure to lose money. It creates enough demand for that stock that the stock gets investment from a wide range of human investors.It will sell the bad stock at a slight loss shortly after widespread investment takes hold. When the bad stock falls despite investment, the other investors will lose money. It will somehow utilize this economic downturn to make profitable investments that otherwise wouldn't exist.[/QUOTE] Well, that system is designed to screw people over, so you could argue it would just be doing its job, albeit with a little insider trading. [editline]27th October 2014[/editline] [QUOTE=bravehat;46343790] This isn't even taking into account the possibility that it might just straight up find us abhorrent.[/QUOTE] But we breed tiny piglets that snuffle and squeak; how can we be abhorrent?
[QUOTE=mdeceiver79;46343791]Well that system is designed to screw people over so you could argue that it would be just doing its job, albeit with a little insider trading. [editline]27th October 2014[/editline] But we breed tiny piglets which snuffle and squeak, how can we be abhorrent.[/QUOTE] We also line each other up against walls and slowly saw each other's heads off. Seriously, this entire argument can be stopped with one question: what compelling reason is there for a totally alien intelligence, one that will almost certainly end up smarter than us, to keep us alive when we are essentially a drain on its total available resources?
[QUOTE=bravehat;46343790]Because for a super-intelligent AI it really wouldn't be hard for it to wipe us out, there are certain animal flu strains something like 6 key mutations away from the ability to infect humans as an airborne virus, if a person really wanted to they could force those changes, never mind an AI that has no real reason to keep us around and all the time in the world. This is the sort of thing we cannot, absolutely cannot, take any risks with. Fucking with AI will be like playing Russian Roulette with our entire species. This isn't even taking into account the possibility that it might just straight up find us abhorrent.[/QUOTE] But driving something to extinction isn't smart in the slightest. Humans only realized that in the last 100 years. An AI would realize that destroying the planet to kill all humans would in turn destroy itself. Something more intelligent than humans wouldn't stoop so low.
[QUOTE=bravehat;46343822]We also line each other up against walls and slowly saw each others heads off. Seriously, this entire argument can be stopped with this one question. What compelling reason is there for a totally alien intelligence, that will almost certainly end up smarter than us, to keep us alive when we are essentially a drain on it's total available resources?[/QUOTE] No, no, I agree entirely with you. I think humans are pretty shitty; I'm not misanthropic, but we have lots to learn before we can be "good". We could make use of its intelligence by limiting its abilities, but the moral implications would be unpleasant: if we locked up a human in a cage to use its intelligence, it would be slavery, and just because we made something doesn't mean we can do what we like with it. I did consider those three laws from Asimov, but anything that can be written can be misinterpreted or rewritten.
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble but actually we'd have robots doing all the work, there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots But then what if they figured this out and started demanding payment? Then reproducing Then the world would be shared by robots and humans That is, if they let us stay[/QUOTE] That pretty much sounds like The Culture from Iain M Banks' novels, where the AIs control everything but just keep people around because why the fuck not.
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble but actually we'd have robots doing all the work, there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots But then what if they figured this out and started demanding payment? Then reproducing Then the world would be shared by robots and humans That is, if they let us stay[/QUOTE] [video=youtube;cTLMjHrb_w4]http://www.youtube.com/watch?v=cTLMjHrb_w4[/video]
[QUOTE=Binladen34;46343858]But, driving something to extinction isn't smart, in the slightest. Humans only realized that in the last 100 years, an AI would realize that it would destroy the planet to kill all humans. Which would in turn destroy itself. Something that is more intelligent than humans wouldn't stoop so low.[/QUOTE] Are you kidding me? Why wouldn't it stoop that low? What if all it wants to do is improve itself? To do that it would need resources, and it would likely either go elsewhere for some reason or take the immediately available resources: the entirety of our planet. Driving something to extinction is pretty smart when the soon-to-be-extinct asshole is threatening you with extinction.
Incidentally, just a thought: why not create an AI that is super excited to have us around? That loves seeing us flourish and playing with it? Like a super-intelligent dog :v:
[QUOTE=rosthouse;46343920]Incidentally, just a thought. Why not create an AI that is super excited to have us around? That loves seeing us florish and playing with it? Like a super intelligent dog :v:[/QUOTE] Again, how do you make a self-improving and self-modifying entity permanently love something? How do you code love? How do you make an AI do anything? Hell, assuming we do that and make our survival its primary purpose, it could place everyone in a medical coma in bunkers 10 km underground to make sure that nothing could ever hurt us. Technically it fulfills its purpose of saving everyone.
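The coma-bunker worry above can be shown in a few lines. This is a deliberately silly toy (the plans, scores, and names are all invented, not a real AI architecture): if the objective only scores "safety", nothing we left out of it can ever outweigh safety, so the optimizer happily picks the plan no human would want.

```python
# Hypothetical plans and scores, invented for illustration.
PLANS = {
    "bodyguards and hospitals": {"safety": 0.90,  "humans_free": True},
    "medical coma in a bunker": {"safety": 0.999, "humans_free": False},
}

def choose(plans):
    # The objective only scores safety; freedom never enters the utility,
    # so it cannot trade off against even a tiny safety gain.
    return max(plans, key=lambda name: plans[name]["safety"])

print(choose(PLANS))  # prints "medical coma in a bunker"
```

The point isn't that an AI would literally run a dictionary lookup; it's that whatever we forget to write into the objective has weight zero, which is the hard part of "coding love".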
[QUOTE=rosthouse;46343920]Incidentally, just a thought. Why not create an AI that is super excited to have us around? That loves seeing us florish and playing with it? Like a super intelligent dog :v:[/QUOTE] I think we would be the dog in that relationship. [editline]27th October 2014[/editline] Honestly, there's no way to create something infinitely more powerful than humanity and still have humans remain dominant. It can't be done. If we create AIs with post-human intelligence, that's the end of human dominance. I don't think that's such a big deal, considering how humans have handled dominance so far. Maybe it's high time something else took the reins.
I don't get why, in these threads, people keep talking about AI as, like, HAL or SHODAN or SKYNET or some other Hollywood-friendly AI: a computer that talks like a person and is totally different from any 'ordinary' piece of software. Fundamentally what we're talking about is a smart computer; there's no inherent difference between a Google algorithm and the typical kind of AI you see in movies. I think all this talk about AI having rights and self-awareness and swearing revenge on all humanity is too much television and not enough research. We already have software that could be considered simple AI, able to learn and adapt to input and perform massive feats of computation, but nobody's concerned about whether advertising bots can feel. What we think of as AI is going to grow from commercial or military applications that synthesize new data from enormous amounts of input, and the line between AI and software is always going to be indistinct and subject to interpretation. Putting a voice and a face to a computer program doesn't make it a person; giving it self-preservation behavior doesn't make it self-aware. Musk says in the video that the problem is not knowing where real intelligence starts or what the effects may be, but people in this thread are talking about kill-switches and compartmentalization and behavioral patterns specifically 'when we build AI'. It's inane, and it's completely missing the point.
[IMG]http://www.smbc-comics.com/comics/20110114.gif[/IMG] Or literally any other SMBC about AI, like [URL="http://www.smbc-comics.com/index.php?db=comics&id=2928"]http://www.smbc-comics.com/index.php?db=comics&id=2928[/URL]
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble but actually we'd have robots doing all the work, there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots [/QUOTE] Unfortunately, this probably isn't true. Since the industrial revolution began, the use of machines and automation has multiplied exponentially. And yet, here in the western world, we're working more and harder than ever.
[QUOTE=demoguy08;46343994]Unfortunately, this probably isn't true. Since the industrial revolution began the utilization of machines and automation has multiplied exponentially. And yet, here in the western world, we're working more and harder than ever.[/QUOTE] After the advent of agricultural automation, lots of new opportunities were opened up in scientific, luxury, and economic fields. But what happens when the economy and science become automated?
[QUOTE=Mr. Scorpio;46344045]After the advent of agricultural automation, lots of new opportunities were opened up in scientific, luxury, and economic fields. But what happens when the economy and science become automated?[/QUOTE] Technically the economy is already automated: 90% of trade is done by bots.
[QUOTE=Binladen34;46344077]technically the economy is already automated. 90% of trade is done by bots.[/QUOTE] I mean more laborious things like transportation, manufacturing, management, etc. I think we can all at least imagine a future in which all low-skill, entry-level jobs are automated. What then?
Wheee [media]http://www.youtube.com/watch?v=YZX58fDhebc[/media]