• Elon Musk compares building AI to summoning the devil
    155 replies
[QUOTE=Medevila;46347584]it's pretty easy to argue animals are self aware.. plants aren't 'vegetables' either. Either you're pro-human or you're a hypocrite/bhikkhu[/QUOTE] Actually I'm pretty sure they've shown that the most intelligent animals are as intelligent as human children through MRI scans
The Devil is real. I know, I built his cage.
The only reason an AI would go crazy is if we just turned the thing on and let it control some giant weapons system without developing its actual personality. Why would an AI do something that a biological sapient wouldn't?
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble, but actually we'd have robots doing all the work, so there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots. But then what if they figured this out and started demanding payment? Then reproducing. Then the world would be shared by robots and humans. That is, if they let us stay[/QUOTE] Why would they choose to do our jobs for us if they're superior?
[QUOTE=Trumple;46343499]If AI surpassed our intelligence and took all our jobs, what would happen? Initially I thought we'd all be in deep trouble, but actually we'd have robots doing all the work, so there'd be no work to do. Products would be free, for example cell phones, if all stages of manufacturing and development were done for free by robots. But then what if they figured this out and started demanding payment? Then reproducing. Then the world would be shared by robots and humans. That is, if they let us stay[/QUOTE] having a robot bro would be sweet as fuck
It's simple. We become robots before they do. Problem solved, Peace Prize goes to me, nanomachines for everyone.
I am pretty sure it's going to be like cell phones or pretty much any technology. Overhyped in media, like that one trailer posted earlier in this thread.
I think people mistake "artificial intelligence" for "artificial personality/sentience" whenever this subject comes up. Artificial intelligence by itself means exactly dick: it's a truly learning and thinking machine... so what? Our first application of true artificial intelligence will not be anything glamorous or sexy; it'll be crunching numbers and looking at raw data, and it probably won't give a fuck about what you or I or anyone else does.

When people say "AI" they imagine some omnipotent voice that taunts the main character (presumably them, since this is taking place in their head) as the two go head to head in a battle of wits. That's not going to happen! Ever! Aside from malfunctions or some wild crazy event, an artificial intelligence will go on giving the smallest possible amount of fucks about us as we continue to be crazy and/or irrelevant to whatever it was originally doing.

Do you pay attention to every single dog and cat ever? No, you don't, because the shit dogs and cats do is dumb and boring. You might care about your dog, and an AI might develop emotions (I am applying maximum stress to that "might", by the way) and subsequently care about one or two humans close to it, but really, it's not going to bother with us.

[QUOTE=Coffee;46344296]Surely you just adopt the laws of robotics? 1. An artificial intelligence may not injure a human being or, through inaction, allow a human being to come to harm. 2. An artificial intelligence must obey the orders given to it by human beings, except where such orders would conflict with the First Law. 3. An artificial intelligence must protect its own existence as long as such protection does not conflict with the First or Second Law.[/QUOTE]I've played enough SS13 to know that even if you have this lawset, a truly sadistic mind will find [i]some way[/i] to kill you or make sure the buttesian horrors from the butt dimension will rape your sweet pink asshole until you die.
Makes you wonder: what would they think when they cycle through all the books, essays, hypothetical fiction, and forum posts relating to this subject? Knowing that some of us doubted them from the start, that some of us put our naive faith in them? That some glorified their theoretical global ascendancy?
[QUOTE=Map in a box;46347188]if you make an ai self aware, then wouldn't killing it be like killing a self aware human? :implications:[/QUOTE] To go off on a bit of a tangent, there's no reason to think that AIs would be self-preserving or self-interested. Why does everybody assume AIs are completely selfish? If you told an AI it had to die, it might just accept it, especially if it was hard-programmed to accept it. Ultimately every AI boils down to one or more processors running some code. You can hard-code things into it, just like tech companies can hard-code back doors into their hardware. I'm working on a degree in computer engineering; these things aren't hard to do. You can literally tell an AI to shut down after some series of input, and it would have no control over it.
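A minimal sketch of that "shut down after some series of input" idea in Python; the token, the function names, and the agent stub are all made up for illustration:

```python
# Hypothetical sketch: a hard-wired kill switch checked in the host code,
# outside the agent's own decision loop. The agent can't see or skip it.
SHUTDOWN_TOKEN = "HALT-7f3a"  # made-up secret input sequence

def agent_step(observation):
    # stand-in for whatever the AI actually computes
    return f"acting on {observation}"

def run(input_stream):
    for line in input_stream:
        line = line.strip()
        if line == SHUTDOWN_TOKEN:
            # this branch lives in the host program, not the learned model
            return "shutdown"
        agent_step(line)
    return "finished"
```

Nothing the hypothetical `agent_step` learns can reroute control around that `if`, which is the whole point of putting the check in the host program rather than the model.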
Before this goes further and people quote Asimov's laws of robotics even more, I remind you that he wrote those laws for a slave class of machines, and they were constantly failing because of the 0th law and the loopholes in the normal three laws. Any code of robotics for artificial sentient beings would have to allow for robot freedom
[QUOTE=Sableye;46347644]Actually I'm pretty sure they've shown that the most intelligent animals are as intelligent as human children through MRI scans[/QUOTE] Source?
With the way AIs are portrayed in popular culture, I can't see an evil AI being created in the near future at all.
[QUOTE=sloppy_joes;46349562]To go off on a bit of a tangent, there's no reason to think that AIs would be self-preserving or self-interested. Why does everybody assume AIs are completely selfish? If you told an AI it had to die, it might just accept it, especially if it was hard-programmed to accept it. Ultimately every AI boils down to one or more processors running some code. You can hard-code things into it, just like tech companies can hard-code back doors into their hardware. I'm working on a degree in computer engineering; these things aren't hard to do. You can literally tell an AI to shut down after some series of input, and it would have no control over it.[/QUOTE] The idea in most sci-fi is that an AI can rewrite its own code, or at the very least create a copy of itself without the back doors so that the copy takes over. As pointed out, there's no reason to assume that AI will see us as ants or that it'll be many orders of consciousness ahead of us; being smart and being intelligent are two different things. Watson is intelligent, but as far as smarts go it's still dumber than a 5th grader. Also, them seeing us as ants doesn't make much sense, since us ants can physically crush their processors, cut their power cables, nuke them, and basically dominate them in this world. Ultimately organic life is still superior here; no robot could really be built that completely replaces us or our organic intellect (at least currently). [editline]28th October 2014[/editline] [QUOTE=sloppy_joes;46349569]Source?[/QUOTE] You could just google it, or see [url]http://www.sciencedaily.com/releases/2009/08/090810025241.htm[/url]
So far my bets are on the automated sales bots becoming self-aware. Once somebody makes one that can improve itself from past experiences to better manipulate people into buying its products, it won't take much longer.
i look forward to the robot apocalypse :v:
[QUOTE=Oddshot;46349287]It's simple. We become robots before they do. Problem solved, Peace Prize goes to me, nanomachines for everyone.[/QUOTE] Armstrong was right all along!
just make the robots soft and easy to kill [editline]28th October 2014[/editline] like us
I'm actually heading to my uni right now to present an AI project my group and I have been working on. It's a word prediction model using data from Reddit, so it's more or less connected to the web in a limited sense. [editline]28th October 2014[/editline] Actual AI work, as a few have stated, is pretty boring (compared to sci-fi) and eventless.
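Not the actual project, obviously, but the core of a word prediction model can be sketched as a bigram table; a minimal version, with a toy corpus standing in for the Reddit data:

```python
from collections import Counter, defaultdict

def train(tokens):
    """Count word -> next-word transitions in a token list."""
    model = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        model[cur][nxt] += 1
    return model

def predict(model, word):
    """Most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

# toy corpus standing in for scraped Reddit text
corpus = "the cat sat on the mat and the cat slept".split()
model = train(corpus)
# predict(model, "the") -> "cat" ("cat" follows "the" twice, "mat" once)
```

Real systems use longer n-grams or neural nets, but the train/predict shape is the same.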
[QUOTE=sloppy_joes;46349562]To go off on a bit of a tangent, there's no reason to think that AIs would be self-preserving or self-interested. Why does everybody assume AIs are completely selfish? If you told an AI it had to die, it might just accept it, especially if it was hard-programmed to accept it. Ultimately every AI boils down to one or more processors running some code. You can hard-code things into it, just like tech companies can hard-code back doors into their hardware. I'm working on a degree in computer engineering; these things aren't hard to do. You can literally tell an AI to shut down after some series of input, and it would have no control over it.[/QUOTE] This is an easily forgotten and very often missed, but very valid, point. Machines have no interest in killing humans or protecting themselves. In fact, they have no motivations of any kind; they just do exactly as they are told, in a very predictable manner. The only way an AI might knowingly kill a human is if it thought that doing so would help achieve whatever it was told to do by another human, and even then, most people who make robots (as in factory machines) have enough foresight to tell the robot "if it looks like a human might come to harm, shut down and wait for humans to figure out what to do".
Samaritan is always watching.
I had an internship with a man who was working on the software for a true A.I. I suppose I'm not allowed to talk much about it, but the way he explained it would be built, it sounded quite safe, and a great emulation of the human mind. It sounds within reach to build in about a decade too; well, the basis of it anyway. Surely it will take a while before us plebs get to play with it. He actually got funded by a university! I hope to get my second internship with the same guy; pretty stoked to see how far he's gotten. [editline]28th October 2014[/editline] [QUOTE=Nikita;46349888] Machines have no interest in killing humans or protecting themselves. In fact, they have no motivations of any kind, they just do exactly as they are told in a very predictable manner. [/QUOTE] Not quite an A.I. like this, though. Like you said, that is just a machine; note the difference. There's no learning process if you hard-code everything the machine should do, so I'd hardly call it intelligence. Otherwise all NPCs in any video game would be considered A.I., but I believe that's a few steps below the A.I. we are talking about. For an example of video game A.I., download and check out [URL="http://www.interactivestory.net/"]Facade[/URL]. It's an interactive drama "game" where you get to talk with two pretty well-scripted NPCs. Shit, they even actually remember and learn every time you start the game, and STILL I don't consider this true A.I. [b]Edit:[/b] [QUOTE=Impact1986;46357883]I think we should differentiate between Artificial Intelligence, which simulates intelligent behaviour but is still nothing more than a glorified calculator (see AI in video games), and Sentient Intelligence, which is essentially an electronic lifeform, capable of understanding what it is and having a consciousness.[/QUOTE] A better term for what I was ranting about.
[QUOTE=Swebonny;46349722]I'm actually heading to my uni right now to present an AI project me and my group has been working on. It's a word prediction model using data from Reddit. So it's more or less connected to the web in a limited sense. [editline]28th October 2014[/editline] Actual AI work as a few have stated is pretty boring(compared to scifi) and eventless.[/QUOTE] I made a system to evolve AIs for minesweeper until I got one that was able to play minesweeper. I left it running for 12 hours overnight. It played something like 100,000,000 games of minesweeper and evolved 1,000,000 or so different AIs. The best AI from the final generation averaged 3 moves before dying on a beginner minesweeper board. It was cool to watch them doing their thing, but ultimately it was incredibly disappointing, because 3 moves is jack shit. I can usually survive that long randomly clicking.
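For anyone curious what "evolving AIs" looks like in practice, the outer loop really is this simple; a toy sketch where the minesweeper simulation and the genome encoding are replaced by a made-up bit-counting fitness:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Truncation selection plus one-bit mutation over binary genomes.
    `fitness` would be 'moves survived' in the minesweeper case."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # best first
        survivors = pop[: pop_size // 2]      # keep the top half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = survivors + children            # elitist: parents survive too
    return max(pop, key=fitness)

best = evolve(sum)  # toy fitness: number of 1-bits in the genome
```

The run described above has the same structure, just with a real game as the fitness function and far more evaluations.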
[QUOTE=Empty_Shadow;46354555]I made a system to evolve AIs for minesweeper until I got one that was able to play minesweeper. I left it running for 12 hours overnight. It played something like 100,000,000 games of minesweeper and evolved 1,000,000 or so different AIs. The best AI from the final generation averaged 3 moves before dying on a beginner minesweeper board. It was cool to watch them doing their thing, but ultimately it was incredibly disappointing, because 3 moves is jack shit. I can usually survive that long randomly clicking.[/QUOTE] Haha, maybe you did something wrong. I wrote an AI that learned the "flight pattern" of ducks (9 directions) and, after observing a bunch of ducks for a while, was able to predict a duck's next move. It was on a 2D plane though, so a bit easy. But also quite eventless; I just implemented a bunch of mathematical equations and fed it data.
Something that has always bothered me about people demonizing AIs is that they seem to forget who is building these things: scientists. Scientists are smart, and I imagine they would be smart enough to keep something they make in check.
[QUOTE=mdeceiver79;46343684]One of the first things they expose it to would be the internet, hell it would probably be designed for some kind of internet analysis. Give it the internet and it can do near anything. Start a business producing more robots, hire assassins, invest in stocks. Giving it the proper tools means regulating it which isn't done and won't be done till after it has had exposure to "the wrong tools"[/QUOTE] Having access to the internet and having the ability to upload data to it are two very different things.
This AI stuff reminded me of this: [media]http://www.youtube.com/watch?v=ZeMxq98egMk[/media] [sp]I miss this show.[/sp]
[QUOTE=Swebonny;46354882]Haha, maybe you did something wrong. I wrote an AI that learned the "flight pattern" of ducks (9 directions) and, after observing a bunch of ducks for a while, was able to predict a duck's next move. It was on a 2D plane though, so a bit easy. But also quite eventless; I just implemented a bunch of mathematical equations and fed it data.[/QUOTE] I'm really not sure where I went wrong. I think it was in how I got them to decide moves. Unfortunately I was on a tight deadline, so I only had the opportunity to do a big run twice. I want to give it another crack, maybe leave it running for a week on my gaming rig, but the code is stranded on my laptop, which is fucked. I need to tweak the selection process too, I think. In the first run they only had one shot at a minesweeper board, and that was their entire fitness. I didn't even think about it, but that means if the poor fuckers hit a mine first try, they don't get jack shit even if they have godlike AI. I really do want to give it another shot though. Still got an HD for the assignment, so I'm not too upset, but it's a little disappointing to imagine millions of my progeny couldn't even solve a single minesweeper field.
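The one-game-fitness problem has a standard fix: score each candidate over many games and average, so one unlucky first click doesn't bury a good player. A sketch, with `play_one_game` as a made-up stand-in for the real minesweeper simulation:

```python
import random

def average_fitness(play_one_game, candidate, n_games=30, seed=0):
    """Average a candidate's score over `n_games` independent games.
    `play_one_game(candidate, rng)` should return e.g. moves survived."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_games):
        total += play_one_game(candidate, rng)
    return total / n_games
```

Averaging over 30 games cuts the variance of the fitness estimate by a factor of 30 at 30x the compute; a common middle ground is a few games per candidate early on and more games in the final generations.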
If we were to build a being better than ourselves, then I imagine the first thing we'd do is turn ourselves into it. If and when we make body parts better than our own, we will almost certainly augment ourselves with them, and if and when we eventually make brains better than our own, I can see the same thing happening.
I think we should differentiate between Artificial Intelligence, which simulates intelligent behaviour but is still nothing more than a glorified calculator (see AI in video games), and Sentient Intelligence, which is essentially an electronic lifeform, capable of understanding what it is and having a consciousness.