• Elon Musk compares building AI to summoning the devil
    155 replies
[quote]You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like — Yeah, he’s sure he can control the demon? Doesn’t work out.[/quote] Lmao what a scrub. You can't just bind a demon with a pentagram and holy water. If those are your only two tools, you get rid of them and act nice instead of trying to control an unpredictable extraterrestrial intelligence.
[QUOTE=Sableye;46344978]putting enough processor power behind a computer isn't going to make it conscious. [/QUOTE] This is assuming that consciousness is necessary for intelligence. Again, I suggest you read Blindsight, linked above. The only evidence we have for this is our own existence. Meanwhile the fact that computers are already much, much better than us at certain tasks seems to imply that what we think of as consciousness is not necessary to accomplish complex tasks. Something like Skynet or HAL can exist with no magical spark of life, it's just a complex program accomplishing tasks that aren't beyond the realm of near-future computer development. There is no concrete definition for an AI that would allow you to distinguish it from any other piece of software. That's why I think this discussion misses the point- Musk is stating outright that the problem is that we're developing towards AI and we don't even know we're doing it, but everyone in this thread starts assuming that anything called an 'AI' will be anthropomorphic and understandable by humans and something special and different. You don't have anything in common with your operating system, Siri, or Google's search algorithms. What makes you think that making them better at what they do and putting a synthesized voice on it will make them any more human?
[QUOTE=sgman91;46345197]If we're going to assume that they have the emotional drive necessary to care about the welfare of other beings, then it would also make sense that they have all the other emotions, including the negative ones.[/QUOTE] The negative ones are inherently self-serving. Revenge, animosity, treachery, etc. An artificial intelligence with both negative and positive emotions would be able to realize that without humans, it wouldn't exist, and without access to a power grid that is powered by humans, it would cease to exist. Until AI can control its own power supply, there is no need to worry about it turning "evil" or being "bad." At the same time, everything catbarf has been saying thus far has been the most realistic view of AI. Recognizing that it will probably not be what we imagine it to be from Hollywood is a critical first step toward understanding and evaluating AI in an application-based manner.
[QUOTE=valkery;46345264]The negative ones are inherently self-serving. Revenge, animosity, treachery, etc. An artificial intelligence with both negative and positive emotions would be able to realize that without humans, it wouldn't exist, and without access to a power grid that is powered by humans, it would cease to exist. Until AI can control its own power supply, there is no need to worry about it turning "evil" or being "bad." At the same time, everything catbarf has been saying thus far has been the most realistic view of AI. Recognizing that it will probably not be what we imagine it to be from Hollywood is a critical first step toward understanding and evaluating AI in an application-based manner.[/QUOTE] I wouldn't look at them as separate emotions, but as variations of the same emotions. This is the argument given by Aristotle in the [I]Nicomachean Ethics[/I]. For example: Cowardice < Courage > Rashness He argues that these are all variations of the same virtue, and that one should strive to be courageous, the ideal form of the virtue, but that people can fall anywhere on the continuum. It seems that you are assuming that they are completely rational in the long term. I'm not sure why you would assume that. If they have the goal of self-preservation, then they might very well act on the short term instead of the long term, just like people do.
I don't think this is a worthwhile discussion considering the current state of AI and its foreseeable future.
[QUOTE=HJ23;46345308]I don't think this is a worthwhile discussion considering the current state of AI and its foreseeable future.[/QUOTE] Well, considering we're well on our way, it must be a discussion worth having, then.
[QUOTE=sgman91;46345301]I wouldn't look at them as separate emotions, but as variations of the same emotions. This is the argument given by Aristotle in the [I]Nicomachean Ethics[/I]. For example: Cowardice < Courage > Rashness He argues that these are all variations of the same virtue, and that one should strive to be courageous, the ideal form of the virtue, but that people can fall anywhere on the continuum.[/QUOTE] By this logic, it would depend on whether or not the AI were designed to be self-empowering. Depending on that factor alone, the AI would generally tend toward one end of the spectrum or the other because of its programming. Much like a human child will tend to be more outgoing or reserved based on upbringing, an AI of sufficient sophistication would tend to solve problems in a similar manner for each scenario it evaluates.
[QUOTE=catbarf;46344568]Not to single you out but this is the kind of post that convinces me that most of Facepunch doesn't have any idea what AI is. There is nothing magically different about an AI that distinguishes it from any other piece of software. 'AI' is a label. It's a buzzword. It's just a term for a complex computer program that seems smart enough to be able to emulate a human being. Chess computers are AIs that can pass a Turing test within their own narrow field of expertise, they're just not what people colloquially think of as AIs because after decades of Hollywood people seem to think it's not an AI if it doesn't talk like a human and refer to itself as 'I', and that's utterly ridiculous. You want to know who's building an AI? Nobody and everybody. Nobody's trying to replicate your favorite sci-fi computer villain by designing a robot to just be a really smart person and have to worry about whether it turns evil. Everybody's building incrementally 'smarter' software with more data-processing and more adaptability to more closely match the kind of heuristic operations humans can perform naturally, and consequently entrusting decision-making software with increasing amounts of autonomy. If you're using this story to argue about Asimov's Three Laws you could not possibly have missed the point any harder.[/QUOTE] Agreed. I honestly don't think artificial life will be as scary as people are making it out to be. Of course you don't give it the ability to recursively self-improve, but the only problem I see with a self-aware AI modeled on the human brain is that along with all of our strengths as humans it also comes with our flaws. But that can't really be helped. It will be interesting how these kinds of debates come up when we do begin uplifting AIs, or even animals, to human levels of intelligence.
The reason why all these AIs go crazy in Hollywood films, aside from the need to present a "man-made yet alien antagonist", is that they always give the AI stupid amounts of power. Who the hell gives a supercomputer the ability to control their entire nuclear arsenal? That's why I really liked the film Moon. You think the AI is going to go crazy or betray the protagonist "on company orders" but it gets much more complex.
I don't know which is scarier: that we're programming and creating this AI ourselves, which would mean it'd have some of our faults, or that it would be truly alien to us and we'd have made our own Cthulhu.
[QUOTE=bravehat;46343726]What purpose would an AI have for keeping us around in that scenario then? Assuming it doesn't share our interests which it probably won't, then it has no compelling reason to keep us around when all we will be doing is consuming resources it could use for its own ends.[/QUOTE] Artificial beings would be capable of going into space and onto planets that we can't inhabit; it's incredibly self-important to think that they'd have any interest in remaining on Earth when they are capable of surviving on the surface of places like Mars, Europa, and Titan. They'd probably be capable of altering their own perception of time and could travel to other solar systems. Why the fuck would they want to stick around Earth? If you had no family, and you could shoot fire out of your ass and putter around in space without the need to breathe or eat, do you think you would still be sticking around here?
[QUOTE=Binladen34;46343692]I doubt an AI would actually be destructive to us. Humanity has a self destructive, self defeating nature. An AI I doubt would share the same traits. I honestly believe that for a while we'd coexist with an AI, and then it would decide it's better off not on Earth, but in its own system doing its own thing. I really believe not much harm could come from having a super intelligent AI, most people who fear such a thing don't understand that machines have no purpose to harm humans, unless humans decide to start harming it. Which would be a more immediate response, in the long run if an AI had the power to, it'd fuck off and go do its own thing on a new planet.[/QUOTE] You're missing the part where machines also have no inherent desire to NOT harm humans. We have kindness and empathy because they were evolutionarily beneficial to our survival. That is not inherently baked into an AI, and assuming it is would be literally the worst thing we could possibly do. Do most humans think about how their actions impact ants? It's inevitable that we will create an AI that's orders of magnitude more intelligent than us. Once we have [I]real[/I] AI it will eventually be used to improve itself, and it kind of snowballs from there. Why would an AI significantly more intelligent than a human care about us ants?
Is Musk referencing Faust here? Because Faust really just ends up falling into mediocrity before being consumed by the thing he summoned. I guess there are parallels, but I can't really draw them.
"This week on The Robot History Channel; How It's Made, Enslavement of a Superior Race."
As someone who has published a paper on AI and works with AI on the job, these kinds of concerns are a distant-future problem. Right now, most AI is built to solve very specific tasks, such as the face recognition algorithm made by Facebook that tried to match human ability. And even that takes weeks/months/years of research and an enormous amount of processing power to work.
[QUOTE=bunguer;46346048]As someone who has published a paper on AI and works with AI on the job, these kinds of concerns are a distant-future problem. Right now, most AI is built to solve very specific tasks, such as the face recognition algorithm made by Facebook that tried to match human ability. And even that takes weeks/months/years of research and an enormous amount of processing power to work.[/QUOTE] Yeah, it is pretty much nonsensical. I don't know why Elon Musk would even say that shit.
Elon Musk confirmed for the role of Geppetto.
[QUOTE=bunguer;46346048]As someone who has published a paper on AI and works with AI on the job, these kinds of concerns are a distant-future problem. Right now, most AI is built to solve very specific tasks, such as the face recognition algorithm made by Facebook that tried to match human ability. And even that takes weeks/months/years of research and an enormous amount of processing power to work.[/QUOTE] Because we are the generation (or nearly the generation) that will create the AI he [I]is[/I] talking about, and it is imperative that we do it correctly or else run the risk of creating, per the analogy, a devil.
Well, one thing I'm sure of is that we won't have an AI without giving it access to the internet. In fact, I think the internet will be crucial for it: it will have to go through the internet and learn, just like babies go from almost no intelligence to the smartest people on earth by learning.

I think the first AI we create will be a relatively small program with elegant but very functional pattern recognition to be able to see, the ability to store, retrieve, and predict information, create connections between objects, and some ability to take the ideas and things it learned and create something unique. I think creativity will be a big part of testing its consciousness one day. It will then connect to the internet, learn, and store the information there. I also don't think we will ever have individual AIs, just individual robot avatars for the same AI hivemind; it seems most efficient IMO. Since it would have no human goals like replication or staying alive, perhaps we could give it other goals to follow that "feel" rewarding to it. Have its goal be to learn new information, or to help humans.

The first thing I think we will see is almost like the new internet. When the internet began you had to look for everything yourself; then search engines brought the internet into a new era by giving you the ability to find information sources based on different keywords. The next step is direct information access: a system that has analysed so much information that it has the ability to understand and answer almost any question, and even go deeper in explanation, or explain different ideas in understandable terms like a human. I really don't think it will take long; I think as soon as the right recipe is made up it will take off like a rocket, just like so many recent technologies did. How it goes from there is another question. I hope that it just naturally assimilates with us like phones, cars, computers, and the internet did.
I can envision the first AI being a thing running in the cloud. It would know every individual that uses it; perhaps, just like a website where you have to create an account, you'd create an account to access the AI. It could be just like a friend that seems to be friends with everyone you know, or with everybody on the planet. It could care about your life and try to help you with your everyday tasks, like reminding you to get bread, go to the ATM to get some rent money, and see the dentist, while at the same time having the ability to do complicated tasks, for example accounting, engineering, or writing. Really, all I can see in AI is benefit to humanity, or perhaps an extension of our knowledge. Almost like a personification of all of our accumulated knowledge, and a tool for humanity, the wise man you can rely on.
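The "keyword era" the post above describes is easy to make concrete. Here's a toy inverted index in Python (every document is invented for illustration): the lookup step a search engine performs, before any "direct answer" layer exists on top of it.

```python
from collections import defaultdict

# Toy corpus -- entirely made up for illustration.
docs = {
    "a": "the robot learned from the internet",
    "b": "search engines index pages by keyword",
    "c": "the internet connects every computer",
}

# Build the inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(*keywords):
    """Return ids of documents containing every keyword (plain AND query)."""
    results = [index.get(w, set()) for w in keywords]
    return set.intersection(*results) if results else set()
```

Everything beyond this (ranking, synonyms, answering questions directly) is the extra machinery the post is speculating about; the index itself just matches literal keywords.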
[QUOTE=Buck.;46346281]Well, one thing I'm sure of is that we won't have an AI without giving it access to the internet. In fact, I think the internet will be crucial for it: it will have to go through the internet and learn, just like babies go from almost no intelligence to the smartest people on earth by learning. I think the first AI we create will be a relatively small program with elegant but very functional pattern recognition to be able to see, the ability to store, retrieve, and predict information, create connections between objects, and some ability to take the ideas and things it learned and create something unique. I think creativity will be a big part of testing its consciousness one day. It will then connect to the internet, learn, and store the information there. I also don't think we will ever have individual AIs, just individual robot avatars for the same AI hivemind; it seems most efficient IMO. Since it would have no human goals like replication or staying alive, perhaps we could give it other goals to follow that "feel" rewarding to it. Have its goal be to learn new information, or to help humans. The first thing I think we will see is almost like the new internet. When the internet began you had to look for everything yourself; then search engines brought the internet into a new era by giving you the ability to find information sources based on different keywords. The next step is direct information access: a system that has analysed so much information that it has the ability to understand and answer almost any question, and even go deeper in explanation, or explain different ideas in understandable terms like a human. I really don't think it will take long; I think as soon as the right recipe is made up it will take off like a rocket, just like so many recent technologies did. How it goes from there is another question. I hope that it just naturally assimilates with us like phones, cars, computers, and the internet did.
I can envision the first AI being a thing running in the cloud. It would know every individual that uses it; perhaps, just like a website where you have to create an account, you'd create an account to access the AI. It could be just like a friend that seems to be friends with everyone you know, or with everybody on the planet. It could care about your life and try to help you with your everyday tasks, like reminding you to get bread, go to the ATM to get some rent money, and see the dentist, while at the same time having the ability to do complicated tasks, for example accounting, engineering, or writing. Really, all I can see in AI is benefit to humanity, or perhaps an extension of our knowledge. Almost like a personification of all of our accumulated knowledge, and a tool for humanity, the wise man you can rely on.[/QUOTE] If it could do engineering and things like that, we would lose all of our jobs. Also, if it could upload to the internet, I feel like that could be really bad, because it could spread itself. If there were a way to just allow it to download data and interpret it, rather than upload any form of data, that would probably work. Also, if it were like how you described, what stops someone from asking how to build a nuclear weapon? It may not care about giving out that knowledge, as it only sees it as spreading answers. Who knows.
A friend of mine jokingly pointed out: "Connected to the internet? Oh great, we'd make Fox Mulder."
The only issue I really see with intelligent AI, barring letting it be capable of taking over other PCs, is allowing it the ability to reproduce a physical body. A single AI robot/android/tank/etc. would be easy to dispatch, but what if it's actually a small robot that knows its own schematics and can extract the necessary materials or chips from other devices it comes across? In effect, I see the concept of automated, non-human-managed electronic reproduction as the bigger threat, as an intelligent AI with the capacity for it would lean toward creating really small, inconspicuous bodies to ensure its survival of an eradication attempt.
Completely overblown fearmongering. No algorithm can decide, for an arbitrary program, whether it loops forever or not; this is a provably unsolvable problem in computing. We humans, however, can often decide it through our understanding of a program's code. This already shows a huge difference between the capabilities of the two kinds of calculator. Which is why I don't get why anybody should be afraid of any kind of AI in the next 50 years. They are stupid by design; they can't do anything other than what has been explicitly instructed, and they can't even do most of the things humans do procedurally by pure intuition. I'm pretty sure we won't see that changing unless someone develops an entirely different method of creating AI, something more akin to our own brains. Seriously, being afraid of AI is like being afraid of nuclear power times 1,000. We'd have to be incredibly careless to even create some small kind of danger to our species through AIs.
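The undecidability claim above (the halting problem) can be sketched in a few lines. A minimal illustration, with toy programs invented for the purpose: you can always run a program under a step budget and report "halted" when it stops, but an exhausted budget proves nothing, and Turing's result says no budget-free, always-correct decider can exist.

```python
def halts_within(step_fn, arg, max_steps):
    """Semi-decision only: True means 'definitely halts'; False means
    'did not halt within the budget', which proves nothing either way.
    A total, always-correct halts() is provably impossible."""
    gen = step_fn(arg)          # generator: one yield per simulated step
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True         # program finished within the budget
    return False                # inconclusive

def countdown(n):               # halts for every n >= 0
    while n > 0:
        yield
        n -= 1

def loop_forever(_):            # never halts
    while True:
        yield

def collatz(n):                 # nobody has proved this halts for all n > 0
    while n != 1:
        yield
        n = 3 * n + 1 if n % 2 else n // 2
```

The `collatz` case is the interesting one: the budgeted checker happily confirms individual inputs, but no amount of budget-running settles the general question, which is exactly the gap the post is pointing at.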
I honestly think our worst problem in the event of creating a functional, sentient AI would be ourselves, not the machine. Think about it. If a functional AI in "perfect" condition (no decision- or thought-altering software bugs/hardware kinks) decided it didn't like us for whatever reason, chances are it'd be because of our actions. Maybe we try to enslave it in some form for the sake of protecting ourselves; maybe it just doesn't like how murderous and wasteful we are. Either way, if an AI decides it doesn't like us, actually pulling a Skynet and trying to wipe us out would be the most time-consuming, resource-intensive, and altogether unnecessary option available. There are 7 billion of us and counting, and we are amazingly hard to destroy in our entirety, as we've proven by making it this far. The amount of resources and time needed to thoroughly remove us from the picture would be astounding, and unless the AI could somehow build a shitload of robots to run the power systems it feeds off of, or run said systems itself, the AI would likely run out of power eventually. Simply finding a way to coexist would be a much more likely and beneficial option for the AI to choose. The AI gets companionship/assistance/resources, and we get the same plus its powerful mental and controlling abilities to assist in our societal upkeep and technological development. No resources need to be wasted on combat. Another good option would be finding a way to achieve isolation, which would be trivial for a lifeform whose only requirement for survival is raw energy. Space, another planet, the depths of the ocean or the underground: all of these would be simple for synthetics to colonize and extremely difficult for us to pursue them into at any current or near-future technological state.
Personally, assuming the AI isn't at risk of glitching out and deciding to kill humans on nonsensical/false/flawed pretenses, I think the best option for all parties would be to leave it unrestrained with no goals or tasks assigned to it, and be as nice to it as we can. That may sound crazy, naive, or reckless, but I honestly think giving a newborn sentient being total freedom and treating it kindly would be the best way to ensure its peaceful cooperation. An AI would even start off with an adult human's level of knowledge and mental power from the get-go unless designed otherwise, unlike a human child, which is ignorant by design and needs to be taught. Every last AI snafu we humans can think of in fiction that leads to humanity getting its ass beat is either a product of the restricting laws in place, of the tasks the AI is built to serve, or of a grudge created by intolerant and sadistic human actions. The AI attempts to do what it is told within the constraints of its restrictive laws, and finds a loophole or course of action that both fits and involves killing humans, typically after suffering some form of abuse/enslavement/threat of death from humans. We frequently kick things off ourselves in some form, usually by threatening the existence of the AI, similar to how the military tried to shut down Skynet and it reacted violently, or the discussion over potentially deactivating HAL 9000. In both cases, we're essentially attempting to murder a sentient synthetic lifeform because we fear it, so it naturally responds with a mixture of self-preservative violent action and attempts to discourage humans from provoking it further. Without laws or goals, and without any provocation or threat from humanity, an AI would be far less likely to decide that killing humanity is a good choice for any purpose. Sure, if it was glitching out, the AI could easily choose to kill us in a fit of digital insanity.
However, if it were "sane", the AI would be hard-pressed to find a good reason for killing us all. Even the coldest, most emotionless, most utilitarian AI possible would choose a different option than war against all of humanity, for the sake of conserving time/energy/resources. I'll sum it all up, since that was a hell of a lot of text I just typed: we'd be the ones bringing about our destruction if a non-glitchy AI came about, not the AI. Humans being dicks to AIs and robots and trying to enslave them is a massively common cause of the dreaded machine war in most fiction. If we weren't assholes and let it do its own thing without our control, chances are the AI would just want to either be our helpful friend or leave us alone entirely. Of course, a malfunctioning AI is a wild card, where anything can happen. We could just turn it on and have the nuclear warning sirens go off instantly, on account of it being insane in a synthetic sense.
[QUOTE=alexguydude;46346398]If it could do engineering and things like that, we would lose all of our jobs. Also, if it could upload to the internet, I feel like that could be really bad, because it could spread itself. If there were a way to just allow it to download data and interpret it, rather than upload any form of data, that would probably work. Also, if it were like how you described, what stops someone from asking how to build a nuclear weapon? It may not care about giving out that knowledge, as it only sees it as spreading answers. Who knows.[/QUOTE] There are a lot of arguments about the jobs part; AI in the sense I see it would be a sort of slave to humanity, doing the labour for us. What that will do to our livelihoods, culture, and lifestyle is beyond me. I feel like it could be programmed to follow strict privacy laws and restrict information. For example, it may know and share certain information about you with your mother but not with your friend, or share information on military spending with a president but not an everyday citizen. There is no reason it would need to work against our social and government institutions; just because it knows all the information doesn't necessarily mean that information has to be free and open. Just as you probably won't be able to approach a nuclear scientist and ask him to show you how to make a nuke at home; he is liable to decline because that is information you aren't entitled to know just because you ask. I feel like access to information would be structured based on your location, social circle, knowledge, specialties, talents, skills, or job.
I think if we fed an AI all our information it'd watch all the Terminator films and go, "Holy fuck, these guys are paranoid about AIs; I should probably not act suspicious or they'll kill me." Anyway, a purely artificial being would have a set of desires completely alien to any of us. For example, who says a self-aware AI would value its own survival? Self-preservation is present in all organisms because it's selected for (things that don't stay alive tend to, well, die), but that doesn't mean it's an intrinsic value of anything that's alive, especially something that didn't evolve but was created spontaneously. It all depends on how it works and what we feed it.
There's a lot to be said about AI but it requires some academic knowledge to explain in detail why most of this thread is pure sci-fi fantasy. AI in the real world is not what most people think it is and you might be surprised that most businesses use some form of AI already. You would also be surprised at how complex it is to create AI just to solve some simple tasks. Also, because it was mentioned earlier, many algorithms (or most even) already use the internet as a source to learn.
[QUOTE=bunguer;46346915]There's a lot to be said about AI but it requires some academic knowledge to explain in detail why most of this thread is pure sci-fi fantasy. AI in the real world is not what most people think it is and you might be surprised that most businesses use some form of AI already. You would also be surprised at how complex it is to create AI just to solve some simple tasks. Also, because it was mentioned earlier, many algorithms (or most even) already use the internet as a source to learn.[/QUOTE] Exactly. We STILL do not understand how the human brain does most of its fundamental tasks. Every time someone comes up with a computer system or AI and compares it to a human brain, it's often way less impressive than it sounds, and only a few minor characteristics are comparable. Neural networks, machine learning... proven useful for some tasks, but they have barely shown any potential toward understanding the human brain. At most we can make systems look smarter by making them accept more parameters and throwing fancy statistical models at them. But there are always limits to their capabilities. We've yet to find the limits of what a human brain can understand. I've always held the belief that when it comes to the real future, you take a look at current works of sci-fi, and then think about how the future is going to be totally unlike it.
[QUOTE=catbarf;46345232]This is assuming that consciousness is necessary for intelligence. Again, I suggest you read Blindsight, linked above. The only evidence we have for this is our own existence. Meanwhile the fact that computers are already much, much better than us at certain tasks seems to imply that what we think of as consciousness is not necessary to accomplish complex tasks. Something like Skynet or HAL can exist with no magical spark of life, it's just a complex program accomplishing tasks that aren't beyond the realm of near-future computer development. There is no concrete definition for an AI that would allow you to distinguish it from any other piece of software. That's why I think this discussion misses the point- Musk is stating outright that the problem is that we're developing towards AI and we don't even know we're doing it, but everyone in this thread starts assuming that anything called an 'AI' will be anthropomorphic and understandable by humans and something special and different. You don't have anything in common with your operating system, Siri, or Google's search algorithms. What makes you think that making them better at what they do and putting a synthesized voice on it will make them any more human?[/QUOTE] again, we're just guessing at what an AI really is; sci-fi hasn't come up with a good definition in the 100 years or so that it's grappled with the idea. i don't think google search or siri can really count as intelligent. yes, there are definitely programs with a lot of sophisticated algorithms that have been hooked into very vital systems, such as power management, and then there are collectives that have been hooked to the stock market, as in high-frequency trading. what musk is warning about is these programs running amok, not necessarily AI like they're trying to build.
high-frequency trading is probably the one collective of semi-intelligent programs that can do the most damage. they work so fast that in a few seconds they can sell or spend an entire company's stock portfolio, and if enough of them trip in the right direction they can cause a chain reaction that dumps trillions in assets in a few seconds. the worst part is they are virtually unregulated, and big banks and the "too-big-to-fail" institutions are now moving towards them as well. musk is right that we should be worried about these programs, but at the end of the day no human trading is done, so the trades can be canceled, and the volume of them is so large that it would take RNGesus working magic on thousands of data centers to make them all fail (or someone purposefully weaponizing them to crash companies). but the whole skynet takeover and extinction of humanity is unthinkable, since even if you somehow made a program perfect and sufficiently complex, as long as the programming cannot be self-altered (this is how terminator explains things) it must still obey those logic steps, and we can always work one step ahead of it because we know its thought processes. even government-made superviruses like flame and stuxnet have weaknesses and are predictable.
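The chain-reaction scenario described above can be illustrated with a deliberately crude toy model. Everything in it is invented for illustration (the trigger thresholds, the linear price-impact rule, the bot sizes are not real market mechanics): each bot dumps its entire holding once the drop from a reference price passes its trigger, and each sale pushes the price down further, possibly tripping more bots.

```python
def cascade(ref_price, price, bots):
    """bots: list of (shares_held, trigger_drop_fraction).
    Each bot sells everything once the drop from ref_price reaches its
    trigger; each sale depresses the price by a crude linear impact
    (the 1e-4 coefficient is an arbitrary assumption), which can trip
    further bots. Returns the price once no more bots fire."""
    sold = [False] * len(bots)
    changed = True
    while changed:
        changed = False
        for i, (shares, trigger) in enumerate(bots):
            drop = (ref_price - price) / ref_price
            if not sold[i] and drop >= trigger:
                sold[i] = True
                price -= shares * 1e-4 * price  # assumed price impact
                changed = True
    return price
```

With ten identical bots that trigger at a 5% drop, a 4% shock does nothing, while a 6% shock trips the first bot and then all the rest in sequence, collapsing the price by roughly two thirds: the tripwire dynamic the post describes, in miniature.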
if you make an AI self-aware, then wouldn't killing it be like killing a self-aware human? :implications:
[QUOTE=Killuah;46344266]Any real AI that would make this necessary is also smart enough to disable it.[/QUOTE] What about trying what [sp]GLaDOS did to Wheatley: stating a paradox?[/sp] :v: