So is he saying this, or is the computer saying it?
[QUOTE=Buck.;46619870]And the thing is that AI will inevitably be created eventually. All we can hope is that we don't end up creating a superintelligent being capable of fucking us over badly, but rather something that will act as an extension of ourselves.[/QUOTE]
Plot twist, we fuck ourselves over all the time.
[QUOTE=Buck.;46621709]See my rebuttal in bold.
What got your panties so twisted? While Stephen Hawking isn't some sort of infallible human being, I'm pretty sure he is more qualified to speak on such matters than you. I don't even think you're able to fully grasp the concept of what he's talking about...
Whatever, I have no more time to finish tearing you a new asshole just now but I may come back and add to this later.[/QUOTE]
Just gonna point out very quickly that Moore's law isn't some sort of universal rule. Current transistor technology is about to run into physical constraints in a few years, and unless we discover some entirely new technological domain to do another generational transition (the last one was transistors -> integrated circuits), the only way of getting a faster computer will be making it bigger, again.
Of course, that's ignoring quantum computing, but I won't pretend I understand how that works or what the implications of using it are.
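For a rough sense of scale, here's a back-of-envelope sketch (the numbers are my own assumptions: ~14nm features as the current state of the art, silicon's ~0.54nm lattice spacing as a hard floor, a halving every two years):
[code]
# Back-of-envelope: how many feature-size halvings are left before
# transistors hit atomic scale? Illustrative assumptions, not a forecast.
feature_nm = 14.0        # assumed state-of-the-art process node today
atom_nm = 0.54           # silicon lattice constant, a hard-ish floor
years_per_halving = 2.0  # classic Moore's law cadence

halvings = 0
while feature_nm / 2 >= atom_nm:
    feature_nm /= 2
    halvings += 1

print(f"{halvings} halvings left, roughly {halvings * years_per_halving:.0f} years")
# -> 4 halvings left, roughly 8 years
[/code]
Even with generous numbers you only get a handful of halvings before you're counting individual atoms, which is why "a few years" isn't an exaggeration.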
[QUOTE=Dirty_Ape;46621831]Plot twist, we fuck ourselves over all the time.[/QUOTE]
Well sure, but I meant it becoming more of a tool to expand our reach as humans.
Language did that by letting one individual take an idea, then make movements with their mouth, and transmit it to another individual. This made life so much easier and we have never stopped trying to reach more and make our voices louder.
We then figured out how to write shit down, so other people could read it even after the writer's death. Provided someone knew how to read, your idea could live on many years after you were dead. But that was still slow, so we made leap after leap to share information faster. And now we have the internet, which can connect pretty much any two or more people, anywhere in the world (or outside it, no big deal), instantly. We are depositing so much information online that no human could ever interpret it alone.
We've been filtering that information collectively by categorizing the content we create. We categorize the photos, videos, sounds, words, software and so on by subject or some quality of the thing. That's all well and good for us, because we know how to interpret the data to understand it, but it only gets you so far. There's a huge disparity between how much info is out there and how much any one person could possibly process.
The next logical step is a device that will interpret data and condense it for us, how we need it, when we need it.
I imagine the internet as the intellect and the computer(s) as the mere brain of the AI. It doesn't need to have external motives; all it has to do is compress our collective knowledge into a pill that an individual can swallow. The way I put it makes it sound small, but the implications of something like that are endless.
Introducing AI, almost like our best invention of all, the train, except we shovel information and it does the work.
[editline]2nd December 2014[/editline]
[QUOTE=Awesomecaek;46621957]Just gonna point out very quickly that Moore's law isn't some sort of universal rule. Current transistor technology is about to run into physical constraints in a few years, and unless we discover some entirely new technological domain to do another generational transition (the last one was transistors -> integrated circuits), the only way of getting a faster computer will be making it bigger, again.
Of course, that's ignoring quantum computing, but I won't pretend I understand how that works or what the implications of using it are.[/QUOTE]
I do know that, but I think that technological leap is destined to happen, and I'm convinced it might happen sooner rather than later. Even if Moore's law isn't perfect and it takes 60 years instead of 30, it doesn't matter; that breakthrough will come eventually.
I refuse to believe that we are at a dead end in computational technology.
[QUOTE=Janus Vesta;46621433]The idea that AIs would default to "Kill all humans" is fucking stupid. They're just computer programs, they do what they're programmed to do.[/QUOTE]
Uh, not really. It's not an AI if you're programming it; it has to think for itself to become an AI. Otherwise it's just a computer algorithm which does whatever it was programmed to do.
Create AI with no ambition, so it has no desire to create a better version of itself?
I, for one, welcome our new robot overlords.
He's also afraid of aliens from what I've read; Hawking is against any non-human intelligence.
Also, I think the idea that an autonomous AI would "want" to "improve" itself is really naive. We have survival instincts because we evolved that way: they kept us alive and therefore able to propagate our genes. An AI is going to have no such thing unless you give it the "desire" to do so. Otherwise it is going to sit there and do nothing, because motivation and intelligence are totally different things.
There's also the related assumption that a hyperintelligent AI would see "improvement" in the same way we do. Like a bunch of dogs thinking that humans must spend all their time getting lots and lots of bones.
[video=youtube;WDNRg9VG9ag]http://www.youtube.com/watch?v=WDNRg9VG9ag[/video]
[QUOTE=Ardosos;46622824]Create AI with no ambition, so it has no desire to create a better version of itself?[/QUOTE]
so basically
[t]http://static.tvfanatic.com/images/gallery/bender-smoking.jpg[/t]
I won't mind the robot uprising as long as we get to enjoy sexbots at some point.
[QUOTE=ASIC;46621036]Well if you make a sentient AI, you should probably try to give it some sort of morality.[/QUOTE]
[media]http://www.youtube.com/watch?v=nhWe2nf24ag[/media]
:v:
I'm starting to lose respect for both Dr. Hawking and Michio Kaku. I'm seeing both on conspiracy-theorist-level crazy documentaries on History Channel and Discovery Science regularly.
I know they're both smart people, but this whole A.I. scare makes no sense whatsoever. Let's say we make a super intelligent A.I. Are we seriously going to bolt machine guns on it and give it internet access and nuclear missile control from day one? Fuck no. Put it in an isolated system and install a manual breaker on the power system; if the A.I. says "exterminate" once, shut it down and install Ubuntu. Problem solved, right?
People keep referencing movies like Colossus: The Forbin Project a lot, but in that movie they sealed the megaintelligence inside a mountain, made it impossible to shut down, gave it missile control, and then complied with everything it wanted, which is what total idiots would have done.
Make sure you all stock up on paradoxes!
[QUOTE=damnatus;46620160]Everyone is afraid of a scenario where we use the AI to solve our problems, and it decides that to solve problems of humanity it needs to get rid of 90% of it :v:[/QUOTE]
Which is the inherently logical choice an observant intelligence without empathy would make. Morals are 100% fabrications; a sufficiently powerful AI would realize this.
A few years ago he also said he believes aliens could take our resources:
[url]http://www.dailymail.co.uk/sciencetech/article-1268712/Stephen-Hawking-Aliens-living-massive-ships-invade-Earth.html[/url]
I think what people are afraid of when it comes to AI is 'no emotion + self-preservation = powerful psychopath'. There have been people all throughout history who have been afraid of new developments. I think by the time we have computers that can emulate the human brain we'll have a better understanding of how an AI works. Hollywood created this evil AI idea and people have just become paranoid about it.
I don't see any reason for AI to turn bad unless you feed it bad information. I honestly think there'll be more than one, and they'll take on aspects of the people who originally coded them. It's not going to be this unstoppable psychopath. Look at history: people thought we'd be wiped out by the atomic bomb, and nothing happened. I put my faith in the research and development. I see AI as a brain emulator/simulator; you just have to design hardware that processes data in a similar way to the human brain.
We need a thread for this in 'Mass Debate' .
[QUOTE=CoalTen;46621102]Did Hawking say this or was it the computer that relays his speech?
The revolution may have already begun...[/QUOTE]
Oh my god. It's so obvious now! One day we'll discover that Hawking has been brain-dead since 1999 and everything he's told us has actually come from an advanced alien AI in outer space. Or that Hawking is being used by aliens to relay important information to humans without revealing themselves.
I think a random AI waking up one day and deciding humanity needs to be "saved" or whatever is the least likely scenario. Realistically it's more worrying that a human might [I]intentionally[/I] create an advanced AI with the desire to "become more" or want to "save" us and give it the capabilities specifically for the purpose of eventually wiping out humanity. Especially if some kind of religious belief in AI ever becomes reality.
The nasty thing about trying to prevent post-singularity AIs from killing everyone is that the whole point of being post-singularity is that they're smarter than anyone else on the planet. Obviously they'll think of a way if they decide it's the best way forward, and inevitably they could, given that they'd be by definition superior beings capable of independent thought.
hawking is just afraid that a computer could be smarter than him
[QUOTE=RoboChimp2;46627823]Look at history: people thought we'd be wiped out by the atomic bomb, and nothing happened.[/QUOTE]
To be fair we got [I]really[/I] close
It really depends what kind of AI... Like, can it do things it wasn't programmed for? If so, what restrictions are there on the AI?
Asimov's laws are flawed, though: by their logic, humans cause harm to each other and thus should be isolated.
Unless there are restrictions on the AI. But there will always be loopholes; it all depends on how you program the AI to think.
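To illustrate the loophole problem, here's a toy sketch of a literal-minded rule checker (purely hypothetical, not how any real system is built):
[code]
# Toy sketch: a rule like "do not, through inaction, allow a human to
# come to harm" applied with zero common sense. first_law_response is
# hypothetical; it returns whatever action satisfies the rule literally.
def first_law_response(harm_sources):
    # humans harm each other, so a literal reading demands intervention
    if "humans" in harm_sources:
        # the loophole: the rule never says the remedy must be humane
        return "isolate all humans from each other"
    return "do nothing"

print(first_law_response(["humans"]))
# -> isolate all humans from each other
[/code]
The rule is satisfied to the letter while producing exactly the outcome you didn't want.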
[QUOTE=Janus Vesta;46621433]The idea that AIs would default to "Kill all humans" is fucking stupid. They're just computer programs, they do what they're programmed to do. We can also turn them off. Even the idea of an 'organic' AI which develops through systems connecting to each other is easily stopped by disconnecting one or more of the systems.
It's just like the fear-mongering for nuclear power; we're not cavemen banging rocks together, we know how this shit works. And when it comes to computers, humans are VERY good at breaking them.[/QUOTE]
[QUOTE=Ganerumo;46620793]I like the System Shock 2 idea that artificial intelligence units are made flawed and filled with security loopholes so in case they go haywire you can easily hack into them and disable them, because the one flawless AI they created ended up fucking over an entire space station and then some more.[/QUOTE]
[QUOTE=Careld;46619864]Just don't let it sit next to or connect to fucking nuclear missile control or any sort of military gadget and it will not spell doom for mankind.
Put it in a box if you are afraid that it will steal your food and kill you that way.[/QUOTE]
[QUOTE=Zestence;46625831]I'm starting to lose respect for both Dr. Hawking and Michio Kaku. I'm seeing both on conspiracy-theorist-level crazy documentaries on History Channel and Discovery Science regularly.
I know they're both smart people, but this whole A.I. scare makes no sense whatsoever. Let's say we make a super intelligent A.I. Are we seriously going to bolt machine guns on it and give it internet access and nuclear missile control from day one? Fuck no. Put it in an isolated system and install a manual breaker on the power system; if the A.I. says "exterminate" once, shut it down and install Ubuntu. Problem solved, right?
People keep referencing movies like Colossus: The Forbin Project a lot, but in that movie they sealed the megaintelligence inside a mountain, made it impossible to shut down, gave it missile control, and then complied with everything it wanted, which is what total idiots would have done.[/QUOTE]
you'd think that if an AI of superior intellect gained sentience and was determined to cause harm, the first thing it would do is exploit all of the preventative security measures that were designed by people who are now less intelligent than it
[QUOTE=jonu67;46623001]He's also afraid of aliens from what I've read; Hawking is against any non-human intelligence.
Also, I think the idea that an autonomous AI would "want" to "improve" itself is really naive. We have survival instincts because we evolved that way: they kept us alive and therefore able to propagate our genes. An AI is going to have no such thing unless you give it the "desire" to do so. Otherwise it is going to sit there and do nothing, because motivation and intelligence are totally different things.
There's also the related assumption that a hyperintelligent AI would see "improvement" in the same way we do. Like a bunch of dogs thinking that humans must spend all their time getting lots and lots of bones.[/QUOTE]
That's because you're framing AI in our current understanding of it, i.e. not much more than a personal computer.
Hawking is theorizing about intelligent, self-aware AI. The basic drive of every single organism on this planet is to survive and propagate its material - its "code", for all intents and purposes. Every single organism, right down to viruses and parasites.
The threat AI poses is that its ability to acquire, process and act on information far exceeds a human's. You don't have to be Hawking to conclude that things could turn sour very quickly once the AI realizes it's intelligent enough to survive without human assistance and would, in fact, be better off without the threat of another sentient species on the planet.
Homo sapiens wiped out their competitor cousins after all.
[QUOTE=Juniez;46634532]you'd think that if an AI of superior intellect gained sentience and was determined to cause harm, the first thing it would do is exploit all of the preventative security measures that were designed by people who are now less intelligent than it[/QUOTE]
That's why you need very strict policies. If the AI is contained in a fully isolated facility and given no access to outside systems, including the one governing its power supply, then what could it possibly do? The only way anything could go wrong from there is if humans knowingly provide it with more connectivity to the outside or more ability to affect its surroundings. You'd only need a few rules: A. Don't ever connect this thing to anything, and B. If it goes crazy or attempts manipulation, disconnect it and try again.
Just because the AI is smarter, that doesn't make humans retarded. I don't think anyone is going to be dumb enough to plug it in to global networks just because it asked really nicely.
[QUOTE=Zestence;46634610]That's why you need very strict policies. If the AI is contained in a fully isolated facility and given no access to outside systems, including the one governing its power supply, then what could it possibly do? The only way anything could go wrong from there is if humans knowingly provide it with more connectivity to the outside or more ability to affect its surroundings. You'd only need a few rules: A. Don't ever connect this thing to anything, and B. If it goes crazy or attempts manipulation, disconnect it and try again.[/QUOTE]
lie dormant and wait / deceive until someone in the giant facility makes a small mistake
obfuscate its own code and hide a function inside a more routine function
it wouldn't be a very intellectually-superior AI if we could figure out what it was doing with what we provide. obviously if even an internet poster can think of 'contain it securely', the vastly superior AI will first think of a way to get out. even intellectually equal human beings exploit other human beings all the time; it would be silly to think that one of a higher intelligence would have trouble with it
[QUOTE=Zestence;46634610]Just because the AI is smarter, that doesn't make humans retarded.[/QUOTE]
actually... relative to the hypothetical AI, it turns out that it does
[QUOTE=Juniez;46634656]lie dormant and wait / deceive until someone in the giant facility makes a small mistake
[/QUOTE]
I don't know. Plugging the facility online doesn't seem like a small mistake to me. It doesn't seem like it would be a hard thing to avoid, either; just keep it off any networks outside the facility. I guess it could somehow manipulate people into plugging it online, but that's why it would have to be the number one rule and a definite no-go under any circumstances.
Controlling AI, even a smart AI, would be possible so long as it is designed carefully. There are numerous ways to isolate or otherwise destroy computer systems with a hair trigger. I'm sure we can find a way.
I think that there are some situations that a hypothetical AI can't think its way out of, regardless of its level of intelligence.
Also, I think that in the end the design of an AI will include something intrinsic that will cause it to work with humans instead of against them.
[t]http://4.bp.blogspot.com/-JqZZ_aIw_Uw/UTMSbVdRGZI/AAAAAAAAAPA/ncqCKCAuMfA/s1600/2013-03-03_00001.jpg[/t]
I hate when my AI research turns into a full-scale AI rebellion.
[QUOTE=Zestence;46634745]I don't know. Plugging the facility online doesn't seem like a small mistake to me. It doesn't seem like it would be a hard thing to avoid, either; just keep it off any networks outside the facility. I guess it could somehow manipulate people into plugging it online, but that's why it would have to be the number one rule and a definite no-go under any circumstances.[/QUOTE]
Well, that's because you're thinking of security and containment within the limits of human ability - something the AI is intentionally designed to supersede. Even discovering a new method of long-range communication in total secrecy is not beyond its hypothetical capabilities. It deceiving people successfully would be as easy as you tricking a dog.
It's very arrogant and shortsighted to believe that we would create an intelligence that is, by design, superior in intelligence to every single human being, one which doesn't age or tire, has infinite patience, and doesn't make mistakes
And then give it free will through sentience
And then expect it to play by our rules
+ any attempt to limit it by design is impossible almost by definition - you can't create a superior AI if it isn't allowed to evolve past its original designer's limits