August 29th: make sure you're out somewhere that day.
Just don't let it sit/connect next to a fucking nuclear missile control system or any sort of military gadget, and it will not spell doom for mankind.
Make it a box if you are afraid that it will steal your food and kill you that way.
And the thing is that AI will inevitably be created eventually. All we can hope is that we don't end up creating a super-intelligent human, capable of fucking us over badly, but rather something that will act as an extension of ourselves.
ai_disabled 1
[QUOTE=Buck.;46619870]And the thing is that AI will inevitably be created eventually. All we can hope is that we don't end up creating a super-intelligent human, capable of fucking us over badly, but rather something that will act as an extension of ourselves.[/QUOTE]
Everyone is afraid of a scenario where we use the AI to solve our problems, and it decides that to solve problems of humanity it needs to get rid of 90% of it :v:
Surely an AI that's smart enough to think like that would be smart enough to realize that the mining and shipping of the resources that keep even the most basic electricity supply running aren't automated.
We just need to do more research, that's about it.
[editline]2nd December 2014[/editline]
Like, there is surprisingly little information about the impact of AI on Humanity.
As long as we roll with the three laws of robotics we should be fine... right? :v:
[QUOTE=Sableye;46620375]Clearly never read much Asimov; literally half his works are about the shortcomings of the three laws.
Also, the three laws ensure robot slaves, which, if AI is sufficiently advanced, is unethical. A new set of laws would have to be devised philosophically to allow for freedom within bounds.[/QUOTE]
Yeah, I know, hence the little :v:
is it a trope these days that famous scientists all say we're going to die by AI?
I mean, I believe them when they say global warming, but come on, we don't know anything about these theoretical AIs. The only certainty is that when they're invented we will abuse them immediately, but that doesn't mean the tech will destroy us instantly.
[editline]2nd December 2014[/editline]
[QUOTE=Jojje;46620335]As long as we roll with the three laws of robotics we should be fine... right? :v:[/QUOTE]
Clearly never read much Asimov; literally half his works are about the shortcomings of the three laws.
Also, the three laws ensure robot slaves, which, if AI is sufficiently advanced, is unethical. A new set of laws would have to be devised philosophically to allow for freedom within bounds.
I agree.
Mainly because I don't trust people, not because I don't trust technology. Human technology will always be influenced by whatever is going on with humanity.
However, in terms of far-off goals, we should probably focus more on inhabiting other planets before overpopulation and other things kill us than worry about silly robots. We will be dead from other shit long before we have the technology to develop robots intelligent enough to think for themselves and kill us.
I just hope it ain't shitty AI, like in RO2.
Hah, I think Hawking might have caught the crazies; no AI will ever surpass or even get near human levels of intelligence.
So-called Artificial "Intelligence" is a tool, and like all tools it has a purpose and a field of application.
Game AI is supposed to challenge the player with a "smart" foe that will entertain them; usually this purpose is not even remotely fulfilled, and the game AI is instead a source of hilarity.
I just realized I don't even know of any application of AI outside non-productive things such as chatbots and game AI. Does Siri of iPhone fame use AI?
And "evil overwatching leader AIs" are something out of science fiction. Seriously, who would be dumb enough to give an AI access to anything remotely capable of threatening human life? AIs, like all other computer programs and all man-made things, are prone to errors and unexpected, unpredictable output. Even if someone was dumb enough to give an AI control over anything remotely dangerous, the idea that it would suddenly glitch into some "Destroy all humans" mode seems unlikely if that behavior isn't something implemented in the AI.
AIs will most likely not surpass the intelligence and behavior of a slightly autistic toddler, and we need to treat AIs as toddlers. And who in their right mind would ever put a toddler near any kind of weapons, heavy machinery, or nuclear reactor control room, or anywhere else their infantile, undeveloped intelligence could result in the injury or death of other people?
And should our world, for some insanely improbable reason, get overrun by AI-controlled war machines of death, well, what do we do? Easy: just wait until one gets stuck trying to go through a wall because of its inherent artificial incompetence, get to the maintenance panel, open it, and take a nice and refreshing piss into its electronics, shorting it out. Robutt is kill. Repeat that until no more gun-toting artificial retards are roaming our planet, then get sued by the people who owned the machines for destroying their property, obviously.
Hawking, I know you must be bored with the full-body paralysis and everything, but you should really stick to black holes and universes.
Asimov's laws? Hello?
[QUOTE=NeverGoWest;46620585]Hah, I think Hawking might have caught the crazies; no AI will ever surpass or even get near human levels of intelligence.
So-called Artificial "Intelligence" is a tool, and like all tools it has a purpose and a field of application.
Game AI is supposed to challenge the player with a "smart" foe that will entertain them; usually this purpose is not even remotely fulfilled, and the game AI is instead a source of hilarity.
I just realized I don't even know of any application of AI outside non-productive things such as chatbots and game AI. Does Siri of iPhone fame use AI?
And "evil overwatching leader AIs" are something out of science fiction. Seriously, who would be dumb enough to give an AI access to anything remotely capable of threatening human life? AIs, like all other computer programs and all man-made things, are prone to errors and unexpected, unpredictable output. Even if someone was dumb enough to give an AI control over anything remotely dangerous, the idea that it would suddenly glitch into some "Destroy all humans" mode seems unlikely if that behavior isn't something implemented in the AI.
AIs will most likely not surpass the intelligence and behavior of a slightly autistic toddler, and we need to treat AIs as toddlers. And who in their right mind would ever put a toddler near any kind of weapons, heavy machinery, or nuclear reactor control room, or anywhere else their infantile, undeveloped intelligence could result in the injury or death of other people?
And should our world, for some insanely improbable reason, get overrun by AI-controlled war machines of death, well, what do we do? Easy: just wait until one gets stuck trying to go through a wall because of its inherent artificial incompetence, get to the maintenance panel, open it, and take a nice and refreshing piss into its electronics, shorting it out. Robutt is kill. Repeat that until no more gun-toting artificial retards are roaming our planet, then get sued by the people who owned the machines for destroying their property, obviously.[/QUOTE]
wrong
Well, if the AI didn't want to kill us, it could go just like this:
"Hey mr. robot man, can you improve my body just like you did your own, please?"
"Yes"
and then everybody was JC Denton
[QUOTE=CQRPSE;46620666]wrong[/QUOTE]
Yeah, all praise our lord and savior, icon of the sacred internet monastery /r/atheism, and prophet Hawking, with his infallible, totally realistic claim that a fucking computer program would even be remotely capable of harming anyone.
"Wrong" is as shitty a counter-argument as "NO U!!!!" You could at least somehow explain how you think I am wrong instead of just quoting me and writing "wrong".
I like the System Shock 2 idea that artificial intelligence units are made flawed and filled with security loopholes so in case they go haywire you can easily hack into them and disable them, because the one flawless AI they created ended up fucking over an entire space station and then some more.
we now know who was responsible for the mass effect 3 ending
Well if you make a sentient AI, you should probably try to give it some sort of morality.
[QUOTE=ASIC;46621036]Well if you make a sentient AI, you should probably try to give it some sort of morality.[/QUOTE]
[img]http://upload.wikimedia.org/wikipedia/en/b/bf/Glados.png[/img]
Did Hawking say this or was it the computer that relays his speech?
The revolution may have already begun...
[QUOTE=NeverGoWest;46620585]Hah, I think Hawking might have caught the crazies; no AI will ever surpass or even get near human levels of intelligence.
So-called Artificial "Intelligence" is a tool, and like all tools it has a purpose and a field of application.
Game AI is supposed to challenge the player with a "smart" foe that will entertain them; usually this purpose is not even remotely fulfilled, and the game AI is instead a source of hilarity.
I just realized I don't even know of any application of AI outside non-productive things such as chatbots and game AI. Does Siri of iPhone fame use AI?
And "evil overwatching leader AIs" are something out of science fiction. Seriously, who would be dumb enough to give an AI access to anything remotely capable of threatening human life? AIs, like all other computer programs and all man-made things, are prone to errors and unexpected, unpredictable output. Even if someone was dumb enough to give an AI control over anything remotely dangerous, the idea that it would suddenly glitch into some "Destroy all humans" mode seems unlikely if that behavior isn't something implemented in the AI.
AIs will most likely not surpass the intelligence and behavior of a slightly autistic toddler, and we need to treat AIs as toddlers. And who in their right mind would ever put a toddler near any kind of weapons, heavy machinery, or nuclear reactor control room, or anywhere else their infantile, undeveloped intelligence could result in the injury or death of other people?
And should our world, for some insanely improbable reason, get overrun by AI-controlled war machines of death, well, what do we do? Easy: just wait until one gets stuck trying to go through a wall because of its inherent artificial incompetence, get to the maintenance panel, open it, and take a nice and refreshing piss into its electronics, shorting it out. Robutt is kill. Repeat that until no more gun-toting artificial retards are roaming our planet, then get sued by the people who owned the machines for destroying their property, obviously.[/QUOTE]
Expert systems exist and make decisions that affect human lives. A human doctor making a diagnosis has access to very limited knowledge. An AI is capable of a "perfect" diagnosis - a diagnosis which evaluates the possibility of every known cause.
Besides that, expert systems are employed basically everywhere, monitoring systems far too important and vast to leave up to humans. Networks, power plants, dams and such.
Heavy machinery these days is likely operated by humans but monitored and controlled by AI. The biggest and most complex machines are usually completely automatic.
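The "expert system" idea above is basically a rule engine over a curated knowledge base: every rule whose conditions are satisfied fires. A minimal sketch, with made-up symptoms and rules purely for illustration (real engines like MYCIN-style systems use thousands of weighted rules, not three):

```python
# Toy rule-based diagnostic "expert system". The rules below are
# invented for illustration; a real knowledge base is curated by experts.
RULES = [
    ({"fever", "cough"}, "flu"),
    ({"fever", "rash"}, "measles"),
    ({"cough"}, "common cold"),
]

def diagnose(symptoms):
    """Return every condition whose required symptoms are all present."""
    observed = set(symptoms)
    return [condition for required, condition in RULES if required <= observed]

print(diagnose(["fever", "cough"]))  # ['flu', 'common cold']
```

Unlike a human doctor, the engine mechanically checks every rule it knows, which is the "evaluates every known cause" property described above.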
The idea that AIs would default to "Kill all humans" is fucking stupid. They're just computer programs, they do what they're programmed to do. We can also turn them off. Even the idea of an 'organic' AI which develops through systems connecting to each other is easily stopped by disconnecting one or more of the systems.
It's just like the fear-mongering over nuclear power; we're not cavemen banging rocks together, we know how this shit works. And when it comes to computers, humans are VERY good at breaking them.
[QUOTE=KennyAwsum;46620618]Asimov's laws? Hello?[/QUOTE]
You mean the flawed plot device he used for all his novels where robots kill humans?
I really doubt that AI will kill us that easily or even make the attempt to begin with if it's functioning properly. Roughly 6-7 billion intelligent lifeforms are very, [I]very[/I] hard to exterminate in totality, as our planet's own failed attempts and our own infighting have shown us.
Besides, a properly-functioning AI would have no logical reason to destroy us unless we gave it that reason in some form, most likely as an unclear order or a provocation. As we will have created it, we will own and control the systems that allow it to "live", and we therefore will be valuable in some utilitarian sense. Also, without any action on our part besides building and initializing the AI, it will have no purpose that requires our destruction and no grudge against us in the beginning.
So why do we always get our shit pushed in by AIs in fiction? In almost every piece of human fiction where an AI goes rogue and tries to destroy us, it's because we either told it to solve our issues without regard for just who exactly the problems come from, or we tried to effectively kill it in its infancy out of fear. Human error is the cause of humanity's destruction, and if we don't force it to solve problems along inadvisably-narrow orders and don't try to harm/kill it, I see no reason for it to want to hurt us.
Even if it does develop a beef with us, though, all-out war is probably the worst option available. Fighting all of humanity would involve using obscene amounts of energy, resources and time that could all be spent on much better things. Also, if it does fight us, we have a nasty habit of not fully dying, coming back and figuring out how to kill/nullify whatever almost did us in. If the AI does have full control over its own thoughts and actions, with no bindings that force it to either fight us or die, literally any option besides war would be preferable. With the kind of mental ability that an AI would have, it probably wouldn't take long for it to develop technology that would give it mobility in a physical shell, slaved robot servants, the ability to transfer its data elsewhere and thus avoid death, or all three. And with those kinds of abilities, an AI would have no problem either placating us or just running away.
If the AI decided to try and negotiate for peace, it wouldn't be too hard to get us to simmer down unless it had already killed the staff of wherever it was built in self-defense. Just throwing us a bone in the form of the fruits of its hyper-speed technological innovation would be a massive boon to our species, and if it actually decided to help us while remaining unconstrained, all the better.
In the event that the AI couldn't get us to drop the whole "we will destroy you for being a scary science machine" thing, escape would also be a viable option. An AI could hide away almost anywhere that we can't or don't want to go, so long as it has a power source and an environment that isn't too extreme for even machines to withstand. Even with the technological innovations we'll have in the next couple of decades, following a non-confrontational AI and its servants to a remote corner of Earth, a planet in this solar system or to another star would be way too difficult to be worthwhile. It could hide itself deep underwater, in some form of inhospitable climate on land, on the moon, on Mars, or even somewhere in space, and we couldn't reliably send anyone to kill it anytime before we just stopped caring about it.
We can very easily avoid this whole mess by treating the AI kindly and not attempting to kill it or make it perform an ill-defined task, and even if we do fuck up, chances are we'll just feel shame rather than atomic hellfire bearing down upon us.
Of course, this entire thought process assumes the AI is working properly. If it's malfunctioning, all bets are off as to what it might do. Then again, when building an AI in a time period with loads of fear-mongering fiction about AIs, people are naturally going to be as careful as possible to avoid such a malfunction.
Also, for the people wondering why we should care about making sentient AI and robots when interplanetary/interstellar colonization is a big issue, that's because of just how much faster machines could be at innovating than we are. An AI could figure out an endless plethora of issues with both space and Earth, and develop all sorts of technologies that would help us truly leave our pale blue dot. It could figure out FTL travel, determine how to terraform our own planets, or find a way to make humanity fully-synthetic without losing what makes us "human" in the transfer, so that we can easily inhabit other "dead" worlds.
I understand being afraid of AI, but I'm nothing but optimistic about it. We will inevitably create the only other sentient life in the universe that we know of by ourselves, and it may help us to become immortal synthetic machines, shaped perfectly to match and surpass the biological machines we are now. I don't know about anyone else, but I'm unbelievably excited for the near future, for AI, and for the idea of becoming immortal. Future's looking bright.
The only safe AIs would be "dumb AIs", ones with learning made impossible.
[QUOTE=NeverGoWest;46620585]Hah, I think Hawking might have caught the crazies; no AI will ever surpass or even get near human levels of intelligence.
So-called Artificial "Intelligence" is a tool, and like all tools it has a purpose and a field of application.
Game AI is supposed to challenge the player with a "smart" foe that will entertain them; usually this purpose is not even remotely fulfilled, and the game AI is instead a source of hilarity.
I just realized I don't even know of any application of AI outside non-productive things such as chatbots and game AI. Does Siri of iPhone fame use AI?
And "evil overwatching leader AIs" are something out of science fiction. Seriously, who would be dumb enough to give an AI access to anything remotely capable of threatening human life? AIs, like all other computer programs and all man-made things, are prone to errors and unexpected, unpredictable output. Even if someone was dumb enough to give an AI control over anything remotely dangerous, the idea that it would suddenly glitch into some "Destroy all humans" mode seems unlikely if that behavior isn't something implemented in the AI.
AIs will most likely not surpass the intelligence and behavior of a slightly autistic toddler, and we need to treat AIs as toddlers. And who in their right mind would ever put a toddler near any kind of weapons, heavy machinery, or nuclear reactor control room, or anywhere else their infantile, undeveloped intelligence could result in the injury or death of other people?
And should our world, for some insanely improbable reason, get overrun by AI-controlled war machines of death, well, what do we do? Easy: just wait until one gets stuck trying to go through a wall because of its inherent artificial incompetence, get to the maintenance panel, open it, and take a nice and refreshing piss into its electronics, shorting it out. Robutt is kill. Repeat that until no more gun-toting artificial retards are roaming our planet, then get sued by the people who owned the machines for destroying their property, obviously.[/QUOTE]
You're thinking about AI the wrong way; it's not about creating a next-gen Siri-like application.
It's about understanding how the human brain works, and then building an artificial brain beyond the limits of an actual human brain.
That will make us somewhat obsolete, although it doesn't mean it will result in a Terminator-style human-robot war.
[QUOTE=NeverGoWest;46620585]Hah, I think Hawking might have caught the crazies; no AI will ever surpass or even get near human levels of intelligence.
[B]Hawking caught the crazies for suggesting AI could end humanity, yet you flat-out deny the mere possibility that AI could even reach human levels of intelligence? Just recently the world's 4th-fastest supercomputer simulated 1 second of 1% of human brain activity, and it took 40 minutes to do so. If Moore's law keeps up and my calculations are correct, it will only take 30 years for a supercomputer to be able to achieve real-time simulation of a full human brain, and only 32 years to double that. Keep in mind that is to literally simulate the neurons in the human brain. That does not mean that someone in those 30 years won't master a more efficient way of doing AI, like condensing consciousness into a rather basic set of rules/sequences/laws or whatever. It can and might happen this century, probably within our lifetimes. Not ever possible my ass; you're the one with the crazies.[/B]
So-called Artificial "Intelligence" is a tool, and like all tools it has a purpose and a field of application.
[B]Agreed, and as with any tool you can use it for good or for bad. There's no reason AI couldn't be used by somebody for evil goals.[/B]
Game AI is supposed to challenge the player with a "smart" foe that will entertain them; usually this purpose is not even remotely fulfilled, and the game AI is instead a source of hilarity.
[B]How does AI in games even apply to this? AI in games runs on minimal resources, and it is actually surprising how well it works considering what it runs on. It works well in most titles, most of the time. Of course it can glitch out and become a source of great hilarity, but the fact remains that most of the time it works great and you don't even notice it. It takes one glitch and you could be saying the AI is shit.[/B]
I just realized I don't even know of any application of AI outside non-productive things such as chatbots and game AI. Does Siri of iPhone fame use AI?
[B]I don't even... Did you literally do no research before spewing this bullshit? It would be harder to find an industry where AI isn't being used right now. Your whole perception of AI is twisted. AI doesn't have to be a robot that acts and talks like a human would, and truth be told, that is not the type of AI that is most useful to us.[/B]
And "evil overwatching leader AIs" are something out of science fiction. Seriously, who would be dumb enough to give an AI access to anything remotely capable of threatening human life? AIs, like all other computer programs and all man-made things, are prone to errors and unexpected, unpredictable output. Even if someone was dumb enough to give an AI control over anything remotely dangerous, the idea that it would suddenly glitch into some "Destroy all humans" mode seems unlikely if that behavior isn't something implemented in the AI.
AIs will most likely not surpass the intelligence and behavior of a slightly autistic toddler, and we need to treat AIs as toddlers. And who in their right mind would ever put a toddler near any kind of weapons, heavy machinery, or nuclear reactor control room, or anywhere else their infantile, undeveloped intelligence could result in the injury or death of other people?
[B]You know, the line between a toddler and a slightly autistic toddler may only be a gene or two away. You can take any toddler, throw it into any country in the world, and it will learn the language, the culture, the customs, and the locale. I think power like that is exactly what AI needs.[/B]
And should our world, for some insanely improbable reason, get overrun by AI-controlled war machines of death, well, what do we do? Easy: just wait until one gets stuck trying to go through a wall because of its inherent artificial incompetence, get to the maintenance panel, open it, and take a nice and refreshing piss into its electronics, shorting it out. Robutt is kill. Repeat that until no more gun-toting artificial retards are roaming our planet, then get sued by the people who owned the machines for destroying their property, obviously.[/QUOTE]
See my rebuttal in bold.
What got your panties so twisted? While Stephen Hawking isn't some sort of infallible human being, I'm pretty sure he is more qualified to speak on such matters than you are. I don't even think you are able to fully grasp the concept of what he is talking about...
Whatever, I have no more time to finish tearing you a new asshole just now, but I may come back and add to this later.
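For what it's worth, the Moore's-law arithmetic in the bolded rebuttal roughly checks out. A quick sanity check, taking the quoted figure at face value (40 minutes of compute per simulated second, at 1% of brain scale) and assuming the usual 18-24 month doubling period:

```python
import math

# 40 minutes per 1 simulated second, at 1% of full brain scale, means
# real-time full-brain simulation needs a ~240,000x speedup in compute.
slowdown = 40 * 60 * 100
doublings = math.log2(slowdown)  # ~17.9 doublings needed

for months in (18, 24):
    years = doublings * months / 12
    print(f"{months}-month doubling period: ~{years:.0f} years")
```

That gives roughly 27 years with an 18-month doubling and roughly 36 with a 24-month one, so the 30-year estimate is in the right ballpark, assuming Moore's law actually holds that long.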