[QUOTE=Mr. Someguy;22626364]They should have a voice-recognised off switch that is hardwired into the system and cannot be overridden without a complete replacement of the CPU / motherboard. That way, if it ever gets out of hand, a simple command said in a stern tone (in the same manner you'd address a bad dog, for example: "OFF!") will force it to shut down if for any reason it decides to do harm.[/QUOTE]
Laputan Machine
i say we digitize a human brain and make that the ai
absolutely nothing will go wrong. :downs:
Artificial intelligence wouldn't need to be programmed with laws. Artificial intelligence makes decisions for itself.
[QUOTE=ProboardslolV2;22638818]Artificial intelligence wouldn't need to be programmed with laws. Artificial intelligence makes decisions for itself.[/QUOTE]
You're kind of missing the point: to guard ourselves from any "bad" (bad for us, mind you) decisions, they need laws and rules to stop them from deciding the human race is inferior and killing us all off, assimilating us, etc.
We'll have a Borg/Cybermen situation.
except there will be no captain picard or doctor to save our asses
[QUOTE=Lebowski;22638876]You're kind of missing the point: to guard ourselves from any "bad" (bad for us, mind you) decisions, they need laws and rules to stop them from deciding the human race is inferior and killing us all off, assimilating us, etc.
We'll have a Borg/Cybermen situation.
except there will be no captain picard or doctor to save our asses[/QUOTE]
We would technically be inferior physically and mentally. They would enslave us.
[QUOTE=zombiefreak;22619208]This is such an important topic that I have no idea why politicians aren't bringing this up. We are going to have artificial intelligence in 20-50 years and this is an important issue.[/QUOTE]
Umm.. yeah. 20-50 years is a long fucking time, are you seriously wondering why it's not being discussed?
[QUOTE=catch33;22644523]Umm.. yeah. 20-50 years is a long fucking time, are you seriously wondering why it's not being discussed?[/QUOTE]
That's artificial intelligence that meets and/or exceeds our level of intelligence. Robots that can learn and behave autonomously are already here and may be integrated into society within the next 10 years (wild guess).
Star Wars?! FUCKING EPISODE 1!!!!!!!!!! AI owns those fucking frog like 'Gungans'. ^^ Tanks and shit'z!!!
If robots became common I'd get an uncanny valley one just to fuck with people.
[QUOTE=zombiefreak;22619208]This is such an important topic that I have no idea why politicians aren't bringing this up. We are going to have artificial intelligence in 20-50 years and this is an important issue. We have no regulation or basis on what our future robots are going to have programed. Should we allow them complete free will? Should we restrict them? Should we enslave them?
Great sci-fi writers, such as Isaac Asimov, addressed such issues by introducing the Three Laws of Robotics.
However great these laws are, they had loopholes. For example, a robot might intervene in a situation where someone appears to the robot to be endangering himself (such as someone staying in a radioactive chamber for too long), but is really in no harm at all. So he modified the First Law to read:
However, this caused an even greater problem: what if the robot decided not to help a human who actually was in danger? This is where he put in the Zeroth Law.
This is saying that the robot will not harm a human if it judges that the situation permits.
Some people later made slight modifications, such as that a robot must identify itself as a robot, or that a robot is a specific class of person, etc.
Now even the final rules have a major loophole in them. What if the robot does not know that it is harming humanity?
So facepunch, what do you think of the issue?[/QUOTE]
[i]Someone[/i] just finished [i]I, Robot[/i].
The Three Laws are completely flawed. The series says they are "perfect", but in reality they are far from it. Take the test from "Little Lost Robot", I think it was.
The test was used to find an experimental robot that had escaped and hidden among other, normal robots. A man would be strapped into a chair and a weight dropped toward him (though the weight would be knocked aside at the last second); the robots would sit in front of the man, and when the weight was dropped, they were expected to jump for it. A radiation field was placed between the robots and the man, so any robot that crossed it would be destroyed. Normal robots would have to cross it anyway, because they "could not allow a human to come to harm through inaction". The experimental robot didn't have that clause, though, so it wouldn't have to jump to help.
When the weight was dropped, not a single robot moved. Why? Because the robots had realized that if they jumped to try to save this one man, they would be destroyed without saving him; and they also reasoned: what if someone else needs help in the future, and I would have to help them? If I die here, they too would come to harm, for no reason.
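As a toy sketch, the robots' reasoning boils down to an expected-harm comparison. Every number below is invented purely for illustration; nothing like this appears in the story itself:

```python
# Toy expected-harm model of the "Little Lost Robot" test. Crossing the
# radiation field destroys the robot *without* saving the man, so staying
# put preserves the robot's ability to prevent harm later. All
# probabilities and weights here are hypothetical.

P_SAVE_IF_CROSS = 0.0        # the field destroys the robot before it arrives
FUTURE_LIVES_AT_STAKE = 1.0  # expected humans a surviving robot later protects

def expected_harm(cross):
    """Expected total harm to humans for a given choice."""
    if cross:
        # The man is harmed anyway, and future humans lose their robot too.
        return (1 - P_SAVE_IF_CROSS) + FUTURE_LIVES_AT_STAKE
    # The man is harmed now, but the robot survives to help later.
    return 1.0

print(expected_harm(cross=True))   # 2.0
print(expected_harm(cross=False))  # 1.0
```

Under this accounting, staying put minimizes expected harm, so even the normal First Law robots freeze.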
The problems with artificial intelligence are great, because in creating it we would give the robot its own free will, and in that, we would give it humanity. Thus it would be impossible to control it the way we want it to be controlled, so it wouldn't be a positronic man, it would be a positronic [i]human[/i]. We can't even control our own humanity; how would we be able to make a [i]new[/i] humanity and expect it to act how we want it to?
The variables completely destroy the purpose.
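The law ordering the OP quoted is, mechanically, just a priority scheme: each law binds only where no higher-numbered concern overrides it. A toy sketch of that evaluation order, where every predicate is a hypothetical stand-in for a judgment nobody actually knows how to implement:

```python
# Toy sketch of Asimov-style law precedence. Each law returns a verdict
# ("forbid", "require") or None; the first non-None verdict from the
# highest-priority law wins. The action dict keys are invented for this
# sketch -- real "harm detection" is the unsolved part.

def zeroth(action):
    return "forbid" if action.get("harms_humanity") else None

def first(action):
    return "forbid" if action.get("harms_human") else None

def second(action):
    return "require" if action.get("ordered_by_human") else None

def third(action):
    return "forbid" if action.get("harms_self") else None

LAWS = [zeroth, first, second, third]  # priority order

def judge(action):
    """Return the verdict of the highest-priority law that applies."""
    for law in LAWS:
        verdict = law(action)
        if verdict is not None:
            return verdict
    return "permit"

# An order that would harm a human is refused (First beats Second):
print(judge({"ordered_by_human": True, "harms_human": True}))  # forbid
# The same order with no harm involved must be obeyed:
print(judge({"ordered_by_human": True}))  # require
```

The precedence logic is trivial; every one of those predicates is the loophole, which is the point being made above.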
[editline]04:20PM[/editline]
[QUOTE=Soul-Chicken;22619474]i see you watched iRobot last night[/QUOTE]
He actually read [i]I, Robot[/i], because it goes much more in-depth into what the OP was saying.
[QUOTE=DudeGuyKT;22645113]
He actually read [I]I, Robot[/I], because it goes much more in-depth into what the OP was saying.[/QUOTE]
Actually, I've been thinking about this ever since I read the entire Robot series.
[QUOTE=wonkadonk;22619593]there should be a rule saying that the rules cannot be extrapolated[/QUOTE]
Well man that just depends on your perception of what is extrapolating