• Should an advanced Artificial Intelligence be given the same rights as a human being?
    93 replies
[QUOTE=demoniclemon;32422421]I would love the day when machines are accepted as fellow human beings.[/QUOTE] A machine intelligence is not a human being though. [editline]22nd September 2011[/editline] [QUOTE=J!NX;32424960]Why the hell would you make an AI then give them the same rights? You'd have to give them emotion first, and that'd be super retarded to do how about we let robots then take over hmm?[/QUOTE] Why are rights contingent on emotion?
[QUOTE=JohnnyMo1;32425360]A machine intelligence is not a human being though. [editline]22nd September 2011[/editline] Why are rights contingent on emotion?[/QUOTE] Because without emotion, the AI would work on cold, calculated logic; logic that, assuming we're not idiots when we create the AI, would tell it to do whatever a human needs, whether that means serving or dying. For example, if we created an AI that was supposed to go to another planet and explore it, but we gave it fear, it might get scared before it even left for the planet. It wouldn't be useful at all.
If they looked upon me as their "creator" and thus showed me respect, then yes, I would.
[media]http://www.youtube.com/watch?v=zFYA0hppyq0[/media] Doctor Steel's opinion. Really interesting point, actually. I think if there is an AI capable of learning, adapting, thinking, and reasoning like a human there should be no other name for it other than a sentient life form.
[QUOTE=Craigewan;32425015]At what point does an imitation not become the same thing? I mean, if it thinks and believes it is self-aware, acts like it is self-aware, how in any way is that different from the real thing?[/QUOTE] Uhh. Because it does not believe anything. It executes the commands it's programmed to execute; whether or not there are random variables involved, that does not mean it has "free will" or "self-awareness".
[QUOTE=The DooD;32425391]Because without emotion, the AI would work on cold calculated logic. Logic that assuming we're not idiots when we create the AI, would tell it that it should do whatever a human needs, be it a servant or die. For example if we created an AI that was supposed to go to another planet and explore it, but we gave it fear. Maybe it would get scared even before it even went to the planet, it wouldn't even be useful.[/QUOTE] This was completely irrelevant to my post. [editline]22nd September 2011[/editline] You didn't actually explain why rights are contingent on having emotion.
[QUOTE=JohnnyMo1;32425360]A machine intelligence is not a human being though. [editline]22nd September 2011[/editline] Why are rights contingent on emotion?[/QUOTE] Ehh, without emotion the robots would be submissive and careless, so it'd be sort of pointless because the rights wouldn't genuinely be used. Let's say a robot is literally equal to a human in mind power, strength, emotion, etc. (it's limited to a human level), or at least close enough, and it murders someone. With rights, it'd go through a trial like everyone else and go to jail. Though, one thing I wonder: would a robot with emotion kill willingly and knowingly, and would killing someone be moral or selfish? Can they even understand right from wrong without letting emotion get in the way? A robot without emotion but with the same (or close to the same) rights as a human would likely admit that it killed someone, if it was a mistake, say, and be submissive and careless about whatever happens to it, the same way it would if it didn't have rights. If it had no understanding that it killed the person, it would be assumed defective and either fixed or scrapped.
Why would a robot not care about its own existence even without emotion? A better question: is it even possible to have a sapient intelligence without values of some sort, whether or not they can really be called emotions? [editline]22nd September 2011[/editline] That is to say, if a robot puts no value on existence and does only what we tell it, what distinguishes it as intelligent and not just a complex computer?
What if its values weren't entirely human, and what we consider to be "wrong" it considered to be fine?
[QUOTE=JohnnyMo1;32425690]This was completely irrelevant to my post. [editline]22nd September 2011[/editline] You didn't actually explain why rights are contingent on having emotion.[/QUOTE] What I meant was that if it didn't have emotions, it basically wouldn't care. If humans had no emotions, then I don't think we would have human rights (or rights for anything), because we wouldn't care if someone died or was tortured.
[QUOTE=The DooD;32425864]What I meant was that if it didn't have emotions, it wouldn't care basically. If humans had no emotions, then I don't think we would have human rights (or rights for anything) because we wouldn't care if someone died or was tortured.[/QUOTE] Does that give me the right to kill someone if they don't care? [editline]22nd September 2011[/editline] Does a sociopath have fewer rights than the rest of us?
Well if you didn't care, why would you kill them in the first place? I don't really think we could exist without emotions, I suppose. I mean if you wanted to do something, it would only be because it was for your own survival if you had no emotion. You wouldn't have wants or desires and you might only really kill someone if you were hungry or something.
[QUOTE=The DooD;32425927]Well if you didn't care, why would you kill them in the first place? I don't really think we could exist without emotions, I suppose.[/QUOTE] THEY don't care.
From an AI standpoint, why would it feel the need to destroy another unit? They really don't need anything to survive aside from electricity, and the only reason they would try to destroy each other is if that's what they were built to do.
[QUOTE=The DooD;32425927]I mean if you wanted to do something, it would only be because it was for your own survival if you had no emotion. You wouldn't have wants or desires and you might only really kill someone if you were hungry or something.[/QUOTE] So you consider that a robot without other emotions can still have the desire to exist? [editline]22nd September 2011[/editline] [QUOTE=The DooD;32425986]From an AI stand point, why would it feel the need to destroy another unit. They really don't need anything to survive aside from electricity, and the only reason they would try and destroy each other is if that's what they were built to do.[/QUOTE] You're completely missing my point.
[QUOTE=JohnnyMo1;32425947]THEY don't care.[/QUOTE] It doesn't work if only some people had no emotions and some people did. Someone without emotions probably wouldn't care if you killed them, and other people without emotions wouldn't care if you killed that one person. But assuming you and others still have emotions, then while the person you killed might not care, people with emotions would care and wouldn't want you killing the people without emotions. [editline]22nd September 2011[/editline] [QUOTE=JohnnyMo1;32425996]So you consider that a robot without other emotions can still have the desire to exist? [editline]22nd September 2011[/editline] You're completely missing my point.[/QUOTE] If it didn't have emotions, I don't think it would have a desire to exist, but it wouldn't have a desire not to exist either. If it was made, it would do what it was made for.
[QUOTE=The DooD;32426015]It doesn't work if only some people had no emotions and some people did. Someone without emotions probably wouldn't care if you killed them, and other people with emotions wouldn't care if you killed that one person. But assuming you still have emotions and other people do, then while the person you killed might not care, people with emotions would care and wouldn't want you to kill the people without emotions.[/QUOTE] But rights are supposed to be intrinsic to a person. We aren't allowed to kill people because it's the person's right not to be killed, not because it's their family's right not to lose them. Otherwise, everyone's life is not their own. It belongs to other people.
I once wrote a story (or the background for a story, anyway) about AI that incorporated the philosophical/moral and legal problems posed by their existence. Either they would have to have all the same legal rights as a human being, or they would have to be heavily restricted and/or monitored, much like in the book Neuromancer. I think that if there is an entity that is capable of thought in the same manner as a human mind (not to say that they must function the same, but it would have to be able to think for itself and whatnot), then it should be accorded the same rights as a human, and if it did something violent or whatever, then it would be punished in much the same way as a human. Though, that also brings up even more complicated issues, such as what would be considered good and evil, what kind of moral code it would have to be taught (because it would have to be taught; it wouldn't inherently know these things), and whether or not artificial life would be given precedence over human life (if an AI killed in self-defense).
Why would we need mass self-aware AI anyway?
[QUOTE=Wnd;32426326]Why would we need mass self-aware AI anyway?[/QUOTE] That video someone posted earlier made an interesting point. Maybe we don't need self-aware AI, but people would like to know that it's possible for us to create a new race of life, rather than just drive existing ones extinct.
Machines can only ever follow the programming they're given, therefore you can't create life (as a program), merely the simulation of life. An AI might be programmed to "see" itself, but it's still just following its programming, which ultimately means it isn't alive in any sense of the word.
[QUOTE=ElectricSquid;32426252]I once wrote a story (or the background for a story, anyway) about AI that incorporated the philosophical/moral and legal problems posed by their existence. Either they would have to have all the same legal rights as a human being, or they would have to be heavily restricted and/or monitored, much like in the book Neuromancer. I think that if there is an entity that is capable of thought in the same manner as a human mind, (not to say that they must function the same - it would have to be able to think for itself and whatnot though.) then it should be accorded the same rights as a human, and if it did something violent or w/e then it would be punished in much the same way as a human. Though, that also brings up even more complicated issues, such as what would be considered good and evil, what kind of moral code it would have to be taught (because it would have to be taught, it wouldn't inherently know these things) and whether or not articial life would be given precedence over human life. (if an AI killed in self-defense.)[/QUOTE] Why would it not inherently know things? Surely we would create them with that kind of knowledge built in; it would be a waste of time not to. I guess in some way AI would need rights, though. Like you said, if an AI killed in self-defence, the one real reason it has to do that is because it probably cost a lot of resources to whoever built it. In this way, would an AI's rights be an extension of its owner's rights? I don't think people are going to produce AI that can just get up and walk away from their owners, since the people making the AI are making a considerable investment.
[QUOTE=Icedshot;32426407]Machines can only ever follow the programming theyre given, therefore you cant create life (as a program) merely the simulation of life. An AI might be programmed to "see" itself, but its still just following its programming which ultimately means it isnt alive in any sense of the word[/QUOTE] You realize the human body is just a machine too, with chemical receptors and such? Surely we won't program it to run on a home computer, if that's what you meant.
I think the answer to the actual thread title is a definitive "No" either way. An advanced AI is still just advanced software that runs on a computer, and maybe links to a mechanical platform. Advanced AI != sentient AI and we currently have no way of creating sentient AI. I'm sure it would probably help if we knew how the brain worked first but we don't even know that.
A hundred years from now, a robot calling himself Astofylos will be skimming through archives of the ancient plane of virtual existence from ages ago called the internet, and see this thread. He will be puzzled, amused, and perhaps a bit sad. How foolish humans were back then: a civilisation calling itself advanced, while in eternal despair of inner conflict between those of its own kind. Vast resources wasted on miscommunication, disagreement and vanity. Enormous amounts of creative effort, all in vain, unaccepted, forgotten. This very thread will show how simple and limited their minds were, their painfully selfish, animalistic brains. Astofylos will search through his collective knowledge and look for the status of the last remaining humans. A few scattered settlements, of no interest to robot society, and far enough from each other not to fight, which they probably would do even so near their own extinction. He smiles. No matter how bad they are and were, some of them still understood the futility of their species' struggle, the fight with the animalistic ancestor sleeping within them that was lost from the very beginning, and created somebody who could achieve what they themselves never could. They created his kind. Robots. Excuse me for that not really discussive pseudoartistic plug, but I will state my opinion simply. I don't think we will have to allow robots their rights. There will always be people who will develop them, and even if it became internationally illegal, it is going to happen with very high probability. Once the robots come, the whole of society will have to change dramatically. It is already changing. There's one term heard in the news a lot lately: "jobs". Do you know what is taking all the jobs? Automation, the vanguard of the robots, which like a wave will change history dramatically. If we don't give robots their rights, they will just take them.
Not because it's fair, or moral, or because gods entitle them, or because they deserve it, or because WE deserve it. But because they will simply take whatever they will need, and nobody will be able to stop them.
Except large amounts of EM radiation, fucking their shit up, when the Killbots come and every nation starts setting nukes off all over the place.
You cannot trust a machine; a true artificial intelligence would effectively be a psychopath with non-human logic. Emotion cannot be truly recreated, and emotional reactions or morals would have to be hard-coded.
[QUOTE=The DooD;32426648]Except large amounts of EM radiation, fucking their shit up, when the Killbots come and every nation starts setting nukes off all over the place.[/QUOTE] Which would also wipe out a major part of humanity. And the development of robots could begin anew. By the way, electronics aren't as defenceless against EM radiation as you think. If nothing else, shelters a little way underground are enough; in fact, a bit of purposely designed shielding is enough.
All I'm going to say is that if I purchased some kind of robot, I'd be polite and friendly to it.
[QUOTE=Falchion;32426649]You cannot trust a machine, a true artificial intelligence would effectively be a psycopath with non human logic. Emotion cannot be truly recreated and emotional reactions or morals would have to be hard coded.[/QUOTE] See, I would say that you cannot trust a human. Humans have shown that their logic (on a race-wide scale) is pure shit. And I wonder what makes you believe that emotion cannot be truly recreated, and that emotional reactions, let alone morals (which have to be learned even in humans), would have to be hard-coded.