[QUOTE=Killuah;34467770]Actually they wouldn't.
We are using genetic algorithms in high performance computation already.
The most advanced A.I.s nowadays are not "born" with knowledge, they learn. [/QUOTE]
Actually they wouldn't what, be sophisticated? Of course a robot would be a highly sophisticated one if it had a pair of 3D-sensing eyes like ours, as well as smell, touch and other senses, with mobile limbs and some elastic muscles.
I haven't seen one yet. Maybe you have, and maybe that's some "basic stuff" we've already been doing. I don't fucking know.
And neither are humans born with knowledge. We are just exposed to a whole lot of it, we learn.
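The quoted post's mention of genetic algorithms is easy to make concrete. Below is a minimal, illustrative sketch (the function name and parameters are mine, not from any real HPC library): a population of bit-strings is scored, the fitter half survives, and mutated copies of the survivors fill the gap — "learning" without any built-in knowledge.

```python
import random

def evolve(fitness, length=8, pop_size=20, generations=50, mutation_rate=0.1):
    """Keep the fitter half of a bit-string population each generation and
    refill it with mutated copies of the survivors."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # best candidates first
        survivors = pop[:pop_size // 2]
        # Each survivor spawns one child; bits flip with probability mutation_rate.
        children = [[bit ^ (random.random() < mutation_rate) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is just the number of 1-bits,
# so selection should drive the strings toward all ones.
best = evolve(fitness=sum)
```

Nothing here is "born" knowing the answer; the all-ones string emerges from selection pressure alone, which is the point the quoted post was making.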
Can you guys maybe condense what you're arguing to a few key points?
I can't even tell if you are arguing for or against an AI any more.
We give animals rights, albeit not the same as humans', but rights nonetheless. I can see this being what happens with self-aware or sentient AI: we give them rights, but they may bear no resemblance to human rights.
A person and a human are different things in my eyes.
Which is why I'd say yes.
[QUOTE=kidwithsword;34461903]This is recently a question I was pondering due to my playing Fallout 3 and watching Blade Runner. Let's say there is a sentient AI.
-It is assembled by human beings using various mechanical parts that humans manufacture (usually with the help of machines)
-Human beings write vast amounts of instructions that tell the AI what to do under various conditions
-If it is sentient, the AI is most likely designed to write its own instructions based on the inputs given it
Now we must consider how this sentience is different from human sentience. Human consciousness is a product of mostly chemical processes that occur in the brain. The AI uses electrical impulses in much the same way; however, it uses circuitry and software to handle those inputs, while the human mind is purely biological.
I am of the opinion that sentient AI cannot have rights because they are simply machines. Any emotion or longing to be human would just be a simulation of these thoughts running on a highly sophisticated piece of software that someone designed.
And I did see iRobot.
And in Fallout 3 [sp]I did save the android.[/sp] It just wouldn't make sense in real life.[/QUOTE]
What are human emotions, though? They're just chemical processes and varying concentrations of different chemicals. Why is having a program decide what goes on less 'human'?
[QUOTE=Andokool12;34469112]That's this thread. This thread is hypothetical. It is based in a time in which not only do we understand the nature of our own consciousness, but we know how to create consciousness, or sentience in other words, for AI.[/QUOTE]
No, that's not the point. [B]Consciousness is not ontologically basic.[/B] You don't flip a switch between "sentient" and "non-sentient" without massively changing the structure of the machine.
[editline]31st January 2012[/editline]
[QUOTE=matsta;34473888]We have to identify something "out there" with consciousness. This step is where most things can go wrong, since a human being doesn't find consciousness "out there" with the rest of all things (like, for example, a brain). Once we find some brain process that is related to consciousness, it does take some faith to be sure that consciousness lies in that and only that specific process.[/QUOTE]
so basically you believe in souls
ahahahahah wow
Meh... First of all, we need a robot that is willing to listen and able to understand the rights we want to give it. A robot you could talk to, basically.
Not gonna happen for a long time, if ever.
[QUOTE=Gekkosan;34480668]Meh... First of all, we need a robot that is willing to listen and able to understand the rights we want to give it. A robot you could talk to, basically.
Not gonna happen for a long time, if ever.[/QUOTE]
Do you not understand what hypothetically means?
[QUOTE=mobrockers2;34474729]How do you propose creating a sentient, learning AI, capable of expanding its own programming, that will only ever fulfill one purpose? A 'machine' that can think for itself, can learn on its own. By limiting the machine to fit a purpose you are limiting its sentience. You people keep talking about limiting the machine, making the machine only to fit a purpose, programming the machine so that it cannot attack humans. This makes no sense; an AI cannot be sentient if we limit it in such a way. We're talking about the hypothetical situation that some day we might create an AI that thinks, acts and learns the same way we do.[/QUOTE]
I'm not talking about limiting the machine (like programming it to avoid attacking humans), that's stupid. I'm saying that a machine (even one that HAS A.I.) is different from a human being because, if we ever create a machine like that, it would be to fulfil a certain purpose.
Human beings, in creating something, always think about its purpose, and we would (even if we don't want to) give the machine a purpose. The machine would be designed intentionally, unlike us. Essentially we would be 'like god' for the machine, meaning that, IF god does exist and created us for some purpose, we exist only to fulfil that purpose (actually, this is what most religious people believe).
I don't believe that, and that's why I think human rights should be respected. But if I'm wrong, then human rights are nothing compared to the sole purpose of our existence and could be violated if necessary to fulfil that purpose. Now apply that to robots, with 'us' as 'god'.
[editline]31st January 2012[/editline]
[QUOTE=DainBramageStudios;34480533]so basically you believe in souls
ahahahahah wow[/QUOTE]
So basically you didn't understand anything of what I said. Ahahahaha, wow.
[QUOTE=matsta;34484307]I'm not talking about limiting the machine (like programming it to avoid attacking humans), that's stupid. I'm saying that a machine (even one that HAS A.I.) is different from a human being because, if we ever create a machine like that, it would be to fulfil a certain purpose.
Human beings, in creating something, always think about its purpose, and we would (even if we don't want to) give the machine a purpose. The machine would be designed intentionally, unlike us. Essentially we would be 'like god' for the machine, meaning that, IF god does exist and created us for some purpose, we exist only to fulfil that purpose (actually, this is what most religious people believe).
I don't believe that, and that's why I think human rights should be respected. But if I'm wrong, then human rights are nothing compared to the sole purpose of our existence and could be violated if necessary to fulfil that purpose. Now apply that to robots, with 'us' as 'god'.
[editline]31st January 2012[/editline]
So basically you didn't understand anything of what I said. Ahahahaha, wow.[/QUOTE]
How would you propose giving these machines this purpose without restricting what they can and cannot do, and thus restricting their sentience?
[QUOTE=mobrockers2;34484422]How would you propose giving these machines this purpose without restricting what they can and cannot do, and thus restricting their sentience?[/QUOTE]
I'm not talking about restricting their sentience. The debate is about whether they have 'human rights' or not; I am assuming that they have sentience already.
Just think about how the world is for religious people. If 'God' tells Abraham to kill his son, then he feels he has the moral obligation to do it, because he exists for the sole purpose of fulfilling god's will. Every human right pales in comparison with god's will, because humans are god's design. God is the 'architect' of human beings; he created us intentionally.
Now change 'god' with 'us' and 'human' with 'sentient machine'.
[QUOTE=matsta;34484560]I'm not talking about restricting their sentience. The debate is about whether they have 'human rights' or not; I am assuming that they have sentience already.
Just think about how the world is for religious people. If 'God' tells Abraham to kill his son, then he feels he has the moral obligation to do it, because he exists for the sole purpose of fulfilling god's will. Every human right pales in comparison with god's will, because humans are god's design. God is the 'architect' of human beings; he created us intentionally.
Now change 'god' with 'us' and 'human' with 'sentient machine'.[/QUOTE]
You're taking a story and projecting it on our society.
Just think about how in Battlestar Galactica the machines revolted because they were denied basic rights, driving their creators from their home planets and committing almost complete genocide.
Now change Battlestar Galactica with us and machines with machines.
Two can play your game.
[QUOTE=mobrockers2;34484673]You're taking a story and projecting it on our society.
Just think about how in Battlestar Galactica the machines revolted because they were denied basic rights, driving their creators from their home planets and committing almost complete genocide.
Now change Battlestar Galactica with us and machines with machines.
Two can play your game.[/QUOTE]
Actually, I'm taking a reality. When someone asks if A.I. machines should have human rights or not, I take that question as a moral one. I don't care if they later 'get upset' because they don't have them.
I think that 'being sentient' isn't everything that makes a human being. It isn't even the reason why human beings have 'human rights' and fight for them. (I posted what I believe about human rights in my first post.)
[QUOTE=matsta;34484745]Actually, I'm taking a reality. When someone asks if A.I. machines should have human rights or not, I take that question as a moral one. I don't care if they later 'get upset' because they don't have them.[/QUOTE]
We're not talking about human rights, that would make no sense. Most human rights won't apply to androids.
And what do you mean by you're taking a reality? You keep talking about god.
[QUOTE=mobrockers2;34484784]We're not talking about human rights, that would make no sense. Most human rights won't apply to androids.
And what do you mean by you're taking a reality? You keep talking about god.[/QUOTE]
Omg, I use god as an example, then my argument is invalid?
[QUOTE=matsta;34484817]Omg, I use god as an example, then my argument is invalid?[/QUOTE]
Yes.
[QUOTE=mobrockers2;34484833]Yes.[/QUOTE]
Ok, it seems you have some problems understanding, so I'll keep it simple.
We have human rights because we perceive ourselves as 'ends' and not as 'means'. If you are religious (I'm not, btw) you may accept that, as we are an intentional design of God, we are 'means' for him, and he may use us to fulfil his will, and we must obey and respect him (for the reasons I gave in my third post). This is not the case when two people interact. When they do, it is immoral to use the other as 'means' for something.
BUT, for A.I. machines this is quite different. In fact, it resembles the 'god example' (EXAMPLE -.-). If we created sentient machines intentionally and used them as 'means', it wouldn't be immoral, because they are our design; they DO have a definite purpose (or many), unlike us. We would probably create them TO USE THEM AS MEANS; thinking we would create a sentient machine 'for the lulz' and then not use it for anything is just... stupid.
[QUOTE=matsta;34485018]Ok, it seems you have some problems understanding, so I'll keep it simple.
We have human rights because we perceive ourselves as 'ends' and not as 'means'. If you are religious (I'm not, btw) you may accept that, as we are an intentional design of God, we are 'means' for him, and he may use us to fulfil his will, and we must obey and respect him (for the reasons I gave in my third post). This is not the case when two people interact. When they do, it is immoral to use the other as 'means' for something.
BUT, for A.I. machines this is quite different. In fact, it resembles the 'god example' (EXAMPLE -.-). If we created sentient machines intentionally and used them as 'means', it wouldn't be immoral, because they are our design; they DO have a definite purpose (or many), unlike us. We would probably create them TO USE THEM AS MEANS; thinking we would create a sentient machine 'for the lulz' and then not use it for anything is just... stupid.[/QUOTE]
There is no reason for us to create fully sentient machines other than because we can (some day). There is no reason at all that we would require machines that can think and act exactly as we can. Sure, sophisticated AI is very useful, but true sentience isn't needed.
The thing is that you're still seeing it as a thing, while you should see it as a new being, a new species, one that we created. You cannot claim ownership over a truly sentient being any more than you can over a human being; they need rights to ensure this.
[QUOTE=mobrockers2;34485229]There is no reason for us to create fully sentient machines other than because we can (some day). There is no reason at all that we would require machines that can think and act exactly as we can. Sure, sophisticated AI is very useful, but true sentience isn't needed.
The thing is that you're still seeing it as a thing, while you should see it as a new being, a new species, one that we created. You cannot claim ownership over a truly sentient being any more than you can over a human being; they need rights to ensure this.[/QUOTE]
You can't claim ownership over a sentient being, but somehow, for unknown and mysterious reasons, we own dogs and cats and all sorts of sentient beings. And in many countries many animals are used as means (take cows, for example).
uhhhh
all animals have a purpose
to reproduce
That's what you're built to do. That's how evolution designs things. Everything about you is designed to make it more likely for you to reproduce.
You do have a purpose. Just because you don't feel like that should be your purpose doesn't mean it isn't.
[editline]31st January 2012[/editline]
[QUOTE=matsta;34485297]You can't claim ownership over a sentient being, but somehow, for unknown and mysterious reasons, we own dogs and cats and all sorts of sentient beings. And in many countries many animals are used as means (take cows, for example).[/QUOTE]
because they're incapable of functioning socially with humans
a self aware AI would be capable of functioning socially with humans
next
[QUOTE=Killuah;34465439]Yes.
go read some Philip K. Dick
[editline]30th January 2012[/editline]
Machines manufactured by humans and humans only are not life. A big part of the definition of life is that it recreates itself.
A machine capable of gathering the necessary resources from the system it is located in and recreating itself by using those would be called "life" for the given system.
So a robot in a lab that finds parts you throw into its box and recreates itself, albeit with "mutations", can be called "living" within the system of the box.
Conclusion:
A robot on earth can be called "living" as soon as it can be self-dependent (its "lifeform", of course; the entire "robotnity", just as not every human is self-dependent but humanity is).
Self-awareness comes with "life" to a certain extent. Even a bacterium is "self-aware" in the way that there is an "in" and an "out", and the bacterium takes resources in and out.[/QUOTE]
The only way a robot would be a part of life is if it was capable of evolving enough intelligence (either from reading human studies or by itself) to recreate and advance similar robots, basically evolving even further; then it'd technically be alive and self-aware.
If so, the AI would've proven that it has some sort of value for advancing life as we know it, and should be treated as an equal.
If, however, the AI is unable to advance itself, it won't have any value besides the value we've given it, and it won't be a separate lifeform, but simply a creation.
Yes/no?
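Killuah's "robot in a box" criterion — life as something that gathers resources and recreates itself, with mutations — can be sketched as a toy simulation. Everything here is illustrative (the part costs, mutation rate and "efficiency" trait are made up), but it shows the loop the post describes: consume resources, copy yourself imperfectly, repeat until the box runs dry.

```python
import random

def run_box(parts=100, copy_cost=10, mutation_rate=0.2, seed=1):
    """Toy 'robot in a box': machines turn spare parts into imperfect
    copies of themselves until the parts run out."""
    rng = random.Random(seed)
    population = [{"efficiency": 1.0}]   # a single ancestor machine
    while parts >= copy_cost:
        parent = rng.choice(population)
        parts -= copy_cost               # copying consumes resources
        child = dict(parent)
        if rng.random() < mutation_rate:
            # Imperfect copying: the descendant differs slightly.
            child["efficiency"] *= rng.uniform(0.9, 1.1)
        population.append(child)
    return population

colony = run_box()   # 100 parts / 10 per copy -> 10 copies + 1 ancestor
```

By Killuah's definition this colony is "living" within the box, and the accumulated differences in the copies are the "mutations" his post refers to — whether that earns it rights is exactly what the thread is arguing about.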
[QUOTE=Mr. Scorpio;34485342]uhhhh
all animals have a purpose
to reproduce
That's what you're built to do. That's how evolution designs things. Everything about you is designed to make it more likely for you to reproduce.
You do have a purpose. Just because you don't feel like that should be your purpose doesn't mean it isn't.
[editline]31st January 2012[/editline]
because they're incapable of functioning socially with humans
a self aware AI would be capable of functioning socially with humans
next[/QUOTE]
We are designed to reproduce, but the design is not 'intentional' (as I also mentioned in one of my posts).
[QUOTE=Tools;34485368]The only way at a robot would be a part of life is if it was capable of evolving enough intelligence (either from reading human studies or by itself) to recreate and advance similiar robots, basically evolve even further, it'd technically be alive and self-aware.
If so, the AI would've proven that it has some sorth of value for advancing life as we know it, and should be threated as equal.
If however, the AI's unable to advance itself, it won't have a value besides what value we've given it, and it won't be a seperate lifeform, but simply a creation.
Yes/no?[/QUOTE]
how can you have a listhp when you're typing
how does that even work
[editline]31st January 2012[/editline]
[QUOTE=matsta;34485382]We are designed to reproduce, but the design is not 'intentional' (as I also mentioned in one of my posts).[/QUOTE]
And? I give two tugs of a dead dog's cock if it's intentional or not why exactly?
[QUOTE=Mr. Scorpio;34485384]And? I give two tugs of a dead dog's cock if it's intentional or not why exactly?[/QUOTE]
You should also give two tugs of a dead dog's cock about rights in a moral sense. If you are talking about rights in a 'practical' sense, then you're not talking about rights.
If you think morally, we should do things for a reason. If we do have a purpose (if we were designed as part of a kind of 'masterplan' instead of being just an accident), what we SHOULD do is fulfil that purpose.
[QUOTE=matsta;34485623]You should also give two tugs of a dead dog's cock about rights in a moral sense. If you are talking about rights in a 'practical' sense, then you're not talking about rights.
If you think morally, we should do things for a reason. If we do have a purpose (if we were designed as part of a kind of 'masterplan' instead of being just an accident), what we SHOULD do is fulfil that purpose.[/QUOTE]
And our purpose is to reproduce. Which we clearly accomplish, given the state of humanity and its prestigious nature.
What does this have to do with rights?
You guys also gotta think about the fact that if the robot knows it's thinking and that it exists, in a way that it wasn't programmed to force those thoughts, then its self-awareness could cause it to have major depression regarding its existence if not dealt with properly.
[QUOTE=Mr. Scorpio;34485737]And our purpose is to reproduce. Which we clearly accomplish, given the state of humanity and its prestigious nature.
What does this have to do with rights?[/QUOTE]
Rights are supposed to be founded on moral principles.
[QUOTE=matsta;34487953]Rights are supposed to be founded on moral principles.[/QUOTE]
So?
You're either pretty slow at relating things or you're trolling. I won't respond either way.
[QUOTE=matsta;34488941]You're either pretty slow at relating things or you're trolling. I won't respond either way.[/QUOTE]
Or maybe you're just terrible at English because it's a second language for you, and it just might be conceivable that you aren't capable of communicating your ideas clearly because of that.
or im just trolling you