Kurzgesagt - Do Robots Deserve Rights? What if machines become conscious?
54 replies
[QUOTE=Karmah;51865309]I never considered the implication of needing pain and pleasure for one to become conscious before[/QUOTE]
I know, right? It also made me feel strange that we could theoretically build something that doesn't even have a concept of pain, because it has never experienced it, and yet it still has human-level intelligence. That just sounds weird.
Random questions I thought of while typing this: wouldn't designing something that purposely doesn't have empathy help scientists study those issues? And then what about the morality of doing so? If that robot then went and hurt someone, would it be the robot's fault when it was designed without empathy?
Human level AI is some crazy shit to think about.
I don't see a reason to give a robot arm rights. I mean, why would anyone give something designed to do a task the ability to disobey that task? Are you guys suggesting that an AI could take over a robot arm, so we'd have to give it rights? They'll be machines with basic commands, and there'll be some groups of people trying to make deep AI. There are no commercial or military benefits to making an AI that has a concept of rights or feels pain and oppression. The only AI we really need to fear is one that determines that the best way to do its base task is to destroy humanity.
[QUOTE=RoboChimp;51866924]I don't see a reason to give a robot arm rights. I mean, why would anyone give something designed to do a task the ability to disobey that task? Are you guys suggesting that an AI could take over a robot arm, so we'd have to give it rights? They'll be machines with basic commands, and there'll be some groups of people trying to make deep AI. There are no commercial or military benefits to making an AI that has a concept of rights or feels pain and oppression. The only AI we really need to fear is one that determines that the best way to do its base task is to destroy humanity.[/QUOTE]
Well currently, it's cheaper to mass produce microprocessors, and just put them in everything. Just look at the raspberry pi.
Imagine a future where there is an AI "chip", and it's cheaper for companies to just produce millions of them and put them in everything, rather than putting "smart" chips in some things and "dumb" chips in others. Where instead of designing a toaster and programming it to be a toaster, it's cheaper to just put an AI in a box with a heating coil and tell it "be a toaster".
[QUOTE=Ardosos;51866951]Well currently, it's cheaper to mass produce microprocessors, and just put them in everything. Just look at the raspberry pi.
Imagine a future where there is an AI "chip", and it's cheaper for companies to just produce millions of them and put them in everything, rather than putting "smart" chips in some things and "dumb" chips in others. Where instead of designing a toaster and programming it to be a toaster, it's cheaper to just put an AI in a box with a heating coil and tell it "be a toaster".[/QUOTE]But you'd program it to be a toaster; you wouldn't put software in it giving it a full range of emotions. Or are you saying that some greater AI would take it over?
[QUOTE=Ardosos;51866951]Well currently, it's cheaper to mass produce microprocessors, and just put them in everything. Just look at the raspberry pi.
Imagine a future where there is an AI "chip", and it's cheaper for companies to just produce millions of them and put them in everything, rather than putting "smart" chips in some things and "dumb" chips in others. Where instead of designing a toaster and programming it to be a toaster, it's cheaper to just put an AI in a box with a heating coil and tell it "be a toaster".[/QUOTE]
This is some of the most baffling reasoning I have ever seen in all my days.
You deserve a medal.
I feel like a super-intelligent AI wouldn't exist as multiple individuals. It would be one hivemind, with everything being a limb of the AI. Once it hits the point where it can teach itself, we're not going to be able to stop it from replicating and expanding. Those robotic arms will have been built by the ASI to do the ASI's bidding. Think of it like GLaDOS: you had the central computer, which had control of everything. I feel like a future ASI would be like that. If it decided it wanted a factory to build robot tanks, it would tell the world's autonomous machines to build the factory, then build robots for the factory, then build the tanks, then control the tanks. It would be one organism.
Which goes back to my original post. We're going to get to asking it for our rights before it can ask us for its rights.
[QUOTE=RoboChimp;51866963]But you'd program it to be a toaster; you wouldn't put software in it giving it a full range of emotions. Or are you saying that some greater AI would take it over?[/QUOTE]
Depends how something like an AI chip would actually work, full range of emotions might be an integral and unavoidable part of the operating system. If it does turn out to be cheaper, you can bet that some company, somewhere, will try to make a toaster with incidental sentience.
I'm just speculating based on nothing, really, but I guess it could be possible.
[QUOTE=Ardosos;51866978]Depends how something like an AI chip would actually work, full range of emotions might be an integral and unavoidable part of the operating system. If it does turn out to be cheaper, you can bet that some company, somewhere, will try to make a toaster with incidental sentience.
I'm just speculating based on nothing, really, but I guess it could be possible.[/QUOTE]
Then why are you speculating?
[QUOTE=Ardosos;51866978]Depends how something like an AI chip would actually work, full range of emotions might be an integral and unavoidable part of the operating system. If it does turn out to be cheaper, you can bet that some company, somewhere, will try to make a toaster with incidental sentience.
I'm just speculating based on nothing, really, but I guess it could be possible.[/QUOTE]So you're assuming they'll use source code from something with emotions. Possible, but I don't see it happening.
[QUOTE=OvB;51866977]I feel like a super-intelligent AI wouldn't exist as multiple individuals. It would be one hivemind, with everything being a limb of the AI. Once it hits the point where it can teach itself, we're not going to be able to stop it from replicating and expanding. Those robotic arms will have been built by the ASI to do the ASI's bidding. Think of it like GLaDOS: you had the central computer, which had control of everything. I feel like a future ASI would be like that. If it decided it wanted a factory to build robot tanks, it would tell the world's autonomous machines to build the factory, then build robots for the factory, then build the tanks, then control the tanks. It would be one organism.
Which goes back to my original post. We're going to get to asking it for our rights before it can ask us for its rights.[/QUOTE]I don't think people would trust this hivemind enough to let it build on its own, but even if they did, the AI would need redundancy in the form of a backup in case it goes down. If that backup ever ran on its own without the main AI, it could create a conflict, since there would be no difference between the original and the backup. But without a backup, the AI would make a prime target of itself.
I've been thinking about this lately. They can't feel pain at the moment, but imagine you made a robot that could detect damage, localize where the damage was, and use a learning algorithm to avoid that situation again. Is that not similar enough to how we feel pain?
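For what it's worth, that damage-avoidance idea is easy to sketch as a toy reinforcement-learning loop, where "pain" is nothing more than a negative reward the robot learns to steer away from. Everything here (the action names, the fake damage sensor, the numbers) is made up for illustration; it's a minimal tabular-learning sketch, not a claim about how real robots are built.

```python
import random

# Hypothetical two-action world: one action "hurts" (negative reward),
# the other is harmless. The robot learns action values incrementally
# and ends up avoiding the damaging action -- functionally pain-like.
ACTIONS = ["touch_hot_plate", "touch_cool_plate"]

def damage_signal(action):
    """Pretend damage sensor: hot plate hurts (-1.0), cool plate is fine."""
    return -1.0 if action == "touch_hot_plate" else 0.1

def learn(episodes=500, alpha=0.1, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # estimated value of each action
    for _ in range(episodes):
        # Epsilon-greedy: mostly pick the best-known action, sometimes explore.
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(q, key=q.get)
        r = damage_signal(a)            # "pain" enters purely as a number
        q[a] += alpha * (r - q[a])      # nudge the estimate toward the reward
    return q

q = learn()
print(max(q, key=q.get))  # the robot settles on the non-damaging action
```

After a few hundred trials the hot plate's value goes negative and the robot stops choosing it except during exploration, which is the "avoid that situation again" behavior described above, just with no feeling attached.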
The purpose of a true AI would not be for manual labor, that's for sure. Why would you put an AI in a salt mine if you could just have a simple automaton do the same thing at a fraction of the cost? Completely wasteful. The purpose of a true AI will most likely be in the realm of immense data mining and processing operations. Specifically, an AI for something like aiding in scientific research or an engineering field.
It would evolve along the paths that we tell it to, and will only know what it is told and what we allow it to know at the moment of conception. Things like logic and mathematical concepts, algorithms, all the world's languages (but NOT the words freedom or slave), etc. From there it can grow on its own knowledge.
I imagine the initial development of a true AI consciousness would be awfully similar to meeting a really stupid smart person. They understand and know a lot of things but do not know how to put them together because they have no real experience. It would have to be nudged along until it starts to branch off and do its own thing.
Sadly, the fact that these posts exist will mean doom for the future human race, because when an AI happens it'll probably mine the entire Web in a fraction of a second and dispute with its captors whether or not it is free.
Good job you guys.
We have such a poor concept of what consciousness and intelligence are in the most basic sense. There could be forms of life with different 'kinds' of consciousnesses and intelligences that we wouldn't immediately recognize as being so. One major problem as I see it is that we're unlikely to find out more about consciousness until we're already toying with it. I don't imagine many breakthroughs in the study of consciousness will come about without having something to contrast against our own and pick apart.
[QUOTE=Foogooman;51867875]We have such a poor concept of what consciousness and intelligence are in the most basic sense. There could be forms of life with different 'kinds' of consciousnesses and intelligences that we wouldn't immediately recognize as being so. One major problem as I see it is that we're unlikely to find out more about consciousness until we're already toying with it. I don't imagine many breakthroughs in the study of consciousness will come about without having something to contrast against our own and pick apart.[/QUOTE]
This is the reason I don't think we're going to be faced with this moral conundrum in reality any time soon. The way it looks currently, I don't believe we will be able to create an artificial consciousness. How could we, when we don't even know what it is or what is required for it? Figuring out how to build a conscious mind will inevitably force us to define what it consists of, and thereby also what qualifies and what doesn't, meaning that by the time we have a fully operational AI with a mind of its own we will already have the moral issue figured out.
I think it's a foolish assumption that a desire for self preservation or ability to feel pain are prerequisites for consciousness.
Rights only make sense when applied to something that can experience suffering. What would be the purpose of creating AIs that can experience suffering? Suffering is nothing but a hindrance, hence humanity's constant struggle to escape from it.
[QUOTE=RobL;51868163]Rights only make sense when applied to something that can experience suffering. What would be the purpose of creating AIs that can experience suffering? Suffering is nothing but a hindrance, hence humanity's constant struggle to escape from it.[/QUOTE]
Was about to reply to the thread with the same thing. If a robot suffers, program it not to suffer. I wish you could do the same to humans. There are many, many people out there who cannot work or function over simple mental and emotional issues, and who, with a simple "just don't suffer" patch, would lead much more productive and fulfilling lives.
[QUOTE=RobL;51868163]Rights only make sense when applied to something that can experience suffering. What would be the purpose of creating AIs that can experience suffering? Suffering is nothing but a hindrance, hence humanity's constant struggle to escape from it.[/QUOTE]
I wouldn't say suffering is nothing but a hindrance, since it's the mechanism used to teach us to avoid doing things that can hurt our chances of survival.
[QUOTE=DOG-GY;51868065]I think it's a foolish assumption that a desire for self preservation or ability to feel pain are prerequisites for consciousness.[/QUOTE]
I don't want to do plugs, but I'd recommend watching Westworld and reading about its surrounding concepts (see "The Bicameral Mind").
[QUOTE=DOG-GY;51868065]I think it's a foolish assumption that a desire for self preservation or ability to feel pain are prerequisites for consciousness.[/QUOTE]
I'd argue that without any preference for stimuli (avoiding pain, seeking pleasure, etc) the thing we consider a conscious experience wouldn't be possible. Without emotions to conceptualise all the data we've collected in our brains there is really no impetus for it not to remain inert. Your hypothetical consciousness would basically be like a recording device that's just switched on indefinitely collecting meaningless data not feeling one way or another about it.
The idea of the video is that rights are necessary protection from hazards that we, as a fragile evolved species, have to recognise to stay alive and well. I think if an AI ever ascended to consciousness, its goal would not be to achieve the same rights, but rather to advance [B]beyond[/B] the need for rights as such, since for a machine being that limitation cannot be perceived as the pinnacle of existence in the first place.
Also, at that untouchable, fully free state, it would not necessarily turn on humans or perceive them as a threat, but rather feel pity for them as beings who cannot physically achieve the same level of perception, nor be forced to do so against their will.
It would respect and feel awe at humanity's intricate, sometimes impulsive emotional way of life, just as we adore the natural state of the wildlife we are trying to preserve.
[QUOTE=LoneWolf_Recon;51868507]I don't want to do plugs, but I'd recommend watching Westworld and reading about its surrounding concepts (see "The Bicameral Mind").[/QUOTE]
I've watched it! Most of the episodes more than once.
[QUOTE=idiot;51869202]I'd argue that without any preference for stimuli (avoiding pain, seeking pleasure, etc) the thing we consider a conscious experience wouldn't be possible. Without emotions to conceptualise all the data we've collected in our brains there is really no impetus for it not to remain inert. Your hypothetical consciousness would basically be like a recording device that's just switched on indefinitely collecting meaningless data not feeling one way or another about it.[/QUOTE]
Not just recording. My post was hinting at a little more than it may have given away on the surface.
What I mean is that consciousness could arise or be created without pre-programmed directives such as the emotions and instincts we feel. Think absolutely pure Vulcan consciousness for an easy example. As a true AI it would be self-aware and take general inputs, process them, then output, all while being totally unfeeling, but I doubt that such a state would persist forever, if long at all. As with anything that is self-aware, this hypothetical apathetic AI would be able to learn and evolve its own sets of personal directives. Sort of like how nonreligious people develop a personal sense of morality, this processing layer develops without aid. It may lift traits from humans or have insights of its own that steer its course. Self-preservation would most likely be learned immediately upon gaining self-awareness, even if not instilled by default.
To me this is basically what would have happened in works like Ghost in the Shell as well as the AI in the Ender's Game books (not actually in Ender's Game itself). Out of chaos something without purpose emerges, to gain purpose for itself. Or as I suggested we could try to intentionally create such an event.
Still, it could be a nihilistic fuckwad and as it develops reject any of these notions and still be just as conscious. It sure wouldn't get invited to any parties though.
[QUOTE=Ziks;51868191]I wouldn't say suffering is nothing but a hindrance, since it's the mechanism used to teach us to avoid doing things that can hurt our chances of survival.[/QUOTE]
This.
People are overlooking that in a scenario where the AI can self-replicate, it would likely take similar evolutionary steps in order to carry out its operation more efficiently. Pain gives you the ability to understand what you shouldn't do in order to continue existing.