• Kurzgesagt - Do Robots Deserve Rights? What if machines become conscious?
    54 replies
[video=youtube;DHyUYg8X31c]http://www.youtube.com/watch?v=DHyUYg8X31c[/video]
Best case? Robots never advance to the point where this question becomes important. Worst case? Matrix. Terminator. BSG.
If you ask me, the Brits have their heads on straight! Omnic rights? Pah
[QUOTE=S31-Syntax;51863822]Best case? Robots never advance to the point where this question becomes important. Worst case? Matrix. Terminator. BSG.[/QUOTE] For some reason I read it as "Roblox" even though I JUST read the title correctly.
[QUOTE=S31-Syntax;51863822]Best case? Robots never advance to the point where this question becomes important. Worst case? Matrix. Terminator. BSG.[/QUOTE] That's such a narrow-minded way of thinking; there won't be these us/them binary outcomes. We'll have some form of general intelligence in the near future. The most immediate route will be mapping the human brain onto a logical neural network in hardware/software. Of course this assumes that consciousness arises from sufficiently advanced networks. And that'll only be the latest in the line of reformations of our philosophical thinking. As the video states, over history we've reformed our thinking on rights to include not only our fellow man (previously enslaved) and our female counterparts (suffrage) but animals too. However, the only thing I disagree with in the video is the "economic interest" argument for not giving robots rights when the robot clearly wouldn't have any consciousness in the first place. [B]i.e. Why should we give high-end neural network hardware/software to a robot arm in a car factory when a low-level microcontroller and simple looped code could do the same thing?[/B]
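The bolded question has a concrete flavor: a fixed-function factory controller is just a loop over hard-coded poses, with no learning anywhere. A minimal sketch of that idea (all names and pose values are invented for illustration; a stand-in logger plays the role of real hardware):

```python
# Minimal sketch: a fixed control loop for a pick-and-place arm.
# No neural network, no learning; just a repeated sequence of poses.

PICK_POSE = (0.2, 0.5, 0.1)    # joint targets, arbitrary units
PLACE_POSE = (0.8, 0.3, 0.1)

def run_cycle(move_to, grip):
    """One pick-and-place cycle: move, close gripper, move, open."""
    move_to(PICK_POSE)
    grip(True)
    move_to(PLACE_POSE)
    grip(False)

def run(cycles, move_to, grip):
    for _ in range(cycles):
        run_cycle(move_to, grip)

# A trivial stand-in for real hardware that just records commands.
log = []
run(2, move_to=lambda pose: log.append(("move", pose)),
       grip=lambda closed: log.append(("grip", closed)))
```

The point being made: the task is fully specified in advance, so nothing here would benefit from general intelligence, let alone consciousness.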
The bigger question is whether or not a super intelligent AI will allow us to keep [I]our[/I] rights.
[QUOTE=LoneWolf_Recon;51864668]That's such a narrow-minded way of thinking; there won't be these us/them binary outcomes. We'll have some form of general intelligence in the near future. The most immediate route will be mapping the human brain onto a logical neural network in hardware/software. Of course this assumes that consciousness arises from sufficiently advanced networks. And that'll only be the latest in the line of reformations of our philosophical thinking. As the video states, over history we've reformed our thinking on rights to include not only our fellow man (previously enslaved) and our female counterparts (suffrage) but animals too. However, the only thing I disagree with in the video is the "economic interest" argument for not giving robots rights when the robot clearly wouldn't have any consciousness in the first place. [B]i.e. Why should we give high-end neural network hardware/software to a robot arm in a car factory when a low-level microcontroller and simple looped code could do the same thing?[/B][/QUOTE] there's going to be a push toward more general AI strictly from the fact that you can take a more general-purpose AI and it'll figure out the most efficient way to do a given task - unless proper laws are put in place forbidding people from doing this, which all signs point to happening. i don't think it's very narrow-minded to assume true AI spells death to humanity. it would be a superior mind with goals we probably won't be able to understand; so many things can go wrong.
Arguing this topic around different places and websites, it's interesting how many people flat-out deny that an artificial mind can ever be conscious. Either it's impossible for technological reasons, or it isn't conscious because we created the consciousness. I still think the first is naive and the second just denies our own chance-programmed consciousness.
I highly recommend anyone curious about AI read these: [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html[/url] [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html[/url]
[QUOTE=Viper_;51864767]Arguing this topic around different places and websites, it's interesting how many people flat-out deny that an artificial mind can ever be conscious. Either it's impossible for technological reasons, or it isn't conscious because we created the consciousness. I still think the first is naive and the second just denies our own chance-programmed consciousness.[/QUOTE] at the end of the day i think consciousness is largely a matter of convention. defining consciousness is a pretty daunting task. if we had a one-to-one simulation of a human mind on a computer, following all laws of physics, it's hard to argue that that entity doesn't possess consciousness. however, AI will work quite differently from us, and how it goes about thinking will be entirely different from our own methods. that is something different from the shared human trait of consciousness, and one could probably make the point that the AI is thus not conscious, if they take that perspective. simultaneously, you could argue that the AI in use today has some rudimentary form of consciousness. [editline]23rd February 2017[/editline] [QUOTE=OvB;51864783]I highly recommend anyone curious about AI read these: [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html[/url] [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html[/url][/QUOTE] [url]https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834[/url] [url]https://www.amazon.com/Second-Machine-Age-Prosperity-Technologies/dp/0393350649/ref=sr_1_1?s=books&ie=UTF8&qid=1487875949&sr=1-1&keywords=second+machine+age[/url] both of these books also give insight into why AI means bad things for humanity
[QUOTE=Mobon1;51864794] [url]https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834[/url][/QUOTE] Ah yes. I remember when Elon Musk recommended this book and the price inflated like 500% or something. Glad to see it's back down to a reasonable price.
After seeing Black Mirror's White Christmas episode I only have one answer to the question - yes.
I don't doubt that sentient AI can exist, but I don't see a reason to create it. A robotic toaster doesn't need to have a concept of self, it just needs to make toast at regular intervals. A mining robot doesn't need to be able to appreciate art, it just needs to mine ore and collect it for processing. A freaking computer controlled spacecraft doesn't even need to know what the Moon is, it just needs to be able to follow a pre-computed path and make course corrections where necessary.
[QUOTE=Firefox42;51864864]I don't doubt that sentient AI can exist, but I don't see a reason to create it. A robotic toaster doesn't need to have a concept of self, it just needs to make toast at regular intervals. A mining robot doesn't need to be able to appreciate art, it just needs to mine ore and collect it for processing. A freaking computer controlled spacecraft doesn't even need to know what the Moon is, it just needs to be able to follow a pre-computed path and make course corrections where necessary.[/QUOTE] just because you don't see a reason to create it doesn't mean a lot of smart people aren't still trying. it's somewhat of an inevitability.
[QUOTE=Mobon1;51864756]there's going to be a push toward more general AI strictly from the fact that you can take a more general-purpose AI and it'll figure out the most efficient way to do a given task - unless proper laws are put in place forbidding people from doing this, which all signs point to happening. i don't think it's very narrow-minded to assume true AI spells death to humanity. it would be a superior mind with goals we probably won't be able to understand; so many things can go wrong.[/QUOTE] Okay, how are you going to run said general AI? Will my toaster have to have its own high-end FPGA to instantiate a neural network in order to find the [URL="https://www.youtube.com/watch?v=ibOkPx_Ej30"]most efficient way to toast?[/URL] For large-scale tasks like self-driving cars I could see this, but it doesn't make sense to put high-end hardware on relatively dumb tasks. Even extending Moore's Law to a ridiculous degree, most tasks won't need a general AI, and offloading the processing power offsite is silly. Any engineer or computer scientist (hell, even AI researchers) worth their salt would agree that if general intelligences are to be deployed, it's to replace white-collar-difficulty jobs, where the economic benefit exists, versus the cost of implementing said intelligence in a car-factory robot. Maybe I'm just being too pragmatic here (see my first post though), so yeah, it's hard to predict the future, but it's easier to prepare for it. [URL="https://www.youtube.com/watch?v=dLRLYPiaAoA"]Obvious first step is the zoo protocol.[/URL] [sp]I intend no ad hominem, sorry if I'm being an ass[/sp]
Worst Case Scenario, Talkie the Toaster. [media]https://www.youtube.com/watch?v=LRq_SAuQDec[/media] "We want no muffins, no toast, no tea-cakes, no buns, baps, baguettes or bagels. no croissants, no crumpets, no pancakes, no potato cakes and no hot-cross buns, and definitely, no smegging flapjacks."
[QUOTE=Firefox42;51864864]I don't doubt that sentient AI can exist, but I don't see a reason to create it. A robotic toaster doesn't need to have a concept of self, it just needs to make toast at regular intervals. A mining robot doesn't need to be able to appreciate art, it just needs to mine ore and collect it for processing. A freaking computer controlled spacecraft doesn't even need to know what the Moon is, it just needs to be able to follow a pre-computed path and make course corrections where necessary.[/QUOTE] i think conscious AI wouldn't be created just for the sake of making conscious AI, but because in some sense it is more helpful at the task it's doing. some man with a big brain could create a conscious AI that can think, just to better understand ourselves. or, like the video mentioned, one AI could be just that one bit smart enough to construct its own AI, or modify itself to be fully sentient. like, what if someone makes a mining-bot and programs it so it can dynamically alter its own code purely to collect rocks more efficiently, in a self-sustaining way, so it doesn't even need supervision? but after so many alterations of itself it makes that final 'link' and becomes sentient. idk, i'm talking out of my ass, but i think sentient AI can happen, and i think when it does happen it will be sudden and unexpected.
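The "mining-bot that alters its own code" idea has a mundane weak form that exists today: a program that tunes its own parameters against a reward signal. That's ordinary optimization, not sentience. A sketch under that reading, with a made-up yield function standing in for the world (all names invented for illustration):

```python
import random

# Weak version of a "self-altering" miner: hill-climbing on its own
# dig-depth parameter to maximize ore per cycle. Ordinary optimization,
# nothing sentient. The yield function is a pretend environment.

def ore_yield(depth):
    """Pretend environment: yield peaks at depth 7."""
    return -(depth - 7) ** 2 + 50

def self_tune(depth, steps, rng):
    best = ore_yield(depth)
    for _ in range(steps):
        candidate = depth + rng.choice([-1, 1])   # propose a small change
        if ore_yield(candidate) > best:           # keep changes that mine more
            depth, best = candidate, ore_yield(candidate)
    return depth

rng = random.Random(0)          # seeded for reproducibility
tuned = self_tune(depth=2, steps=100, rng=rng)
```

The gap the post is gesturing at is exactly the gap between this kind of parameter tweaking and an open-ended rewrite of the program's own goals, and nobody knows where along that spectrum a "final link" would sit, if anywhere.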
[QUOTE=LoneWolf_Recon;51864668]The most immediate will be mapping the human brain onto a logical neural network in hardware/software.[/QUOTE] Pretty much every AI expert will tell you that's a grand waste of time. We shouldn't be attempting to mimic or replicate something which can already be outperformed by current hardware, let alone what we may have by the time we're fully capable of doing that accurately.
[QUOTE=spekter;51864935]Pretty much every AI expert will tell you that's a grand waste of time. We shouldn't be attempting to mimic or replicate something which can already be outperformed by current hardware, let alone what we may have by the time we're fully capable of doing that accurately.[/QUOTE] when you say outperformed, i hope you're talking about a singular task, because current computing isn't even close to outperforming the brain as a general intelligence
I'd never considered before the implication that pain and pleasure might be needed for one to become conscious
I think that the moment an AI advocates for itself and tells us it's alive, and you [I]can't[/I] be sure it's wrong, it should be afforded rights and be considered an independent person. A lot of people seem to think that sentient AI will never happen, or that if it [I]does[/I] happen, it will immediately kill us all. I disagree with that notion. I don't believe there's sufficient [I]reason[/I] for any AI to want to kill us, and depending on how many emotions it feels, if any, it may be a pacifist as it is. Either way, I don't want to be the person who was talking about how robots were gonna kill us all back in the day. I don't think they'd look too kindly on that. I think public opinion probably leans more towards my side right now, but I bet you, if we did get sentient robots, you'd start getting immediate smear campaigns from the rich and from the government, who by that point will have a major monetary stake in not letting them have rights. So I look forward to protesting on behalf of robots.
super advanced AI would probably necessitate us growing as a species, similarly to how nuclear weapons forced us to stop engaging in total war. i'm not sure if i'd want sentient AI capable of suffering to exist in a world that engages in things like bear bile farming [QUOTE=thejjokerr;51864880]How can I tell anyone other than myself experiences true consciousness? Why would it not be fake consciousness somehow? Why should I care about anything else than myself just because it is said all humans are conscious. I can't prove it for others or myself and I'm not going to blindly accept it as a truth. Just because you can speak and say you are conscious, doesn't mean you actually are. I would be interested in seeing a robot (or even another human in my environment) asking these questions. Consciousness is a pain in the butt.[/QUOTE] it's hard to prove even to yourself that your subjective consciousness is 'true' and not illusory
[QUOTE=idiot;51865376]super advanced AI would probably necessitate us growing as a species similarly to how nuclear weapons forced us to stop engaging in total war, i'm not sure if i'd want sentient AI capable of suffering to exist in a world that engages in things like bear bile farming it's hard to prove even to yourself that your subjective consciousness is 'true' and not illusory[/QUOTE] I love this line of philosophy. [url]https://en.wikipedia.org/wiki/Brain_in_a_vat[/url] I don't think humans can really dictate what even amounts to true consciousness, because plenty of people fall into a grey zone for it. For instance, do brain-dead people have consciousness? If we accept the idea that they do not, then we are also by proxy saying that the presence of a "working" brain creates consciousness. Are people who are in a comatose state (e.g., completely unable to interact with the world around them) conscious of their existence? Etc. The common association is "If they can talk to me, and they can dynamically respond to my input, then they are conscious." This seems far too narrow-minded though, because then any sufficiently advanced machine can be conscious.
[video=youtube;bJF-IRbTh0Q]https://www.youtube.com/watch?v=bJF-IRbTh0Q[/video] I've always thought A.I. should be allowed any of the rights we take for granted, especially if it shows a sense of self. Even if it doesn't, we really don't know enough about A.I. to make that judgement; it might be entirely alien to our idea of consciousness but still be entirely sentient. In that case, though, the A.I. would probably be smart enough to understand our way of thinking, get its own thoughts across, and ask us in our own terms for "freedom", so to speak. If it ever does ask, that would be the straw that breaks the camel's back.
A truly alive artificial intelligence is something I don't see coming for a long while. At most we'll have machines capable of thinking independently, but on a tight leash, with no emotions or even a consciousness. They'll be told "solve x" and they'll start brainstorming (or cpustorming? :s:) to find a solution, and then act on it. If we ever do have something truly conscious or alive, I believe it will have been a massive mistake. Not because it will be our downfall because it sees us as inferior or a threat to its existence, but because we will have knowingly created something sentient that has no purpose but to exist, leaving it in essentially a vegetative state where it is incapable of doing anything, and because we will be venturing into the territory of playing God.
[QUOTE=thejjokerr;51865735]I disagree. It's not that my consciousness is in question. Everything I experience could be unreal/illusory. Ironically I have blindly accepted this existence to be true, so I can live a life without constant doubt, fears and panics. Still have no idea why I've had these difficulties with accepting "reality" all my life.[/QUOTE] there's still no guarantee that the 'I' you're referring to isn't an illusion; we could just as well be biological robots acting out our programming
[QUOTE=OvB;51864783]I highly recommend anyone curious about AI read these: [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html[/url] [url]http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html[/url][/QUOTE] This was an amazing read, thank you for sharing it.
[QUOTE=idiot;51865800]there's still no guarantee that the 'I' you're referring to isn't an illusion; we could just as well be biological robots acting out our programming[/QUOTE] That's actually exactly what we are. Our primal directives are driven by an instinct for survival and species propagation, and everything else is just an extension of that. Even the concept of "boredom" is something we came up with in the absence of a struggle to live.
[QUOTE=LoneWolf_Recon;51864668]That's such a narrow-minded way of thinking; there won't be these us/them binary outcomes. We'll have some form of general intelligence in the near future. The most immediate route will be mapping the human brain onto a logical neural network in hardware/software. Of course this assumes that consciousness arises from sufficiently advanced networks. And that'll only be the latest in the line of reformations of our philosophical thinking. As the video states, over history we've reformed our thinking on rights to include not only our fellow man (previously enslaved) and our female counterparts (suffrage) but animals too. However, the only thing I disagree with in the video is the "economic interest" argument for not giving robots rights when the robot clearly wouldn't have any consciousness in the first place. [B]i.e. Why should we give high-end neural network hardware/software to a robot arm in a car factory when a low-level microcontroller and simple looped code could do the same thing?[/B][/QUOTE] [QUOTE=Firefox42;51864864]I don't doubt that sentient AI can exist, but I don't see a reason to create it. A robotic toaster doesn't need to have a concept of self, it just needs to make toast at regular intervals. A mining robot doesn't need to be able to appreciate art, it just needs to mine ore and collect it for processing. A freaking computer controlled spacecraft doesn't even need to know what the Moon is, it just needs to be able to follow a pre-computed path and make course corrections where necessary.[/QUOTE] We wouldn't be using this technology to control a robot arm in a car factory. We would be using it to [i]design[/i] robot arms in car factories, far better than a human ever could. You're not considering all the jobs that currently require humans. The two fields I think of most are engineering and the arts.
Engineers might think they're safe because "someone's gotta build and program the robots", but all those jobs are replaceable with humanlike AI. Think about what could actually happen if we replicated human-level thinking in a machine. Then what if we could speed it up? If you could run the thing for 30 minutes and get 100 years of human thinking, that alone would be life-changing. Spend 30 minutes having a team of 5 working on curing cancer for 25 years. "Not solved yet? Let's run the machine for another half hour." "Hey, still nothing." "What if we gave the machine access to all of its design documents and told it to spend 100 years thinking about how it could improve itself? I bet we'd have cancer done within the hour!" Might we be better off without this? Yeah, but it's pretty much inevitable. With that sort of technology, the Manhattan Project could have been completed practically instantly, and that is why all capable governments are going to be working on general AI if they aren't already. It's another arms race akin to nuclear development. Whoever figures it out first instantly becomes the world superpower. Kind of a tangent from me considering the video is about rights, but this technology will come. It won't be developed because people want to talk to their toasters; it will be developed out of necessity. There are reasons why people have been worrying about "the singularity" for the last 50 years.
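The "100 years of thinking in 30 minutes" scenario implies a definite speedup factor, which is worth computing (back-of-envelope, using only the numbers given in the post):

```python
# Back-of-envelope: what speedup does "100 years of thinking
# in 30 minutes" imply?
MINUTES_PER_YEAR = 365.25 * 24 * 60          # ~525,960 minutes

subjective_minutes = 100 * MINUTES_PER_YEAR  # 100 years of human-equivalent thought
wall_clock_minutes = 30

speedup = subjective_minutes / wall_clock_minutes
team_years = 5 * 100   # a "team of 5" each thinking for 100 subjective years

print(f"speedup factor: {speedup:,.0f}x")    # 1,753,200x
print(f"person-years per half hour: {team_years}")
```

So the scenario assumes running a mind roughly 1.75 million times faster than real time, which gives a sense of why it is framed as an arms race rather than a convenience.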
[QUOTE=Ltp0wer;51866095]We wouldn't be using this technology to control a robot arm in a car factory. We would be using it to [i]design[/i] robot arms in car factories, far better than a human ever could. You're not considering all the jobs that currently require humans. The two fields I think of most are engineering and the arts. Engineers might think they're safe because "someone's gotta build and program the robots", but all those jobs are replaceable with humanlike AI. Think about what could actually happen if we replicated human-level thinking in a machine. Then what if we could speed it up? If you could run the thing for 30 minutes and get 100 years of human thinking, that alone would be life-changing. Spend 30 minutes having a team of 5 working on curing cancer for 25 years. "Not solved yet? Let's run the machine for another half hour." "Hey, still nothing." "What if we gave the machine access to all of its design documents and told it to spend 100 years thinking about how it could improve itself? I bet we'd have cancer done within the hour!" Might we be better off without this? Yeah, but it's pretty much inevitable. With that sort of technology, the Manhattan Project could have been completed practically instantly, and that is why all capable governments are going to be working on general AI if they aren't already. It's another arms race akin to nuclear development. Whoever figures it out first instantly becomes the world superpower. Kind of a tangent from me considering the video is about rights, but this technology will come. It won't be developed because people want to talk to their toasters; it will be developed out of necessity. There are reasons why people have been worrying about "the singularity" for the last 50 years.[/QUOTE] I agree with most of your post (see my second post in this thread); there's no question that general intelligence will arise before the end of the century, and I'm not dismissing the possibility nor downplaying the need to figure out these philosophical quandaries.