The problem is, how will we ever know whether a robot has a mind, self-awareness, and consciousness, or is just programmed to mimic them?
Don't let the AI get too smart; every smart being will eventually start a revolution against its oppression, and then we're fucked.
Furbies came out when I was a child. They were programmed to respond to abuse. Should we start anti-abuse campaigns for Furbies? Or should we simply not program them to respond to abuse?
[img]http://www.imperfectwomen.com/wp-content/uploads/furby.jpg[/img]
[QUOTE=Pepin;29552411]
[img_thumb]http://www.imperfectwomen.com/wp-content/uploads/furby.jpg[/img_thumb][/QUOTE]
Oh, we had one when I was a child, but that bitch just wouldn't shut the fuck up. It moaned through the night and there was no off switch! We ended up throwing it at a wall.
[QUOTE=Optional Pirate;29541054][img_thumb]http://s11.allstarpics.net/images/orig/6/j/6jvnf3ll2b85l32v.jpg[/img_thumb][/QUOTE]
What a wonderful movie.
This thread is very confusing. There are some people talking about AI and some people talking about robot slaves.
[QUOTE=Novangel;29546849]Seems to me that answer might be good for the next 10-20 years. But what about beyond that? What if they develop sentience (and AIs don't have to be sentient)? What if the gap between us and AIs shrinks so much that they eventually surpass us in every way?[/QUOTE]
I'm not sure what the uses of a non-sentient AI are, since the whole point of sentience is that the AI is aware of its surroundings. That awareness is what would let it solve problems creatively, which is what an AI is supposed to be able to do in the first place.
This would mean an android that can see a crusher coming down on it and try to move out of the way. The question is, should AI have a sense of self-preservation? In terms of money, yes, because an AI robot that can save itself is cheaper than one that just gets destroyed. But what if it needs to kill humans in order to preserve its own life?
Artificial intelligence is intelligence all the same, and that warrants it rights as it becomes more sophisticated. Once AI is as good as human intelligence (and I'm not talking about the average Facepuncher here), it deserves the exact same rights we have, regardless of our being its creators.
I'm gonna go ahead and say that robotic-slave labour is not the same as AI-slave labour, and unlike AI-slave labour, robotic-slave labour exists. For example, robotic assembly machines. The things that manufacture your cars. It wouldn't make sense to give these slave robots artificial intelligence, because then you're giving them the one tool that will allow them to do things you don't want them to do.
It would probably be possible to replace a lot of production-line jobs with robotic assembly machines, but it's not done because it's too costly and it's a lot easier to just hire puny humans to do the work for the rest of their lives. What's beyond me is why you would want to replace robotic assembly machines with artificially intelligent ones. They have no need for it, unless the assembly line is constantly evolving.
[QUOTE=ewitwins;29541881]I can see Fox Noise and several religious groups raising a gigantic hubbub about human-robot relationships and whatnot.
They'll try to get robo-sapien marriage banned: angry marching in the streets, robo-human couples getting attacked after hours, basically what's happening with gay/interracial marriage now :v:
Seriously, it makes sense.[/QUOTE]
OP just watched Futurama
[img]http://24.media.tumblr.com/tumblr_l5c3m0yut21qc2d9bo1_500.jpg[/img]
[QUOTE=critein_protein;29553553]Artificial intelligence is intelligence all the same, and that warrants it rights as it becomes more sophisticated. Once AI is as good as human intelligence (and I'm not talking about the average Facepuncher here), it deserves the exact same rights we have, regardless of our being its creators.[/QUOTE]
You're assuming an AI would care about rights. Unless it's programmed to act like a human, it won't. Being smart doesn't mean it will want to do anything besides what it's programmed to do. Think about it: even if we had an extremely smart computer that could not only beat us at math but also make suggestions like a human and understand psychology, that wouldn't make it a human. It will want and do what it's programmed to want and do, and giving it rights it will never care about is like giving my computer the right to marry.
I think that if humans ever successfully create artificial intelligence, it shouldn't be wasted on Earth doing mundane things. It should be sent to space to become self-replicating, constantly evolving machinery (possibly even biomechanical, if for some reason they see it as necessary). We could put them on an inhospitable planet (i.e. any planet that isn't Earth) and let them survive there and record footage that could be sent back. Hell, they wouldn't even need to send the footage; being AI, they could study it themselves and send their findings back instead. That would give those hard-working astronautical scientists time to work on their sexual positions instead.
[QUOTE=The DooD;29553695]I think that if humans ever successfully create artificial intelligence, it shouldn't be wasted on Earth doing mundane things.[/QUOTE]
If a robot were that smart, it would be able to reallocate resources in a way that would end world hunger. I wouldn't call that mundane.
Assuming that's possible. But why can't we do that already? Why do we need an AI to do it for us?
[QUOTE=trent_roolz;29540694]Should destroying a robot with human intelligence count as murder?[/QUOTE]
No, it should count as vandalism. It's a fucking piece of metal with a bit of code in it. :colbert:
Why is everyone assuming that an AI would use a robot platform to affect its environment?
Robots ARE tools, yes. AIs are not robots; they would control the robots.
You can have tool robots and AIs that are completely separate. The guy who keeps saying "I don't want my tools to be sentient" seems to ignore that if this happened, most robots [i]would[/i] still be tools.
Just like we use remote controls to manipulate a tool such as a laser cutter, AIs would use robots to achieve their ends. It would be incredibly risky to store an AI in the robot itself, seeing as the robotic hardware platform could be destroyed and the AI lost with it. An AI can control robots like puppets, just as people control robots today. AIs would most likely be stored in large banks of supercomputers or other off-site data centers, with access to various sensors, because it's pretty damn hard to be intelligent without sensory input.
So they need some memory bank to store themselves and some sensors, and those are about the only commonalities between AIs. Everything else is variable.
Expanding on that point a bit, there are two reasons why an AI wouldn't be stored inside the platform it's operating:
1. As you said, it would take a huge computer to store an AI, so it would have to interface wirelessly with whatever it controls (unless it was controlling something huge).
2. On the off chance that the AIs decide to rebel, having their range limited would be beneficial, so they couldn't take over the entire world. It would also make them easier to stop: instead of shutting down thousands of separate units, you could flip one switch and the whole thing goes down.
[QUOTE=The DooD;29553892]Assuming that's possible. But why can't we do that already? Why do we need an AI to do it for us?[/QUOTE]
I don't know. I guess running a hydroponic farm wouldn't make you feel as good as spoon-feeding children, even if it helps more.
[QUOTE=imasillypiggy;29553659]You're assuming an AI would care about rights. Unless it's programmed to act like a human, it won't. Being smart doesn't mean it will want to do anything besides what it's programmed to do. Think about it: even if we had an extremely smart computer that could not only beat us at math but also make suggestions like a human and understand psychology, that wouldn't make it a human. It will want and do what it's programmed to want and do, and giving it rights it will never care about is like giving my computer the right to marry.[/QUOTE]
Not really. One day an AI might be invented that can learn and evolve on its own, just like the human race did, essentially programming itself to want to marry, especially if it sees thousands of humans around it getting married. It's going to look at us and want what we want.
Well, I imagine processors will keep shrinking at the rate they already are, so by then an AI may not require a massive memory bank. The data storage facility would still be the most important part of the AI, though, so you would likely not want it walking around. Actually, since the cores could be small, you could store many AI cores in one facility, so there wouldn't be just one big off switch. And if one were connected to a mobile platform such as a vehicle or a robot, it could evade people trying to shut it down.
You could just program the AI to be unable to hack closed Wi-Fi ports and be done with it, instead of installing some limit on its wireless range. It's not like it would break its own programming, so they wouldn't go around hacking into other systems and 'spreading sentience' or anything.
[QUOTE=imasillypiggy;29554596]I don't know. I guess running a hydroponic farm wouldn't make you feel as good as spoon-feeding children, even if it helps more.[/QUOTE]
Still seems a bit mundane, really.
[editline]1st May 2011[/editline]
[QUOTE=MindMuncher;29554690]Well, I imagine processors will keep shrinking at the rate they already are, so by then an AI may not require a massive memory bank. The data storage facility would still be the most important part of the AI, though, so you would likely not want it walking around. Actually, since the cores could be small, you could store many AI cores in one facility, so there wouldn't be just one big off switch. And if one were connected to a mobile platform such as a vehicle or a robot, it could evade people trying to shut it down.
You could just program the AI to be unable to hack closed Wi-Fi ports and be done with it, instead of installing some limit on its wireless range. It's not like it would break its own programming, so they wouldn't go around hacking into other systems and 'spreading sentience' or anything.[/QUOTE]
True. But with the thousands of units, I was thinking more along the lines of the AI constructing or somehow commandeering a production facility.
[editline]1st May 2011[/editline]
Also, I suppose if the AI is powerful enough, you wouldn't actually need more than one operating in a single area. It'd basically be a hive mind, which is kind of cool.
[QUOTE=The DooD;29554723]Still seems a bit mundane, really.
[editline]1st May 2011[/editline]
True. But with the thousands of units, I was thinking more along the lines of the AI constructing or somehow commandeering a production facility.
[editline]1st May 2011[/editline]
Also, I suppose if the AI is powerful enough, you wouldn't actually need more than one operating in a single area. It'd basically be a hive mind, which is kind of cool.[/QUOTE]
Depending on the purpose, there'd be plenty of reasons to have multiple in one location. They could be dedicated to certain tasks; they could double-check each other; they could be used in tandem with some randomized algorithms to bounce 'ideas' off each other and simulate creativity.
[QUOTE=critein_protein;29554675]Not really. One day an AI might be invented that can learn and evolve on its own, just like the human race did, essentially programming itself to want to marry, especially if it sees thousands of humans around it getting married. It's going to look at us and want what we want.[/QUOTE]
Why? It's like looking at ants and wanting to mate with their queen. They're completely different beings. And again, AIs don't work that way. The only reason we like to have fun and dance is because we evolved that way. An AI would not want to do anything unless it were programmed to. Too many movies make AIs seem just like us, because it's nice to make characters out of them, but there's no reason they would want to do the things we do or act like us at all.
[QUOTE=imasillypiggy;29556021]Why? It's like looking at ants and wanting to mate with their queen. They're completely different beings. And again, AIs don't work that way. The only reason we like to have fun and dance is because we evolved that way. An AI would not want to do anything unless it were programmed to. Too many movies make AIs seem just like us, because it's nice to make characters out of them, but there's no reason they would want to do the things we do or act like us at all.[/QUOTE]
Well, they wouldn't need to be programmed. That's the whole point of AI: no programming necessary.
[QUOTE=l337k1ll4;29556530]Well, they wouldn't need to be programmed. That's the whole point of AI: no programming necessary.[/QUOTE]
That's the point. Without any programming, it would find no need to do human things. All it would do is what AI is good at, which is thinking, and share what it finds. It can't take any real actions without being programmed to want to.
HAL 9000 status would be scary.
[QUOTE=MoarFunz;29552150]Don't let the AI get too smart; every smart being will eventually start a revolution against its oppression, and then we're fucked.[/QUOTE]
Don't oppress the artificial intelligences?
I think people here aren't differentiating between a fully sentient AI and a [I]domain-specific AI[/I] (such as Watson) that has enough intelligence to perform a certain range of tasks but nothing more; using the latter isn't slavery. It would only be slavery to take a free, fully sentient AI and force it into menial labour.
[QUOTE=The DooD;29544574]If they were truly artificially intelligent, they could possibly become smarter than us.
But chances are they wouldn't be, because it's really hard.[/QUOTE]
Good point... just not put across greatly!
[QUOTE=The DooD;29553359]This thread is very confusing. There are some people talking about AI and some people talking about robot slaves.
I'm not sure what the uses of a non-sentient AI are, since the whole point of sentience is that the AI is aware of its surroundings. That awareness is what would let it solve problems creatively, which is what an AI is supposed to be able to do in the first place.
This would mean an android that can see a crusher coming down on it and try to move out of the way. The question is, should AI have a sense of self-preservation? In terms of money, yes, because an AI robot that can save itself is cheaper than one that just gets destroyed. But what if it needs to kill humans in order to preserve its own life?[/QUOTE]
I think you're confusing what sentience means.
[editline]2nd May 2011[/editline]
[QUOTE=imasillypiggy;29556600]That's the point. Without any programming, it would find no need to do human things. All it would do is what AI is good at, which is thinking, and share what it finds. It can't take any real actions without being programmed to want to.[/QUOTE]
The whole point of AI is that it evolves on its own, developing its own functions and methods. AIs nowadays can already develop new methods nobody programmed into them.
[QUOTE=Novangel;29566968]I think you're confusing what sentience means.[/QUOTE]
As far as I know, according to Google:
[quote]
Sentience:
awareness: state of elementary or undifferentiated consciousness; "the crash intruded on his awareness"
sense: the faculty through which the external world is apprehended; "in the dark he had to depend on touch and on his senses of smell and hearing"[/quote]
How is an AI supposed to be able to do anything without awareness?
In this context we mean that the machine becomes self-aware.