• Robot passes self-awareness test
    149 replies
[QUOTE=Swebonny;48244841]Jeez what an incredibly lame way of determining something as complex as self-awareness. It basically just solved a puzzle which requires some [B]awareness of the entity that is yourself[/B]. Doesn't mean it's [B]self-aware[/B].[/QUOTE] I'm not entirely sure I get that wording, but yeah, this is sort of sketchy. Cool nonetheless.
We already have robot workers.... Now self-aware robots? Soon enough they'll be walking beside us.
[QUOTE=Ziks;48245129]I'm not sure that there is a difference.[/QUOTE] It's really not any form of consciousness that deserves the name. It's being able to determine the source of a sound that was produced as part of a basic command. It's no more complicated than robot 1 determining that the sound came from robot 2. [editline]19th July 2015[/editline] A cool piece of robotics, but only tenuously related to self-awareness.
Man I fucking want one of these. [video=youtube;YdaEUJLArKs]https://www.youtube.com/watch?v=YdaEUJLArKs[/video]
[QUOTE=JohnnyMo1;48245233]It's really not any form of consciousness that deserves the name. It's being able to determine the source of a sound that was produced as part of a basic command. It's no more complicated than robot 1 determining that the sound came from robot 2.[/QUOTE] Right, so what would make it self-awareness? Is it a linguistic difference (referring to the internal model it has of itself as "I" instead of "Robot 2"), or does there need to be something fundamentally different about how it processes information relating to itself? Can that be well defined?
I think the scariest part about this test is that they didn't program it to recognize itself. In other words, the robot recognized itself on its own... creepy.
[QUOTE=Ziks;48245256]Right, so what would make it self awareness? Is it a linguistic difference (referring to the internal model it has of itself as "I" instead of "Robot 2"), or does there need to be something fundamentally different about how it processes information relating to itself? Can that be well defined?[/QUOTE] I'd say fundamentally different information processing. I couldn't tell you how it should be done, but certainly if this constitutes self-awareness, self-awareness is boring. Is a recursive function self-aware in any interesting sense? A set that contains itself? Certainly it can be well-defined. We have an obvious example of a self-aware thing (a person) and a non-self aware thing (a rock). Defining the steps in between might be difficult, but the distinction can obviously be made clear between certain amounts of self-awareness.
[i]The NAO were created by man.....[/i]
[QUOTE=JohnnyMo1;48245297]I'd say fundamentally different information processing. I couldn't tell you how it should be done, but certainly if this constitutes self-awareness, self-awareness is boring. Is a recursive function self-aware in any interesting sense? A set that contains itself? Certainly it can be well-defined. We have an obvious example of a self-aware thing (a person) and a non-self aware thing (a rock). Defining the steps in between might be difficult, but the distinction can obviously be made clear between certain amounts of self-awareness.[/QUOTE] I think a decent enough self-awareness test would involve mirrors, and the robot being able to discern that it's looking at a reflection of itself, and not just another robot that happens to be mimicking everything it does. It seems to be a popular one with animals, at any rate.
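A toy version of that mirror test can be sketched as contingency detection: issue random motor commands and check how tightly the observed figure's motion tracks them. This is a made-up illustration, not how any real robot implements it; the names and the simplified "perception" functions are all invented.

```python
import random

COMMANDS = ["raise_arm", "lower_arm", "turn_head", "wave"]

def contingency_score(perceive, steps=200):
    """Issue random motor commands and measure how often the
    observed figure's motion matches the command just issued."""
    matches = 0
    for _ in range(steps):
        cmd = random.choice(COMMANDS)
        if perceive(cmd) == cmd:
            matches += 1
    return matches / steps

def is_probably_self(perceive, threshold=0.9):
    # A reflection echoes almost every command; an unrelated robot
    # that merely happens to move similarly will not keep this up.
    return contingency_score(perceive) >= threshold

mirror = lambda cmd: cmd                           # reflection: perfect echo
other_robot = lambda cmd: random.choice(COMMANDS)  # independent mover
```

The mirror passes the threshold while the independently moving robot almost certainly does not, which is the intuition behind the animal version of the test.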
I find its voice really annoying. Why couldn't it be something robotic instead of that of a child?
[QUOTE=JohnnyMo1;48245297]I'd say fundamentally different information processing. I couldn't tell you how it should be done, but certainly if this constitutes self-awareness, self-awareness is boring. Is a recursive function self-aware in any interesting sense? A set that contains itself?[/QUOTE] I'm okay with saying those examples are self aware, because I see it as a continuous range between simple systems like that at one end, then the robots in this article being a bit further up, and human beings being further up still. Naturally we aren't necessarily at the absolute high end of the scale of self awareness, and different individuals will be at different places. [QUOTE]Certainly it can be well-defined. We have an obvious example of a self-aware thing (a person) and a non-self aware thing (a rock). Defining the steps in between might be difficult, but the distinction can obviously be made clear between certain amounts of self-awareness.[/QUOTE] But defining exactly at what point something becomes self aware is exactly what I wanted you to define. I think a continuous scale of self awareness makes more sense, because I really don't think there is an objective way to define what is and is not self aware with no ambiguity.
[QUOTE=Ziks;48245342]I'm okay with saying those examples are self aware, because I see it as a continuous range between simple systems like that at one end, then the robots in this article being a bit further up, and human beings being further up still. Naturally we aren't necessarily at the absolute high end of the scale of self awareness, and different individuals will be at different places. But defining exactly at what point something becomes self aware is exactly what I wanted you to define. I think a continuous scale of self awareness makes more sense, because I really don't think there is an objective way to define what is and is not self aware with no ambiguity.[/QUOTE] No, I completely agree that awareness is a continuous scale. I'm simply trying to make the point that this is not an interesting step up (or perhaps not a step up at all) in self-awareness.
Wow is this the robot from Ex Machina?
[QUOTE=JohnnyMo1;48245348]No, I completely agree that awareness is a continuous scale. I'm simply trying to make the point that this is not an interesting step up (or perhaps not a step up at all) in self-awareness.[/QUOTE] Okay, I think I must have misunderstood the point you were making. Fundamentally I don't think there's anything particularly special about self awareness, since all I think is required is for an agent to include a representation of itself in its model of the world. What matters most is the sophistication of the agent's ability to reason, which can only really be judged by its behaviour when presented with problems to solve. So yeah, I agree that this isn't a huge leap compared to what has been achieved before, but I don't think you can objectively say this robot isn't self aware (mainly because self awareness is subjective).
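The "representation of itself in its model of the world" idea can be made concrete in a few lines: the agent's world model is just a table of entities, and one entry happens to be the agent. This is a toy sketch with invented names, not a claim about how the NAO is actually programmed.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    position: tuple

@dataclass
class Agent:
    name: str
    world: dict = field(default_factory=dict)  # world model: name -> Entity

    def observe(self, entity):
        self.world[entity.name] = entity

    def self_model(self):
        # "Self-awareness" in this minimal sense is just the agent
        # looking up its own entry, same as for any other object.
        return self.world.get(self.name)

bot = Agent("robot_1")
bot.observe(Entity("robot_2", (1, 0)))
bot.observe(Entity("robot_1", (0, 0)))  # the agent modelling itself
```

Nothing about the self entry is special here; what would differ between systems is how much reasoning gets done over it.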
I think what makes it seem trivial is that we can design robots from the ground up to be capable of solving certain problems while animals need the tools required to solve these problems to emerge naturally through evolution. Maybe self awareness just isn't all that complicated. It's just uncommon in nature because of how inefficient nature is at creating tools to solve these sorts of problems.
[QUOTE=itisjuly;48245334]I find its voice really annoying. Why couldn't it be something robotic instead of that of a child?[/QUOTE] This robot is used a lot in education, often when working with small kids. That's why it's kind of small and childish in general. However, it's actually quite capable; it's no toy.
[QUOTE=Limed00d;48244168]holy shit they warned us they knew they're coming for our jobs[/QUOTE] "Dey terk er jerbs!" Lol but really this is freakin awesome and a move forward for robotics and AI
[QUOTE=Zero-Point;48245333]I think a decent enough self-awareness test would involve mirrors, and the robot being able to discern that it's looking at a reflection of itself, and not just another robot that happens to be mimicking everything it does. It seems to be a popular one with animals, at any rate.[/QUOTE] There's a bit of a leap between self-recognition and self-awareness. Awareness, at least on a human level, requires experiencing qualia. Machines don't do that, they only process what is given to them. I'll be excited when a machine straight-up does something that is beyond its recursive abilities, isn't a software glitch, and denotes the presence of some complex human or animal characteristic.
[QUOTE=U.S.S.R;48245703]There's a bit of a leap between self-recognition and self-awareness. Awareness, at least on a human level, requires experiencing qualia. Machines don't do that, they only process what is given to them.[/QUOTE] I don't see why we should assume that experiencing qualia is anything more than processing information.
[QUOTE=JohnnyMo1;48245233]It's really not any form of consciousness that deserves the name. It's being able to determine the source of a sound that was produced as part of a basic command. It's no more complicated than robot 1 determining that the sound came from robot 2. [editline]19th July 2015[/editline] A cool piece of robotics, but only tenuously related to self-awareness.[/QUOTE] But it's not just that. The robot determined that its own action changed the parameters of the task it was given, even though the rules never stated that such a possibility existed. Before it answered, the problem was unsolvable. The point is not that it can differentiate the entities, itself included; the point is that it made a logical deduction involving its own action, an action that was not in its set of possibilities beforehand and was introduced by the robot itself.
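That deduction can be framed as eliminating possible worlds, with the robot's own utterance serving as the evidence that does the eliminating. A minimal sketch of the logic (purely illustrative; the actual NAO code isn't shown anywhere in this thread):

```python
# The two worlds the robot cannot initially tell apart.
worlds = {"got_pill", "no_pill"}

def knows_answer(possible):
    return len(possible) == 1

def attempt_to_speak(actually_got_pill):
    # A "dumbed" robot is mute; only an un-pilled robot makes a sound.
    return None if actually_got_pill else "I don't know"

# Before acting, both worlds are consistent with all observations.
assert not knows_answer(worlds)

# The robot acts, then observes the result of its own action.
heard = attempt_to_speak(actually_got_pill=False)
if heard is not None:
    # A pilled robot could not have produced that sound,
    # so the robot's own utterance rules that world out.
    worlds.discard("got_pill")
```

The key step is exactly the one described above: the evidence that resolves the problem did not exist until the robot itself produced it.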
[QUOTE=U.S.S.R;48245703]There's a bit of a leap between self-recognition and self-awareness. Awareness, at least on a human level, requires experiencing qualia. Machines don't do that, they only process what is given to them. I'll be excited when a machine straight-up does something that is beyond its recursive abilities, isn't a software glitch, and denotes the presence of some complex human or animal characteristic.[/QUOTE] "Qualia" is a meaningless buzz word.
[I]Then #1 spoke, "I don't know", but he did. Now, we all do.[/I]
This is still a very specific, very precise set of technicalities that have been met. What it proves is more along the lines of what the robot itself is capable of doing -- via objective/subjective learning -- to improve on its own and solve fundamental perspective problems without being programmed to execute the solutions. 'Self Consciousness Test' is a slightly misleading way to put it -- this isn't really a closer step towards genuinely living, feeling, and conscious artificial intelligence -- just a new step in having robots that can act and react on a more independent scale of cognition. Which, unfortunately, has never been a very far-fetched goal in robotics. If anything at all, this should just conjure a question as to how trivially we measure precise consciousness -- and how a well-made machine can easily 'meet' those goals and requirements without being what we'd recognize as alive.
Holy shit, I saw these robots at a thing they did for my school. They do motion tracking, can recognise faces, and talk. It was pretty awesome.
[QUOTE=Ziks;48245256]Right, so what would make it self awareness? Is it a linguistic difference (referring to the internal model it has of itself as "I" instead of "Robot 2"), or does there need to be something fundamentally different about how it processes information relating to itself? Can that be well defined?[/QUOTE] It seems to me that the difference isn't so much linguistic as it is qualitative. The robot can identify actions taken by itself, but it cannot hold itself as a purely abstract concept. In other words, it does not conceive of its actions, it simply performs them. There is likely no part of its programming which allows it to FEEL itself as an entity. All its programming does is add a variable that is unconsciously processed into an action. So in a nutshell I think that is the major difference: conscious self-awareness versus unconscious self-awareness. That is to say that whereas a person can consciously hold the concept of themselves in mind for no other purpose, this robot cannot, and furthermore it does not have the qualitative experience of "the theatre of the mind". As someone said earlier, just because you program the robot to say ouch when it is struck does not mean that it FEELS pain. [editline]19th July 2015[/editline] [QUOTE=Ziks;48245766]I don't see why we should assume that experiencing qualia is anything more than processing information.[/QUOTE] Qualia is a set of information above and beyond simple cognition. Qualia makes it so that every thought not only holds "objective" information, but also has a judgment regarding each of those thoughts. Although not always dwelt upon, the judgments that come from Qualia provide for illogical thought such as likes and dislikes, the understanding of ineffable things such as color and emotion. I cannot see how we can hope to create a machine that experiences qualia when we can hardly solve the problem for ourselves.
[QUOTE=Zenreon117;48246529]It seems to me that the difference isn't so much linguistic as it is qualitative. The robot can identify actions taken by itself, but it cannot hold itself as a purely abstract concept. In other words, it does not conceive of its actions, it simply performs them. There is likely no part of its programming which allows it to FEEL itself as an entity. All its programming does is add a variable that is unconsciously processed into an action. So in a nutshell I think that is the major difference: conscious self-awareness versus unconscious self-awareness. That is to say that whereas a person can consciously hold the concept of themselves in mind for no other purpose, this robot cannot, and furthermore it does not have the qualitative experience of "the theatre of the mind". As someone said earlier, just because you program the robot to say ouch when it is struck does not mean that it FEELS pain.[/QUOTE] Isn't abstraction just a way to simplify concepts so that they're easier to process? Why would a robot want or need to abstract something? Feeling is just a sensation derived from a series of processes. If you gave a robot all of the same processes and senses as humans, who's to say they wouldn't "feel"?
[QUOTE=ferrus;48244178]I don't really see what is so impressive about this. Surely it is not difficult to program a bot to hear itself and react accordingly?[/QUOTE] Here's a comment ripped straight from the youtube video to help better explain. [QUOTE]The robot is self aware. Nao robot waves its arm to catch a human's attention. Rensselaer Polytechnic Institute professor Selmer Bringsjord programmed the three robots to think that two of them were given a "dumbing pill." In reality, that pill's a button on top of their heads that can be pressed by the tester. When the tester asked the robots which pill they received, their processors crunched data in order to provide the right answer. Since two of them were unable to talk, only one answered out loud. "I don't know," the third robot replied, realizing the truth a short while later. "Sorry, I know now," the third Nao waved at the tester. "I was able to prove that I was not given a dumbing pill." After all, it could speak! That means the machine was able to recognize and differentiate itself from the other two -- it was self-aware at that particular point in time. That test is a simpler version of a puzzle called The King's Wise Men.[/QUOTE]
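The procedure in that comment can be mimicked with a toy simulation, assuming the "dumbing pill" simply disables a robot's speaker. The robot ids and exact phrasing here are invented for illustration.

```python
def run_test(muted):
    """Ask each robot which pill it received.

    muted: set of robot ids whose speakers are disabled (the "pill").
    A robot that hears its own reply can deduce it was not pilled.
    """
    transcript = []
    for robot in ("robot_1", "robot_2", "robot_3"):
        if robot in muted:
            continue  # a pilled robot produces no sound at all
        transcript.append((robot, "I don't know"))
        # Hearing its own voice, the speaking robot revises its answer.
        transcript.append((robot, "Sorry, I know now: I was not given a dumbing pill."))
    return transcript

transcript = run_test(muted={"robot_1", "robot_2"})
```

Only the un-muted robot appears in the transcript, first with "I don't know" and then with the correction, matching the exchange described in the quote.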
[QUOTE=Zenreon117;48246529]Qualia is a set of information above and beyond simple cognition. Qualia makes it so that every thought not only holds "objective" information, but also has a judgment regarding each of those thoughts. Although not always dwelt upon, the judgments that come from Qualia provide for illogical thought such as likes and dislikes, the understanding of ineffable things such as color and emotion. I cannot see how we can hope to create a machine that experiences qualia when we can hardly solve the problem for ourselves.[/QUOTE] Likes, dislikes, and emotion are all far from ineffable. They are all relatively consistent processes with tangible and measurable benefits and functions. I see no reason why those processes couldn't be reproduced in a robot.
[QUOTE=Mr. Scorpio;48246570]Isn't abstraction just a way to simplify concepts so that they're easier to process? Why would a robot want or need to abstract something? Feeling is just a sensation derived from a series of processes. If you gave a robot all of the same processes and senses as humans, who's to say they wouldn't "feel"?[/QUOTE] I am not making the argument that a robot in principle could not feel. I am making the argument that a robot, without qualitative experience and judgments upon not only its experiences but itself, cannot hope to be self-conscious in any meaningful sense of the word. Given that we understand consciousness as something akin to our own experience, an internal theatre, it seems that without that element a thinking being is simply that: thought with no experience of feelings. The thing is, and this is crucial, that we are not purely rational beings. Our consciousness often imposes illogical things upon our thoughts which could not otherwise be reasoned. I suppose this bleeds into the point about how you can always say what something is, but there is always the further question of "But, is it good?". This is not something logical, or at least we have not come up with arguments for it that are not circular. [editline]19th July 2015[/editline] [QUOTE=Mr. Scorpio;48246599]Likes, dislikes, and emotion are all far from ineffable. They are all relatively consistent processes with tangible and measurable benefits and functions. I see no reason why those processes couldn't be reproduced in a robot.[/QUOTE] Ok then, tell me what anger is. Explain to me, in a way that is objectively reproducible, exactly what anger feels like.
[QUOTE=Zenreon117;48246613]I am not making the argument that a robot in principle could not feel. I am making the argument that a robot, without qualitative experience and judgments upon not only its experiences but itself, cannot hope to be self-conscious in any meaningful sense of the word. Given that we understand consciousness as something akin to our own experience, an internal theatre, it seems that without that element a thinking being is simply that: thought with no experience of feelings. The thing is, and this is crucial, that we are not purely rational beings. Our consciousness often imposes illogical things upon our thoughts which could not otherwise be reasoned. I suppose this bleeds into the point about how you can always say what something is, but there is always the further question of "But, is it good?". This is not something logical, or at least we have not come up with arguments for it that are not circular.[/QUOTE] You are using words that describe abstract concepts. Of course a robot will never have an "internal theatre" if you cannot accurately describe what an "internal theatre" is. [editline]19th July 2015[/editline] [QUOTE=Zenreon117;48246613]Ok then, tell me what anger is. Explain to me, in a way that is objectively reproducible, exactly what anger feels like.[/QUOTE] Anger is a process that prepares an organism for physical exertion and incentivizes defensive behavior for the purpose of ensuring the survival of the organism. like, it serves a very clear purpose, I don't see what's so complicated about it