[QUOTE=Zenreon117;48246529]It seems to me that the difference isn't so much linguistic as it is qualitative. The robot can identify actions taken by itself, but it cannot hold itself as a purely abstract concept.[/QUOTE]
It will maintain a data structure in memory that represents an abstraction of itself, is that not enough?
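To make that concrete, here's a toy Python sketch of what I mean by a self-model. It's purely illustrative (none of the names come from the paper), but it shows a program holding an abstraction of itself that it can reason over like any other object:

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """Hypothetical internal representation a robot keeps of itself."""
    name: str
    position: tuple
    capabilities: list = field(default_factory=list)
    action_log: list = field(default_factory=list)

    def record_action(self, action: str):
        # The robot can later reason over its own past actions.
        self.action_log.append(action)

    def can_perform(self, action: str) -> bool:
        # Querying the self-model is no different from querying
        # a model of any other entity in the world.
        return action in self.capabilities

# The robot reasons about itself the same way it reasons about anything else:
me = SelfModel(name="robot_1", position=(0, 0), capabilities=["speak", "move"])
me.record_action("spoke")
print(me.can_perform("speak"))  # True
```

Whether that counts as "holding itself as an abstract concept" is exactly the question, but structurally there's nothing stopping a program from doing it.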
[QUOTE]In other words, it does not conceive of its actions, it simply performs them.[/QUOTE]
I'm not sure what you mean by this. It decides on what action to perform based on a set of internal rules that define how it reasons, just like we do.
[QUOTE]There is likely no part of its programming which allows it to FEEL itself as an entity.[/QUOTE]
Can you define what you mean by "Feel"?
[QUOTE]All its programming does is add a variable that is unconsciously processed into an action. So in a nutshell I think that that is the major difference; Conscious self-awareness versus unconscious self-awareness. That is to say that whereas a person can consciously hold the concept of themselves in mind for no other purpose, this robot cannot, and furthermore it does not have the qualitative experience of "The theatre of the mind".[/QUOTE]
How would you define conscious experience vs unconscious experience?
[QUOTE]As someone said earlier, just because you program the robot to say ouch when it is struck does not mean that it FEELS pain.[/QUOTE]
Right, I would say it would also need to update its behaviour in a way that is expected to avoid repeating the experience that triggered a pain response. Then it would meet all meaningful requirements for "feeling" pain.
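Something like this toy sketch is all I mean (names and numbers are made up for illustration): a "pain" signal that actually changes future behaviour, rather than just printing "ouch".

```python
import random

class PainLearner:
    """Toy agent whose 'pain' signal makes the triggering action
    less likely to be repeated. Purely illustrative."""
    def __init__(self, actions):
        # Start with all actions equally likely.
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Pick an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action

    def feel_pain(self, action):
        # Aversive update: make the painful action much less likely.
        self.weights[action] *= 0.1

agent = PainLearner(["touch_stove", "touch_table"])
agent.feel_pain("touch_stove")
# "touch_stove" is now chosen roughly 10x less often than "touch_table".
```

If the response changes future behaviour like this, I'd argue the distinction between that and "really feeling" pain stops doing any work.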
[QUOTE]Qualia is a set of information above and beyond simple cognition. Qualia makes it so that every thought not only holds "objective" information, but also carries a judgment regarding each of those thoughts. Although not always dwelt upon, the judgments that come from qualia provide for illogical thought such as likes and dislikes, and for the understanding of ineffable things such as color and emotion.[/QUOTE]
Alright, all that so far sounds like information processing, specifically assigning relations of varying weights between different qualia, and between qualia and emotional responses.
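A crude sketch of what "weighted relations between qualia and judgments" could look like as data (the percepts and weights here are invented for illustration):

```python
# Toy associative network: weighted links between percepts ("qualia")
# and emotional responses. Purely illustrative values.
associations = {
    ("red", "warmth"): 0.8,
    ("red", "danger"): 0.6,
    ("blue", "calm"): 0.9,
}

def judgment(percept):
    """Return the strongest emotional association for a percept, if any."""
    linked = {resp: w for (p, resp), w in associations.items() if p == percept}
    return max(linked, key=linked.get) if linked else None

print(judgment("red"))  # "warmth"
```

The "ineffable" part may just be that these weights aren't accessible to introspection, not that they're beyond information processing.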
[QUOTE]I cannot see how we can hope to create a machine that experiences qualia when we can hardly solve the problem for ourselves.[/QUOTE]
You're assuming there is a problem in the first place.
Until there are more tests to confirm it, I'm chalking this up to code trickery.
We ain't playing god yet.
I wouldn't mind having a robot friend in like 20 years though, if this is true.
[QUOTE=uchiha2727;48246586]Here's a comment ripped straight from the youtube video to help better explain.[/QUOTE]
That does not explain anything about how this was achieved. That explanation is equally consistent with the robot simply being programmed to do so.
[QUOTE=Killuah;48246017]But it's not just that. The robot determined that his own action was changing the parameters of the task and rules he was given despite the rules or parameters not even stating that there was such a possibility. From the point of "before he answered" the problem was unsolvable.
The point is not that he can differentiate the entities including himself; the point is that he made a logical deduction involving his own actions, which were not in the set of his possibilities before and were introduced by himself.[/QUOTE]
Source? The article doesn't really say anything like that.
[QUOTE=paul simon;48245155]I'm not entirely sure I get that wording of it.
But yeah this is sort of sketchy, but cool nonetheless.[/QUOTE]
[QUOTE=Ziks;48245129]I'm not sure that there is a difference.[/QUOTE]
Actually, I realized I'm wrong here. I'm mixing up self-awareness with consciousness. So if the robots did solve the puzzle by taking themselves into consideration, then I guess they are, technically speaking, self-aware. IF we go by the definition that self-awareness is:
"an awareness of one's own personality or individuality"
But by going with this definition it feels like we'd be able to call many animals and robots/AI-systems self-aware.
i have no doubts that it is self-aware but i don't believe that it is close to being conscious, which in my own view stems from irrationality and unpredictability. creativity has these roots, and i consider that an important distinction between artificial intelligences and real intelligences.
[QUOTE=Ninja Gnome;48247147]i have no doubts that it is self-aware but i don't believe that it is close to being conscious, which in my own view stems from irrationality and unpredictability. creativity has these roots, and i consider that an important distinction between artificial intelligences and real intelligences.[/QUOTE]
So would throwing a pseudo-random number generator in there help?
[QUOTE=Ziks;48247197]So would throwing a pseudo-random number generator in there help?[/QUOTE]
No? That's not what a conscience is.
[QUOTE=itisjuly;48247241]No? That's not what a conscience is.[/QUOTE]
He's talking about consciousness, not a conscience. They claimed the important part of consciousness is irrationality and unpredictability, which you'll get if you use a PRNG when making decisions.
However I completely disagree with those being what consciousness stems from.
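For what it's worth, here's roughly what "throwing a PRNG in there" would look like (a minimal sketch, all names invented): the same scoring rules, with pseudo-random jitter layered on top so the output becomes unpredictable.

```python
import random

def deterministic_choice(options, scores):
    # Always picks the highest-scoring option: fully predictable.
    return max(options, key=lambda o: scores[o])

def noisy_choice(options, scores, temperature=1.0):
    # Adds pseudo-random jitter to each score before choosing, making
    # behaviour unpredictable without changing the underlying rules.
    jittered = {o: scores[o] + random.gauss(0, temperature) for o in options}
    return max(options, key=lambda o: jittered[o])

options = ["a", "b", "c"]
scores = {"a": 1.0, "b": 0.9, "c": 0.1}
# deterministic_choice always returns "a";
# noisy_choice will sometimes pick "b" or even "c".
```

Which is exactly why I don't buy unpredictability as the criterion: bolting this on clearly doesn't create consciousness.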
[QUOTE=Mr. Scorpio;48246570]Isn't abstraction just a way to simplify concepts so that they're easier to process? Why would a robot want or need to abstract something?
Feeling is just a sensation derived from a series of processes. If you gave a robot all of the same processes and senses as humans, who's to say they wouldn't "feel"?[/QUOTE]
Programming is full of abstraction.
[QUOTE=Ziks;48246679]
Can you define what you mean by "Feel"?
You're assuming there is a problem in the first place.[/QUOTE]
Problem is, we still can't define what "feel" is.
[QUOTE=Fourier;48247266]Problem is, we still can't define what "feel" is.[/QUOTE]
Right, and maybe that's because there is no meaningful definition. It could be a name we try to give to something that doesn't really exist.
[QUOTE=Ziks;48247311]Right, and maybe that's because there is no meaningful definition. It could be a name we try to give to something that doesn't really exist.[/QUOTE]
How can something we feel not exist? Then again, dreams feel real, yet they don't exist.
Weird, huh.
[QUOTE=Fourier;48247350]How can something we feel not exist? Then again, dreams feel real, yet they don't exist.
Weird, huh.[/QUOTE]
By not exist I mean there is no distinct phenomenon that we can point to and say "that's a feels", in the same way that the grey dots at the intersections of this grid don't exist:
[img]http://cdn.instructables.com/F81/M1FI/FRD807F2/F81M1FIFRD807F2.LARGE.gif[/img]
[B][U][I]THIS IS PURE TECHNO-HERESY![/I][/U][/B]
"Bringsjord doesn’t believe that any of the artificial creatures featured in the present paper are actually self-conscious. He has explained repeatedly [e.g., see [14], [15]] that genuine phenomenal consciousness [16] is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness. Nonetheless, the logico-mathematical structure and form of self-consciousness can be ascertained and specified, and these specifications can then be processed computationally in such a way as to meet clear tests of mental ability and skill. This test-based approach, dubbed Psychometric AI, thankfully, avoids endless philosophizing in favor of determinate engineering aimed at building AIs that can pass determinate tests. In short, computing machines, AIs, robots, and so on are all “zombies,” but these zombies can be engineered to pass tests. A not-small body of work lays out and establishes this position; e.g., [14], [17], [18], [19], [5], [4]. Some of Bringsjord’s co-authors in the present case may well reject his position, but no matter: engineering to tests is fortunately engineering, not a matter of metaphysics." [url=http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf] Bringsjord, Page 2[/url]
We're done here.
[QUOTE=ExplodingGuy;48247410]"Bringsjord doesn’t believe that any of the artificial creatures featured in the present paper are actually self-conscious. He has explained repeatedly [e.g., see [14], [15]] that genuine phenomenal consciousness [16] is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness. Nonetheless, the logico-mathematical structure and form of self-consciousness can be ascertained and specified, and these specifications can then be processed computationally in such a way as to meet clear tests of mental ability and skill. This test-based approach, dubbed Psychometric AI, thankfully, avoids endless philosophizing in favor of determinate engineering aimed at building AIs that can pass determinate tests. In short, computing machines, AIs, robots, and so on are all “zombies,” but these zombies can be engineered to pass tests. A not-small body of work lays out and establishes this position; e.g., [14], [17], [18], [19], [5], [4]. Some of Bringsjord’s co-authors in the present case may well reject his position, but no matter: engineering to tests is fortunately engineering, not a matter of metaphysics." [url=http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf] Bringsjord, Page 2[/url]
We're done here.[/QUOTE]
Well no, Bringsjord's personal opinion doesn't conclude the debate of what consciousness is. He also believes he has a valid argument for P=NP.
[QUOTE=ExplodingGuy;48247410]"Bringsjord doesn’t believe that any of the artificial creatures featured in the present paper are actually self-conscious. He has explained repeatedly [e.g., see [14], [15]] that genuine phenomenal consciousness [16] is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness. Nonetheless, the logico-mathematical structure and form of self-consciousness can be ascertained and specified, and these specifications can then be processed computationally in such a way as to meet clear tests of mental ability and skill. This test-based approach, dubbed Psychometric AI, thankfully, avoids endless philosophizing in favor of determinate engineering aimed at building AIs that can pass determinate tests. In short, computing machines, AIs, robots, and so on are all “zombies,” but these zombies can be engineered to pass tests. A not-small body of work lays out and establishes this position; e.g., [14], [17], [18], [19], [5], [4]. Some of Bringsjord’s co-authors in the present case may well reject his position, but no matter: engineering to tests is fortunately engineering, not a matter of metaphysics." [url=http://kryten.mm.rpi.edu/SBringsjord_etal_self-con_robots_kg4_0601151615NY.pdf] Bringsjord, Page 2[/url]
We're done here.[/QUOTE]
i want to believe
[QUOTE=elevate;48247265]Programming is full of abstraction.[/QUOTE]
I mean human abstraction. Robots have different capabilities than humans, I don't see why we should expect them to handle cognitive tasks the exact same way we do.
[QUOTE=Ziks;48247471]Well no, Bringsjord's personal opinion doesn't conclude the debate of what consciousness is. He also believes he has a valid argument for P=NP.[/QUOTE]So, Bringsjord isn't allowed to extrapolate on the data gathered by his own test?
[QUOTE=Ziks;48247471]Well no, Bringsjord's personal opinion doesn't conclude the debate of what consciousness is. He also believes he has a valid argument for P=NP.[/QUOTE]
Certainly it'd be more becoming to address the perspective given by Bringsjord point-by-point?
[QUOTE=ExplodingGuy;48247598]So, Bringsjord isn't allowed to extrapolate on the data gathered by his own test?[/QUOTE]
My main issue is this bit:
[QUOTE]He has explained repeatedly [e.g., see [14], [15]] that genuine phenomenal consciousness [16] is impossible for a mere machine to have, and true self-consciousness would require phenomenal consciousness.[/QUOTE]
It looks like this book he wrote contains his arguments for that belief: [url]http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?5.59[/url]
I don't have the book, but considering the flaws in his "proof" that P=NP I don't hold much hope in his arguments being particularly solid. I'm game if you would like to argue on his behalf though.
[editline]19th July 2015[/editline]
[QUOTE=Warriorx4;48247641]Certainly it'd be more becoming to address the perspective given by Bringsjord point-by-point?[/QUOTE]
It's in a book that I don't have.
[QUOTE=Ziks;48247667]My main issue is this bit:
It looks like this book he wrote contains his arguments for that belief: [url]http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy?5.59[/url]
I don't have the book, but considering the flaws in his "proof" that P=NP I don't hold much hope in his arguments being particularly solid. I'm game if you would like to argue on his behalf though.[/QUOTE]Well, I'm not arguing that P=NP (and I haven't read that book, either). I'm arguing that these robots don't demonstrate self-conscious behavior, citing the paper written by the team who tested them. I should have elaborated on that in my first post.
^Is it not so that there exists a scale of self-conscious behavior? A rat doesn't have the same awareness as a human. Couldn't it be said that the robot in the OP is low on the scale of consciousness?
[QUOTE=ExplodingGuy;48247724]Well, I'm not arguing that P=NP (and I haven't read that book, either). I'm arguing that these robots don't demonstrate self-conscious behavior, citing the paper written by the team who tested them. I should have elaborated on that in my first post.[/QUOTE]
His argument that they aren't self-conscious relies on his assumption that machines can't be conscious, which is hardly a well-supported position.
[QUOTE=Warriorx4;48247753]^Is it not so that there exists a scale of self-conscious behavior? A rat doesn't have the same awareness as a human. Couldn't it be said that the robot in the OP is low on the scale of consciousness?[/QUOTE]
Exactly.
[QUOTE=Swebonny;48247028]Actually, I realized I'm wrong here. I'm mixing up self-awareness with consciousness. So if the robots did solve the puzzle by taking themselves into consideration, then I guess they are, technically speaking, self-aware. IF we go by the definition that self-awareness is:
"an awareness of one's own personality or individuality"
But by going with this definition it feels like we'd be able to call many animals and robots/AI-systems self-aware.[/QUOTE]
I wouldn't consider something as 'passing' the test unless it comes from the robot itself. I mean, if all the robot is doing is going through its programming, then the programmer or programmers wrote a program that passes the test.
It's like saying a computer beat a human at chess. Unless that computer learned to play chess on its own, it didn't. The people who wrote the program beat the human player; the computer is just the tool they used to do it.
[QUOTE=cecilbdemodded;48247942]I wouldn't consider something as 'passing' the test unless it comes from the robot itself. I mean, if all the robot is doing is going through its programming, then the programmer or programmers wrote a program that passes the test.
It's like saying a computer beat a human at chess. Unless that computer learned to play chess on its own, it didn't. The people who wrote the program beat the human player; the computer is just the tool they used to do it.[/QUOTE]
Some of the best chess engines teach themselves how to play, they're given huge libraries of prior chess games and learn which moves were successful and which are to be avoided.
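The principle is simple, something like this toy sketch (the games and moves are made up, real engines are vastly more sophisticated): score moves by how often they appeared in winning games from the library.

```python
from collections import defaultdict

# Toy "learning from a game library": count how often each opening
# move appeared in a winning game. Purely illustrative data.
game_library = [
    (["e4", "e5", "Nf3"], "win"),
    (["e4", "c5", "Nf3"], "loss"),
    (["d4", "d5", "c4"], "win"),
]

move_stats = defaultdict(lambda: [0, 0])  # move -> [wins, games]
for moves, result in game_library:
    first = moves[0]
    move_stats[first][1] += 1
    if result == "win":
        move_stats[first][0] += 1

def win_rate(move):
    wins, games = move_stats[move]
    return wins / games if games else 0.0

print(win_rate("e4"))  # 0.5
print(win_rate("d4"))  # 1.0
```

Nobody programmed in "prefer d4 here"; that preference came out of the data. That's the sense in which the engine, not the programmer, learned to play.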
[editline]20th July 2015[/editline]
From an earlier thread:
[QUOTE]I used to believe that conscious experience was a non-physical phenomenon too. But then I extended the observation that we are incapable of proving to others that we "truly" experience things to myself; I cannot record any internal memories that can prove to future instances of me that I "truly" experienced things. Any memories of my prior experiences are indistinguishable from the memories I would expect from an impostor automaton with the same neural configuration as myself, had I somehow inherited its body. But the words I am typing now were formulated by that automaton, this expression of self awareness that was naturally produced by a complex neural network following learnt behaviour that encodes, among other things, a desire to translate internally generated models of its own structure into text on an internet forum. To believe otherwise requires the existence of some means of information exchange between the hypothetical non-physical element of me that "truly" experiences things, and the physical part of me that is pressing keys on my keyboard. We have not yet observed such communication, so I can't see a reason for assuming its existence. The simpler explanation is that there is no distinction between the way a physical automaton would experience things and "true" conscious experience.[/QUOTE]
[QUOTE=Warriorx4;48247753]^Is it not so that there exists a scale of self-conscious behavior? A rat doesn't have the same awareness as a human. Couldn't it be said that the robot in the OP is low on the scale of consciousness?[/QUOTE]No, because the robot isn't responding because it made a [somewhat] conscious decision; it's responding because it was programmed to. If this constitutes self-consciousness, any computer capable of self-reporting when prompted by another computer or user is self-conscious.
But Ziks, does free will exist then? We are all just biological machines in that case.
Where does consciousness come from? Who is the "I" that decides? I mean, we humans feel the thinking process. (Not expecting you to know the answer.)
Machine AI is just code. It doesn't feel.
Just a joke one of my friends made about what free will is: "Punching a scientist who claims free will doesn't exist in the face."