• Robot passes self-awareness test
    149 replies, posted
[QUOTE=Ziks;48247963]Some of the best chess engines teach themselves how to play, they're given huge libraries of prior chess games and learn which moves were successful and which are to be avoided. [editline]20th July 2015[/editline] From an earlier thread:[/QUOTE] Bringsjord up there contends that machines can never have phenomenal consciousness. According to him, an impostor automaton of yourself is an impossible feat altogether, no? Or am I misinterpreting? [editline]19th July 2015[/editline] [QUOTE=ExplodingGuy;48248030]No, because the robot isn't responding because it made a [somewhat] conscious decision, it's responding because it was programmed to. If this constitutes self-consciousness, any computer capable of self-reporting when prompted by another computer or user is self-conscious.[/QUOTE] Isn't the fact that it wasn't programmed to do this exactly what makes it so impressive? [QUOTE=Ziks;48244202]The impressive part is that they didn't program it to specifically solve this problem.[/QUOTE] [QUOTE=Teddybeer;48244210]It processed a question, understood what it meant, and then acted on it, finding out that it should not have been able to do what it did. This would be great tech for Mars rovers, so they could report things that shouldn't happen and that they were never programmed to detect.[/QUOTE] [QUOTE=RaptorJGW;48244219]Obviously that wouldn't be self-awareness then. You could also program a robot to say "I have feelings" and to say "ouch" whenever it gets hit. That wouldn't mean it's actually sentient like a human. The AI ended up realising on its own that it could speak. That is the interesting part.[/QUOTE]
[QUOTE=Ziks;48247963]Some of the best chess engines teach themselves how to play, they're given huge libraries of prior chess games and learn which moves were successful and which are to be avoided.[/QUOTE] That's statistical analysis. It's a huge thing in the stock market: bots learning how to trade better based on various stats. It's very technical and "cold". Now, if an AI could observe chess and then dynamically apply what it learned from chess to something else, that would be impressive.
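For what it's worth, the "cold" statistical learning described above can be sketched in a few lines. This is a toy illustration, not a real chess engine: the game records and move names are made up, and the "learning" is nothing but tallying how often each move appeared in won games.

```python
# Toy sketch: score moves purely by their empirical win rate in a game library.
from collections import defaultdict

def learn_move_scores(games):
    """games: list of (moves, result) pairs, result 1 for a win, 0 for a loss.
    Returns each move's observed win rate."""
    wins = defaultdict(int)
    seen = defaultdict(int)
    for moves, result in games:
        for move in moves:
            seen[move] += 1
            wins[move] += result
    return {m: wins[m] / seen[m] for m in seen}

# A made-up "library" of prior games.
library = [
    (["e4", "Nf3", "Bc4"], 1),  # a won game
    (["e4", "f3", "g4"], 0),    # a lost game
    (["d4", "Nf3", "Bc4"], 1),
]

scores = learn_move_scores(library)
# "f3" only ever appeared in a loss, so it is learned as a move to avoid.
```

Nothing here "understands" chess; it just mirrors the point that frequency statistics over prior games are enough to learn which moves to prefer.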
[QUOTE=Warriorx4;48248082]Isn't the fact that it wasn't programmed to do this exactly what makes it so impressive?[/QUOTE]No, it was programmed, and the details of how said program behaves are in the paper I posted, which is why I chimed in in the first place.
[QUOTE=Fourier;48248078]But Ziks, does free will exist then? We are all just biological machines in this case. Where does consciousness come from? Who is the "I" that decides? I mean, we humans feel the thinking process. (Not expecting you to know the answer.) Machine AI is just code. It doesn't feel. Just a joke one of my friends made about what free will is: "Punching a scientist who claims free will doesn't exist in the face".[/QUOTE] Free will can exist even if your decisions are the result of mechanical processes.
I've actually seen a demo of these robots in person, and it's pretty cool. My CS professor had a colleague in Iowa who used them to run a "Robotics Dance Class", where students programmed them to dance. He had the colleague come up to give us a live demo: they did a dance number and read Green Eggs and Ham back and forth. It takes a lot of effort to get them to seem "alive", but when they do it's really cool.
[QUOTE=ExplodingGuy;48248160]No, it was programmed, and the details of how said program behaves are in the paper I posted, which is why I chimed in in the first place.[/QUOTE] The problem is that you have yet to define the difference between a "conscious" and a "non-conscious" decision.
[QUOTE=ExplodingGuy;48248160]No, it was programmed, and the details of how said program behaves are in the paper I posted, which is why I chimed in in the first place.[/QUOTE] I skimmed over it earlier, then opened multiple tabs on phenomenal consciousness instead of reading the rest. In fact, the paper itself acknowledges that their robot has satisfied only some of the requirements for self-consciousness, and that this whole thing is a step forward for robots able "to take English sentences in as input, and yield logicist representations as output" more than anything else, and it holds that future robots can only improve on this. [quote=paper]Given a formal regimentation of this test previously formulated by Bringsjord, it can be proved that, in theory, a future robot represented by R3 can answer provably correctly (which for plausible reasons, explained by Floridi, entails that R3 has satisfied some of the structural requirements for self-consciousness). In this paper we explain and demonstrate the engineering that now makes this theoretical possibility actual, both in the simulator known as ‘PAGI World’ (used for testing AIs), and in real (= physical) robots interacting with a human tester. These demonstrations involve scenarios that demand the passing of Floridi’s test for self-consciousness, where for us, passing such a test is required for an agent to be regarded morally competent[/quote] So yes, the news is just running with a sensationalized shtick.
[QUOTE=cecilbdemodded;48247942]I wouldn't consider something as 'passing' the test unless it comes from the robot itself. I mean, if all the robot is doing is going through its programming, then the programmer or programmers wrote a program that passes the test. It's like saying a computer beat a human playing chess. Unless that computer learned to play chess on its own then it didn't. The people who wrote the program beat the human player, the computer is just the tool they used to do it.[/QUOTE] If we imposed such a requirement, it would probably be impossible for any AI to pass any kind of test. Even in machine learning you have to fit your algorithms around the problem you want to solve, often manually picking out the best parameters, tweaking and setting rules, before actually being able to solve it. So human hands play a large part in how an AI solves a problem, and in whether it succeeds at all.
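The "manually picking out the best parameters" point above can be made concrete with a deliberately tiny example. Everything here is invented for illustration: a one-feature classifier whose only "learning" is a human hand-picking candidate thresholds and keeping whichever fits the data best.

```python
# Toy illustration: even trivial machine learning needs human-chosen knobs.
# Made-up labelled data: (feature value, class label).
data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.1, 0), (0.8, 1)]

def accuracy(threshold):
    """Fraction of points classified correctly by 'predict 1 if x >= threshold'."""
    return sum((x >= threshold) == bool(label) for x, label in data) / len(data)

# The candidate list is picked by hand, just like real hyperparameter grids.
candidates = [0.3, 0.5, 0.7]
best_threshold = max(candidates, key=accuracy)
```

The algorithm only searches where the human told it to search, which is the sense in which human hands shape what the AI can solve.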
[QUOTE=Fourier;48248078]But Ziks, does free will exist then? We are all just biological machines in this case.[/QUOTE] Depends on how you define free will, since it is a subjective term. Personally I don't think free will is a meaningful concept. [QUOTE]Where does consciousness come from?[/QUOTE] I think an agent is conscious if it records a (partial) representation of its cognitive state to be utilised in the future. So your consciousness comes from the feedback loop of you remembering prior thoughts and using them to generate new thoughts. [QUOTE]Who is the "I" that decides?[/QUOTE] The decisions you make are a function of your current neural state and the percepts you receive. This influences your next neural state, and so also affects the decisions you make in the future. There's probably a small amount of non-determinism involved too, because your brain is subject to quantum mechanics (as is everything else in this universe that's physical). [QUOTE]I mean, we humans feel the thinking process. (Not expecting you to know the answer.) Machine AI is just code. It doesn't feel.[/QUOTE] I don't think we're anything more than machines; we just convince ourselves that since we can't describe feelings in an objective way, they must have some special form of existence. I think that's just because the qualia that make up feelings are defined only in relation to other qualia, which boil down to abstract packets of information. [QUOTE]Just a joke one of my friends made about what free will is: "Punching a scientist who claims free will doesn't exist in the face".[/QUOTE] Hopefully I'll be safe from your friend, because I don't claim it doesn't exist, just that the question of free will isn't meaningful.
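The feedback-loop idea above (prior thoughts recorded and fed back into new thoughts) has a loop structure you can sketch in a few lines. To be clear, nothing in this toy is conscious; the function names and "thoughts" are invented purely to show the structure being described.

```python
# Purely illustrative: the next "thought" depends on both the new percept and
# a recorded trace of prior thoughts, forming the feedback loop.

def think(percept, memory):
    # Combine the incoming percept with the most recently recorded thought.
    thought = f"{percept} (recalling: {memory[-1] if memory else 'nothing'})"
    memory.append(thought)  # record a partial representation for future use
    return thought

memory = []
first = think("I see a light", memory)
second = think("The light moved", memory)  # built partly out of the first thought
```

The point of the sketch is only that each output is a function of percept plus recorded state, so earlier "thoughts" shape later ones.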
[QUOTE=Mr. Scorpio;48248215]The problem is that you have yet to define the difference between a "conscious" and a "non-conscious" decision.[/QUOTE]A conscious decision is one made in a state of awareness. A non-conscious decision isn't a decision at all; it's just a response, not unlike a reflex.
[QUOTE=ExplodingGuy;48248371]A conscious decision is one made in a state of awareness. A non-conscious decision isn't a decision at all; it's just a response, not unlike a reflex.[/QUOTE] So a conscious decision is one made after deliberately weighing the risks and benefits of your options and then proceeding? Because computers can do that. That isn't even particularly difficult.
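"Weighing risks and benefits" really is routine for computers: it's just expected-utility maximisation. The options, probabilities, and payoffs below are invented for illustration.

```python
# Sketch of risk/benefit weighing as expected-utility choice.
# Each option maps to a list of (probability, payoff) outcomes (made up here).
options = {
    "cross_now": [(0.9, 10), (0.1, -100)],  # usually fine, occasionally disastrous
    "wait":      [(1.0, 5)],                # guaranteed modest payoff
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

# The "decision": take whichever option has the highest expected value.
choice = max(options, key=lambda name: expected_value(options[name]))
```

Here "cross_now" has expected value 0.9*10 + 0.1*(-100) = -1, so the deliberation picks "wait". Whether this kind of mechanical weighing counts as a "conscious" decision is exactly what the thread is arguing about.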
[QUOTE=Warriorx4;48248082]Bringsjord up there contends that machines can never have phenomenal consciousness. According to him, an impostor automaton of yourself is an impossible feat altogether, no? Or am I misinterpreting?[/QUOTE] I don't think he claims that an impostor automaton is impossible, rather that an impostor is the best you can get. I kind of agree with him in that sense, since I don't think phenomenal consciousness exists, even in us.
[QUOTE=Ziks;48247197]So would throwing a pseudo-random number generator in there help?[/QUOTE] implemented correctly, yes, but more as a seed. i don't believe true intelligence and consciousness are impossible for us to engineer, just that this isn't there yet, not even close, although it is a great step forward for artificial intelligence. now i should define what i mean by true consciousness. while i am not the most informed on these matters, i would say something has true consciousness if it has the illusion of consciousness, the same as we do. if the machine does not believe that it itself is conscious, then i don't think of it as anything more than an impostor intelligence. that isn't to say that making a machine which convinces us that it is conscious would not be a feat in and of itself, i just think a machine which believes it is conscious is the end-goal of artificial intelligence, for i would consider that to no longer be artificial but real intelligence, new life so to speak. [editline]19th July 2015[/editline] the belief that you are conscious, the belief that you have free will, is the ultimate irrationality, but the illusion is so convincing that even people who know free will is just an illusion cannot help but continue to fall for it every time. if a machine falls for the illusion of free will, then i believe true consciousness and true intelligence have been created [editline]19th July 2015[/editline] i believe it is possible for us to create a machine like that, we are evidence enough that it is physically possible, however i don't believe this is that large a step towards it, although as i said previously it is definitely a big step in creating better thinking machines
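One reading of the "more as a seed" remark above: a seeded pseudo-random generator gives an agent behaviour that is varied but still reproducible, e.g. when breaking ties between equally-scored actions. The function and action names below are invented for illustration.

```python
# Sketch: seeded PRNG as a tie-breaker among equally-ranked actions.
import random

def pick_action(actions, seed):
    # A fresh generator seeded the same way always yields the same choice,
    # so the behaviour is non-fixed across seeds but reproducible per seed.
    rng = random.Random(seed)
    return rng.choice(actions)

a1 = pick_action(["wave", "speak", "blink"], seed=42)
a2 = pick_action(["wave", "speak", "blink"], seed=42)
a3 = pick_action(["wave", "speak", "blink"], seed=7)
```

Whether injected randomness has anything to do with free will is, of course, the philosophical question the thread is circling.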
[QUOTE=Ziks;48248441]I don't think he claims that an impostor automaton is impossible, rather that an impostor is the best you can get. I kind of agree with him in that sense, since I don't think phenomenal consciousness exists, even in us.[/QUOTE] I still don't understand how you can hold that view. Insofar as you are experiencing things right now, there is phenomenology. Even if qualia are an illusion, the point is that there is still a coming together of things which produces them. Even illusions need screens upon which to be projected. I'm not sure what exactly would constitute "true qualia" in your eyes, then.
And this is where Skynet begins.
[QUOTE=U.S.S.R;48245703]There's a bit of a leap between self-recognition and self-awareness. Awareness, at least on a human level, requires experiencing qualia. Machines don't do that, they only process what is given to them. I'll be excited when a machine straight-up does something that is beyond its recursive abilities, isn't a software glitch, and denotes the presence of some complex human or animal characteristic.[/QUOTE] Self-recognition is its own form of self-awareness, since it requires making the connection between moving your arm and observing the reflection performing the same task, flipped around so it appears to move its left arm as the robot moves its right. It's kind of the same weird process human babies go through when they realize "Holy shit, this lump of meat with wriggly meaty bits sticking out is a [i]part[/i] of me, and I can make it DO things!"
[QUOTE=Zenreon117;48248813]I still don't understand how you can hold that view. Insofar as you are experiencing things right now, there is phenomenology. Even if qualia are an illusion, the point is that there is still a coming together of things which produces them. Even illusions need screens upon which to be projected. I'm not sure what exactly would constitute "true qualia" in your eyes, then.[/QUOTE] My point is I don't think there's a secret sauce that makes us observing things fundamentally different from a digital camera observing things. The important distinction between me and the camera is that I have the additional ability to generate strings of natural language to describe to myself what I am experiencing. In other words, I can think about what I experience.
Wait, it knows it's not supposed to hear its own voice, so we know it's self aware because it apologised of its 'own will' or something? It's 10am and I haven't had breakfast yet, what am I missing? :v:
Isn't the Turing test the best way to find out if an AI has consciousness? This is merely one logic puzzle; sure, it's impressive to see a robot do it, but it doesn't prove complete self-consciousness.
[QUOTE=KlaseR;48251482]Isn't the Turing test the best way to find out if an AI has consciousness? This is merely one logic puzzle; sure, it's impressive to see a robot do it, but it doesn't prove complete self-consciousness.[/QUOTE] Nah, passing the Turing test would just prove that someone has created a chatbot complex enough to fool a human into thinking it's human. That can still be achieved with a large enough database of what constitutes natural language and a good parser to go along with it. As far as I know, there has already been one instance where a chatbot "passed" the Turing test, though from what I heard it only convinced a minority of the judges, and at first I couldn't find any sources for it. edit: wait, there it is: [quote=The Guardian]Computer simulating 13-year-old boy becomes first to pass Turing test. 'Eugene Goostman' fools 33% of interrogators into thinking it is human, in what is seen as a milestone in artificial intelligence[/quote] [URL]http://www.theguardian.com/technology/2014/jun/08/super-computer-simulates-13-year-old-boy-passes-turing-test[/URL]
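The "database plus parser" point above is essentially the old ELIZA trick. A deliberately minimal sketch (the patterns and replies are made up, and real chatbots are vastly larger, but the structure is the same):

```python
# Minimal ELIZA-style chatbot: canned patterns, a crude parser, no understanding.
import re

RULES = [
    (r"\bi am (.+)", "Why do you say you are {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
    (r".*", "I see. Please go on."),  # catch-all fallback
]

def reply(message):
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Echo captured fragments back inside a canned template.
            return template.format(*match.groups())
    return "..."

r1 = reply("I am worried about robots")
r2 = reply("Nice weather today")
```

Scale the rule list up far enough and you can fool some judges some of the time, which is why passing a Turing-style test says more about the judges than about consciousness.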
[QUOTE=geogzm;48251343]Wait, it knows it's not supposed to hear its own voice, so we know it's self aware because it apologised of its 'own will' or something? It's 10am and I haven't had breakfast yet, what am I missing? :v:[/QUOTE] Two of the three robots were muted, but all were programmed identically. The guy asks the robots whether they know if they were muted or not. One of the bots answers that it doesn't know, but then hears and recognizes its own voice, so it corrects itself. The big thing is that the robot can figure out that the audio it hears is what it said, thus being "self-aware".
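The inference being described can be laid out as a toy sketch. To be clear, these function names and strings are invented for illustration; this is not the RAIR Lab's actual code, just the shape of the reasoning: answer "I don't know", listen, match the heard speech to your own utterance, and revise the belief.

```python
# Toy sketch of the muted-robot test logic described above (invented names).
def run_test(muted):
    belief = "I don't know which pill I received."
    # The robot attempts to speak its belief; muted robots produce no audio.
    last_utterance = None if muted else belief
    heard = last_utterance  # the robot listens to the room
    if heard == belief:     # it recognizes the audio as its own speech
        belief = "Sorry, I know now: I was not given the dumbing pill."
    return belief

speaking = run_test(muted=False)  # hears itself, so it revises its answer
silent = run_test(muted=True)     # hears nothing, so its answer stands
```

Laid out this way, you can see both why it impresses people (the robot connects heard audio to its own action) and why skeptics in the thread call it a programmed inference rather than awareness.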
[QUOTE=judgeofdeath;48251576]Two of the three robots were muted, but all were programmed identically. The guy asks the robots whether they know if they were muted or not. One of the bots answers that it doesn't know, but then hears and recognizes its own voice, so it corrects itself. The big thing is that the robot can figure out that the audio it hears is what it said, thus being "self-aware".[/QUOTE] That doesn't seem like the kind of self-awareness that could lead to 'real' self-awareness.
Okay, how much longer until we have the same situation as in Mass Effect, where an AI really starts to think of itself in a conscious manner? Now that would be kind of awesome.
[QUOTE=rsa1988;48251968]Okay, how much longer until we have the same situation as in Mass Effect, where an AI really starts to think of itself in a conscious manner? Now that would be kind of awesome.[/QUOTE] Not if we end up being the Quarians..
Well, what IS self-consciousness? The ability to know you exist? The ability to make a decision based on information you gathered on your own? Understanding the fact that you managed to do these two things? But then how does anyone know that they or someone else understands that fact? Just because they say they do? It's all qualia. A robot can tell if something is orange because it was programmed to know what orange is, big whoop. But then again, weren't we all when we were in school? How is artificial intelligence [b]actually[/b] different from our own? We're both programmed. We can both make decisions based on information. We can both set out to work out a solution for instructions we're given. One is just narrower while the other is wider. Maybe instead of self-consciousness we throw that term out and just use the ol' "I think, therefore I am" quote in a new way, and stack it up against other things with a brain. I think less than a cat, therefore I am lower on the rung.
[QUOTE=TheTalon;48252226]Well, what IS self-consciousness? The ability to know you exist? The ability to make a decision based on information you gathered on your own? Understanding the fact that you managed to do these two things? But then how does anyone know that they or someone else understands that fact? Just because they say they do? It's all qualia. A robot can tell if something is orange because it was programmed to know what orange is, big whoop. But then again, weren't we all when we were in school? How is artificial intelligence [b]actually[/b] different from our own? We're both programmed. We can both make decisions based on information. We can both set out to work out a solution for instructions we're given. One is just narrower while the other is wider.[/QUOTE] I largely agree with this argument, but I think it's stupid when people extrapolate it to say that if you program a robot to say 'ouch' when struck, it actually feels pain and should be treated like a person. It's straightforward to design a machine to emulate human behavior, and people seem to evaluate personhood on the basis of superficial human-like characteristics. Just look at this thread: designing a robot to recognize its own synthetic voice is not a huge feat of artificial intelligence, but it's being called 'self-consciousness'. IBM's Watson has incredible emergent complexity, information association, and problem-solving skill, but people recognize it as just a computer. Give a far dumber robot a face and an artificial personality and suddenly people think it's a person.
I've always wanted to be a parent to a robot, teaching it how things work and such. Hope this would be possible in my lifetime.
[QUOTE=Ziks;48244202]The impressive part is that they didn't program it to specifically solve this problem.[/QUOTE] But how would you program something to do something it shouldn't even know how to do? Wouldn't you have to give it some reason or context, so it couldn't just come up with its own whatever? I mean, how would you program an AI that just knows the words and understands what it says, even knowing they exist and how to say them? Someone must have programmed it with all that stuff, just not everything. So to me it seems like it has a set of words it knows and functions it can do, and it just acts out how those things were meant to be used by the programming. It doesn't really seem self-aware to me, or even like it's doing stuff on its own. In the video, it seems like it knows that a dumbing pill doesn't let them speak. It hears itself, says the words it was meant to know and can say, and it knows that if it can talk, then it wasn't given the dumbing pill, because the dumbing pill doesn't allow talking. It wouldn't know these things or what they meant without being programmed and meant to be used a certain way. I think the people who made the video probably just want their work to sound better and cooler than it really is.
Like all other 'robot stories', this sounds incredibly sensationalized. I'm not skeptical that the robot passed the supposed test, but to claim it is the mark of sentience is bullshit. For one, this is not even an actual test of self-awareness but a folk story. Secondly, any robot could easily be programmed to recognize things like voice cues or do simple reasoning. The AI in a video game can be programmed to make simple judgements, for fuck's sake, but suddenly it's amazing because they made a cute robot do it. If you honestly think we are even close to the level of making sentient (or even slightly self-aware) robots, you are naive. The brain is incredibly complex, and replicating that inorganically is just too far beyond us.
[QUOTE=BananaFoam;48253316]Like all other 'robot stories', this sounds incredibly sensationalized. I'm not skeptical that the robot passed the supposed test, but to claim it is the mark of sentience is bullshit. For one, this is not even an actual test of self-awareness but a folk story. Secondly, any robot could easily be programmed to recognize things like voice cues or do simple reasoning. The AI in a video game can be programmed to make simple judgements, for fuck's sake, but suddenly it's amazing because they made a cute robot do it. If you honestly think we are even close to the level of making sentient (or even slightly self-aware) robots, you are naive. The brain is incredibly complex, and replicating that inorganically is just too far beyond us.[/QUOTE] No kidding. Even something like a master chess AI versus a master chess player is way more impressive than this, and that's been around for, I think, legit decades. The thing in the video reminds me of an overhyped robotic dog toy.