• Do You Believe in 'Life after Death'?
    681 replies
[QUOTE=bIgFaTwOrM12;44614445]Similarly, the automaton cannot state "I perceived..." in any meaningful way, at least no more than a digital camera can say "I am perceiving a face and will duly center in on it" in a meaningful way. Even if it had information stored about itself, it would simply be an awareness in the third person, as opposed to actual self awareness. So what it effectively means by "I perceived..." is "The automaton observed...". Likewise it could not actually say "...my visual sensor.", as all it could actually mean is "...the visual sensors it is equipped with".[/QUOTE] Equally, I could claim that you are deliberately restricting the language used when encoding thoughts to strip any notion of individuality from the automaton. [QUOTE=bIgFaTwOrM12;44614445]Actual awareness of self is vital; just because it can mimic a self aware being does not mean it has an actual conception of what the self is from its perspective. True intentionality would be another factor as well, given that it cannot actually reference another object; it can only reference a chunk of information that it has stored within itself.[/QUOTE] What is "actual awareness", some property you intuitively feel humans must possess but an automaton cannot? How does an entity with "actual awareness" differ in functionality from one that only has the awareness I described in my automaton? If they both have the scope to produce the same behaviour, how can we be sure that "actual awareness" is a distinct kind of awareness, rather than just a label we attach to our own experience to make us as humans feel special? [editline]22nd April 2014[/editline] The way that we think and the scope of our thoughts depend a great deal on the language used to encode those thoughts. If you restrict the language of a sentient entity so that it cannot express individuality or self-reflective concepts, then those kinds of thoughts are out of its reach. If you afford it a language that lets it describe and extrapolate from its own state, I would claim that is equivalent to self awareness.
[QUOTE=Ziks;44614594]Equally, I could claim that you are deliberately restricting the language used when encoding thoughts to strip any notion of individuality from the automaton.[/QUOTE] I am deliberately stripping any illusion of individuality in the language used. Instead of anthropomorphising the "thoughts" of the automaton, I am shaving them down to exactly how its "mind" works as described by you. After all, a strictly natural "mind" would have thoughts that, at their core, perfectly resemble the mechanisms conceiving of them. [QUOTE]What is "actual awareness", some property you intuitively feel humans must possess but an automaton cannot? How does an entity with "actual awareness" differ in functionality from one that only has the awareness I described in my automaton? If they both have the scope to produce the same behaviour, how can we be sure that "actual awareness" is a distinct kind of awareness, rather than just a label we attach to our own experience to make us as humans feel special?[/QUOTE] Actual awareness of self* is what the automaton lacks, even if one could construct it so that it appears to be self-aware. It can know about the automaton, what it is doing and what attributes it has, but there is no mechanism allowing it to view it as a self; instead it is simply another configuration of information referred to in the third person. Where in the workings of the automaton's brain can you see the information of "I" as a distinct and enduring entity while it is storing information about stimuli? Instead of itself seeing something, things are simply being seen, as that is all its brain allows it to think. [QUOTE]The way that we think and the scope of our thoughts depend a great deal on the language used to encode those thoughts. If you restrict the language of a sentient entity so that it cannot express individuality or self-reflective concepts, then those kinds of thoughts are out of its reach. If you afford it a language that lets it describe and extrapolate from its own state, I would claim that is equivalent to self awareness.[/QUOTE] I agree that language has an effect on the scope of our thoughts, but not on the way in which we think. A person will always feel that they themselves are in pain when experiencing pain, or that they are experiencing pleasure when experiencing pleasure. Even if language deprives them of the ability to conceive of those states of mind in words, they have the states of mind nonetheless. Those states being that they themselves are experiencing something; nobody has to actually process the thought "I am in pain" to know that they themselves are in pain.
[QUOTE=bIgFaTwOrM12;44615138]Actual awareness of self* is what the automaton lacks, even if one could construct it so that it appears to be self-aware. It can know about the automaton, what it is doing and what attributes it has, but there is no mechanism allowing it to view it as a self; instead it is simply another configuration of information referred to in the third person. Where in the workings of the automaton's brain can you see the information of "I" as a distinct and enduring entity while it is storing information about stimuli? Instead of itself seeing something, things are simply being seen, as that is all its brain allows it to think.[/QUOTE] How is that objectively different to our own experience? My internal representation of myself is just another model of an entity in my mind, albeit one I am culturally trained to use language such as "self" to refer to. My neural network reinforces behaviour that is expected to benefit this representation of myself through positive emotional feedback, and discourages behaviours that are projected to harm that internal model through negative emotional feedback. What else is there, objectively speaking? [QUOTE]I agree that language has an effect on the scope of our thoughts, but not on the way in which we think. A person will always feel that they themselves are in pain when experiencing pain, or that they are experiencing pleasure when experiencing pleasure. Even if language deprives them of the ability to conceive of those states of mind in words, they have the states of mind nonetheless.[/QUOTE] I guess I'm claiming that if they couldn't express an internal model of themselves as an entity using internally stored sequences of some language, they would not be self aware. [QUOTE]Those states being that they themselves are experiencing something; nobody has to actually process the thought "I am in pain" to know that they themselves are in pain.[/QUOTE] I think they do. Obviously not that literal thought of "I am in pain", but they need to be able to express an awareness of themselves as an entity; otherwise it is just the more reactive aspects of their central nervous system responding with learnt behaviours to minimise the stimulation of pain receptors. Does a flatworm know when it is in pain? It doesn't (as far as we know) maintain an internal model of itself to attach attributes to, so I don't think it does.
[QUOTE=Ziks;44615327]How is that objectively different to our own experience? My internal representation of myself is just another model of an entity in my mind, albeit one I am culturally trained to use language such as "self" to refer to. My neural network reinforces behaviour that is expected to benefit this representation of myself through positive emotional feedback, and discourages behaviours that are projected to harm that internal model through negative emotional feedback. What else is there, objectively speaking?[/QUOTE] It's objectively different in that the automaton can only refer to "itself" in the third person and not as a specific entity, but purely as information. Self awareness allows us to conceive of a distinct and enduring self that is wholly different in perspective from everything else, essentially in first person. Simply having information about what you are is not enough to bring about this perspective, as it is still essentially just another concept to be referred to. So in the automaton's brain it references itself like it would a table or any other automaton: it is aware of its attributes, but not of any possession of them or the presence of a self within them. It also cannot actually reference anything external to itself; it can only "think" the stored data labelled "chair", it cannot think about a chair as any kind of entity beyond that. None of this at all resembles how we think. [QUOTE]I guess I'm claiming that if they couldn't express an internal model of themselves as an entity using internally stored sequences of some language, they would not be self aware.[/QUOTE] How does language affect whether someone is aware of a distinct self? Whether they can express that awareness to others is inconsequential. [QUOTE]I think they do. Obviously not that literal thought of "I am in pain", but they need to be able to express an awareness of themselves as an entity; otherwise it is just the more reactive aspects of their central nervous system responding with learnt behaviours to minimise the stimulation of pain receptors. Does a flatworm know when it is in pain? It doesn't (as far as we know) maintain an internal model of itself to attach attributes to, so I don't think it does.[/QUOTE] What do you mean by "express"? Do you mean that only an entity that can formulate the word self (or others like it) and understand its meaning is actually self aware? Or is it simply knowledge of the word that makes one self aware? Is there a self in all things, but they are simply unable to express it? Also, I would agree that a flatworm is not self aware, but because it inherently lacks an enduring self, as opposed to merely lacking the ability to express one.
[QUOTE=bIgFaTwOrM12;44615897]It's objectively different in that the automaton can only refer to "itself" in the third person and not as a specific entity, but purely as information.[/QUOTE] Is that not what we do also? [QUOTE]Self awareness allows us to conceive of a distinct and enduring self that is wholly different in perspective from everything else, essentially in first person.[/QUOTE] Different because we use different language to describe it, and because it is tied more closely to our behaviour evaluation mechanisms than other internally modelled entities? Or is it different in some other objective way that cannot be expressed purely as information? [QUOTE]Simply having information about what you are is not enough to bring about this perspective, as it is still essentially just another concept to be referred to.[/QUOTE] Why is it not enough? What special distinction do I have for the internal model of myself that transcends what is possible with pure information? [QUOTE]So in the automaton's brain it references itself like it would a table or any other automaton: it is aware of its attributes, but not of any possession of them or the presence of a self within them.[/QUOTE] Why can't it be aware of a possession of its own body? If it has pattern-matched an awareness that it receives the percepts its camera captures, experiences negative stimuli when its chassis is damaged, and is able to directly manipulate its end effectors, then surely, given the definition of "possession", it would be able to match that concept to itself and its body. What else is required? [QUOTE]It also cannot actually reference anything external to itself; it can only "think" the stored data labelled "chair", it cannot think about a chair as any kind of entity beyond that. None of this at all resembles how we think.[/QUOTE] It would think "chair", then pattern match the previously learnt attributes of a chair to its internal representation of the object, then evaluate to see if the object can satisfy any of its internal intentions like a desire to rest, and so on. What more do we do? [QUOTE]How does language affect whether someone is aware of a distinct self? Whether they can express that awareness to others is inconsequential.[/QUOTE] It's not about being able to express the awareness to another, but about being able to express that awareness to itself internally. Also, make sure you don't confuse "language" with a natural language like English. By language I mean any set of symbols that represent concepts. [QUOTE]What do you mean by "express"? Do you mean that only an entity that can formulate the word self (or others like it) and understand its meaning is actually self aware? Or is it simply knowledge of the word that makes one self aware? Is there a self in all things, but they are simply unable to express it?[/QUOTE] Not quite. I mean they need to be able to represent an internal model of themselves in some symbolic language, and also encode the relationships between actions they can perform and how they are expected to influence their physical self. [QUOTE]Also, I would agree that a flatworm is not self aware, but because it inherently lacks an enduring self, as opposed to merely lacking the ability to express one.[/QUOTE] I suppose that because my model of consciousness is purely information based, expression (or symbolic representation) is the only kind of existence within a mind.
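To make that concrete, here's a minimal sketch of the recognise-then-evaluate loop I'm describing (Python, purely illustrative; every concept, attribute and intention here is a hand-written stand-in for what a real system would have to learn):
[code]
# Illustrative only: match a percept against learnt concepts, then check
# whether the recognised object can satisfy any current intention.

CONCEPTS = {
    "chair": {"flat_surface", "raised", "supports_weight"},
    "table": {"flat_surface", "raised", "large_surface"},
}

INTENTIONS = {"rest": {"supports_weight"}}  # e.g. a desire to rest

def classify(observed):
    """Return the concept whose learnt attributes best overlap the percept."""
    return max(CONCEPTS, key=lambda c: len(CONCEPTS[c] & observed))

def useful_for(observed):
    """List the intentions this object could satisfy."""
    return [i for i, needs in INTENTIONS.items() if needs <= observed]

percept = {"flat_surface", "raised", "supports_weight"}  # what the camera reports
print(classify(percept))    # -> chair
print(useful_for(percept))  # -> ['rest']
[/code]
A lookup table of hand-written attributes is obviously nothing like a learnt neural network, but the point is that "recognise a chair, then check it against your intentions" decomposes into unmysterious operations on stored information.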
I haven't read the thread yet, because it's near midnight and I'm tired, so I apologise if I'm butting in to any discussion. Here's my view: I want to believe in life after death, but there's no strong evidence to support its existence. Thus my logical and scientific side debates with my emotional and 'human' side. What I also want to believe is that whatever form of life after death we believe in (be it a heaven, or reincarnation, or something else) is what happens to us. For example, one person may believe in a heaven, so when they die, they will go to heaven; whereas another person may believe in reincarnation, and thus when they pop their clogs, they will be reincarnated. These are, however, things that I only WANT to believe in.
[QUOTE=Ziks;44616516]Is that not what we do also?[/QUOTE] I certainly don't. When I am eating I do not think "this individual is eating"; rather, I think "I am eating". Of course I am not thinking those specific words; I am simply using them to describe how I think about the experience. I see myself from a wholly different perspective than the things I interact with, and though I can't be sure, I have the suspicion that others do too. [QUOTE]Different because we use different language to describe it, and because it is tied more closely to our behaviour evaluation mechanisms than other internally modelled entities? Or is it different in some other objective way that cannot be expressed purely as information? Why is it not enough? What special distinction do I have for the internal model of myself that transcends what is possible with pure information?[/QUOTE] It is different because it is from a perspective that stored information cannot convey; you can't really see things from a first person perspective if your thoughts are limited to the retrieval and storage of information. So I suppose the latter: the physical process of retaining information does not allow for anything other than a third person perspective of things. [QUOTE]Why can't it be aware of a possession of its own body? If it has pattern-matched an awareness that it receives the percepts its camera captures, experiences negative stimuli when its chassis is damaged, and is able to directly manipulate its end effectors, then surely, given the definition of "possession", it would be able to match that concept to itself and its body. What else is required?[/QUOTE] The fact that it could have no conception of what a self is. In the automaton's brain, it is just another entity; its perspective of "itself" is in the third person as well. If it were to experience a negative stimulus it would not think "I am experiencing negative stimulus"; it would think "the automaton is experiencing negative stimulus" (of course it would have to be stimulus defined as negative rather than just negative stimulus, as nothing is objectively negative in the naturalistic sense). Also it would be aware of the automaton, though how could it differentiate the automaton from its body if the automaton is the body? What self is there to possess the body then? The automaton is just the automaton. [QUOTE]It would think "chair", then pattern match the previously learnt attributes of a chair to its internal representation of the object, then evaluate to see if the object can satisfy any of its internal intentions like a desire to rest, and so on. What more do we do?[/QUOTE] So you do not regard the chair as a separate thing? When you see it, do you think of information defined as "chair" rather than just a chair that's sitting there? All I can say is that I certainly don't think like that; when I see a chair I think of an actual chair sitting in front of me. [QUOTE]It's not about being able to express the awareness to another, but about being able to express that awareness to itself internally. Also, make sure you don't confuse "language" with a natural language like English. By language I mean any set of symbols that represent concepts. Not quite. I mean they need to be able to represent an internal model of themselves in some symbolic language, and also encode the relationships between actions they can perform and how they are expected to influence their physical self. 
I suppose that because my model of consciousness is purely information based, expression (or symbolic representation) is the only kind of existence within a mind.[/QUOTE] So you think some kind of sensory syntax is required in order for an entity to be self-aware, yet you also state that they need the ability to conceive of a self in the first place? What would this ability be if a purely natural mind cannot conceive of anything beyond the third person information that it contains? Is it possible it is not the language that makes them self-aware, but the presence of a concept of self that can potentially be expressed through that language?
[QUOTE=bIgFaTwOrM12;44622872]I certainly don't. When I am eating I do not think "this individual is eating"; rather, I think "I am eating".[/QUOTE] Sure, you have a specially reserved symbol for referring to the entity that houses and is operated by your mind, the word "I". Semantics. [QUOTE]Of course I am not thinking those specific words; I am simply using them to describe how I think about the experience. I see myself from a wholly different perspective than the things I interact with, and though I can't be sure, I have the suspicion that others do too.[/QUOTE] Of course, so why can't that special distinction be represented purely through information? What are your reasons for assuming it can't be, because you can't intuitively imagine how it would work? [QUOTE]It is different because it is from a perspective that stored information cannot convey; you can't really see things from a first person perspective if your thoughts are limited to the retrieval and storage of information. So I suppose the latter: the physical process of retaining information does not allow for anything other than a third person perspective of things.[/QUOTE] But this is exactly the point we are debating; you can't just assert it as being undeniably true as part of your argument. Why is this true? Please explain it to me. [QUOTE]The fact that it could have no conception of what a self is. In the automaton's brain, it is just another entity; its perspective of "itself" is in the third person as well.[/QUOTE] Why must this be the case? What is the difference between concepts being "truly" first person, and concepts stored using symbols reserved for first person with associated relations specific to being first person? [QUOTE]If it were to experience a negative stimulus it would not think "I am experiencing negative stimulus"; it would think "the automaton is experiencing negative stimulus" (of course it would have to be stimulus defined as negative rather than just negative stimulus, as nothing is objectively negative in the naturalistic sense).[/QUOTE] For the automaton a negative stimulus is one that discourages the behaviour that triggered it using something like backpropagation. Just like for biological neural networks, where a negative stimulus is one that causes the emission of neurotransmitters that modify synaptic connections, in a similar way to the ANN, to discourage the behaviour that triggered it. Again, you are deliberately using a restricted language for the automaton's thoughts to dispel any notion of first person awareness. [QUOTE]Also it would be aware of the automaton, though how could it differentiate the automaton from its body if the automaton is the body? What self is there to possess the body then? The automaton is just the automaton.[/QUOTE] So it would believe itself to be its body? I'm not sure where you are going with this. [QUOTE]So you do not regard the chair as a separate thing? When you see it, do you think of information defined as "chair" rather than just a chair that's sitting there? All I can say is that I certainly don't think like that; when I see a chair I think of an actual chair sitting in front of me.[/QUOTE] I hope you aren't deliberately misinterpreting what I am trying to describe whenever I don't explicitly include every single detail. Were you unable to imagine that it would also record concepts representing the chair's position in its environment? 
[QUOTE]So you think some kind of sensory syntax is required in order for an entity to be self-aware, yet you also state that they need the ability to conceive of a self in the first place?[/QUOTE] Almost, I think sensory syntax is required for an entity to be aware of its environment, and self awareness is when the entity includes an internal representation of itself. [QUOTE]What would this ability be if a purely natural mind cannot conceive of anything beyond the third person information that it contains?[/QUOTE] Why can't information encode first person concepts? [QUOTE]Is it possible it is not the language that makes them self-aware, but the presence of a concept of self that can potentially be expressed through that language?[/QUOTE] That's pretty much what I've been saying, where that concept of self is an internal representation encoded in information. I would say that's the bare minimum for self awareness, although simply possessing an internal representation of the self doesn't automatically make them as sophisticated as us. I see sentience and self-awareness as a subjective sliding scale of sophistication, where simple reactive entities like protozoa and assembly line robots are at one end; things like mice, and AI using ANNs that encode internal models of themselves, are a bit further up; and we're further up still, though without employing the hubris to assume that we are at the other end of the scale with no scope for improvement. It doesn't seem right to think of self awareness as a binary "that thing isn't self aware" and "that thing is self aware", because it's quite a subjective concept to deal with, and so the existence of a hard universal threshold seems unlikely. It's more helpful to think of the degree to which something is aware of itself and its environment. Would you say the issue here is that you believe your intuitive sense of being a conscious being that actually experiences things, rather than a soulless automaton, is something that cannot be implemented through the processing and generation of information in a specific way?
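To be explicit about what I mean by "an internal representation of itself" encoded purely as information, here's a minimal sketch (Python, every name in it is hypothetical): the "self" entry is structurally just another modelled entity, but it is the one entry whose state is updated directly by the entity's own sensors and whose listed actions it can actually execute.
[code]
# Minimal sketch of a self-model as plain information. Everything here is a
# hypothetical stand-in; a real system would learn these structures rather
# than have them hand-written.

world_model = {
    "chair_1": {"kind": "chair", "position": (2, 0)},
    "self": {
        "kind": "automaton",
        "position": (0, 0),
        "damage": 0.0,
        # Relations no other entry has: sensor data updates this entry
        # directly, and these actions can actually be executed.
        "percepts_arrive_here": True,
        "actions": ["move", "grasp"],
    },
}

def on_chassis_damage(amount):
    # The damage sensor writes straight into the "self" entry, not into a
    # model of some externally observed object. That asymmetry is what
    # reserves this symbol for the first person.
    world_model["self"]["damage"] += amount

on_chassis_damage(0.2)
print(world_model["self"]["damage"])  # -> 0.2
[/code]
The "self" symbol differs from the "chair_1" symbol not through some extra metaphysical ingredient, but through which relations attach to it; that is the distinction I have been trying to describe.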
Something critical to consider is that the only things you are aware of having experienced are the things that have been stored in your memories (by definition). Memories are information, so experiences must be encoded as information. You remember experiencing a first person awareness of being a conscious being? That experience has been stored as information in a memory that you are currently recalling.
Sorry for the sudden halt, just finished with my last final exam. [QUOTE=Ziks;44623995]Sure, you have a specially reserved symbol for referring to the entity that houses and is operated by your mind, the word "I". Semantics.[/QUOTE] That word is not simply placed on top of a third person perspective of the individual that is me, though; I use the word to represent the inherent self that I am. [QUOTE]Of course, so why can't that special distinction be represented purely through information? What are your reasons for assuming it can't be, because you can't intuitively imagine how it would work? But this is exactly the point we are debating; you can't just assert it as being undeniably true as part of your argument. Why is this true? Please explain it to me. Why must this be the case? What is the difference between concepts being "truly" first person, and concepts stored using symbols reserved for first person with associated relations specific to being first person?[/QUOTE] Well, the point we're debating is whether natural processes can sufficiently explain the mind. However, if you disagree with my claim that information can only represent things from a third person perspective, I am not sure how else it could represent things. Even if the automaton had information stored about itself labelled "I" or "self", the information would still be referenced as any other object would be. Also, what do you mean by "concepts stored using symbols reserved for first person with associated relations specific to being first person"? How does the automaton reserve something for first person, or associate relations specific to being first person? How could any information about itself be anything else but a neutral list of attributes of an entity at its core? [QUOTE]For the automaton a negative stimulus is one that discourages the behaviour that triggered it using something like backpropagation. Just like for biological neural networks, where a negative stimulus is one that causes the emission of neurotransmitters that modify synaptic connections, in a similar way to the ANN, to discourage the behaviour that triggered it. Again, you are deliberately using a restricted language for the automaton's thoughts to dispel any notion of first person awareness.[/QUOTE] My mistake then; I was initially inclined to believe you meant unpleasant stimulus by negative, but now I see that you were using it in a different way. Also yes, I am deliberately restricting the language by which you depict the automaton's thoughts, because you anthropomorphize them without explanation. If the automaton's thoughts are identical to how its brain works, then the semantics used to represent the underlying concepts are meaningless to it. I am simply giving a clear depiction of the underlying concepts being dealt with in its thought process. [QUOTE]So it would believe itself to be its body? I'm not sure where you are going with this.[/QUOTE] Perhaps nitpicking a bit: you mentioned the automaton being told what possession was and that it possessed its body, when in actuality the automaton is the body and is not really in possession of it, as that would suggest some self beyond the body. So there is no way it could actually come to the conclusion that it possessed its body if it was its body. While splitting hairs a bit, I suppose, it still shows that you're unnecessarily anthropomorphising the automaton's perspective. 
[QUOTE]I hope you aren't deliberately misinterpreting what I am trying to describe whenever I don't explicitly include every single detail. Were you unable to imagine that it would also record concepts representing the chair's position in its environment?[/QUOTE] Oh, that wasn't my intention; my point was that if you see a chair, from your perspective you see an actual chair that is real. You do not think of information labelled as chair in your brain; you refer to another entity in reality. [QUOTE]Almost, I think sensory syntax is required for an entity to be aware of its environment, and self awareness is when the entity includes an internal representation of itself. That's pretty much what I've been saying, where that concept of self is an internal representation encoded in information.[/QUOTE] How can this internal representation be anything but a neutral list of attributes though? Where's the first person perspective to be found? [QUOTE]It doesn't seem right to think of self awareness as a binary "that thing isn't self aware" and "that thing is self aware", because it's quite a subjective concept to deal with, and so the existence of a hard universal threshold seems unlikely. It's more helpful to think of the degree to which something is aware of itself and its environment.[/QUOTE] Well, that would make sense if the mind is purely natural and an emergent feature of a very complex brain. I think that there is something far more inherent about the brain than just physical complexity, though; I don't think the self is a subjective concept in the sense that everyone has a different conception of what the self is (that's not to say that everyone doesn't have a different conception of who they are, though). [QUOTE]Would you say the issue here is that you believe your intuitive sense of being a conscious being that actually experiences things, rather than a soulless automaton, is something that cannot be implemented through the processing and generation of information in a specific way?[/QUOTE] My main issue with the monist perspective is that my thoughts don't seem to reflect pure data collection, storage and synthesis. There is a perspective that I have on things that is beyond that, which is why I keep bringing up first and third person perspectives. The presence of intentionality is unexplained by the natural mind as well; I can refer to actual existing objects as opposed to just concepts in my brain. [QUOTE=Ziks;44626264]Something critical to consider is that the only things you are aware of having experienced are the things that have been stored in your memories (by definition). Memories are information, so experiences must be encoded as information. You remember experiencing a first person awareness of being a conscious being? That experience has been stored as information in a memory that you are currently recalling.[/QUOTE] The key point here is that I am aware of having experienced the things in my memory. Memory is essentially just a collection of information received from the sensory organs, arranged into coherent "images" (which include visual, audio, tactile, gustatory, kinesthetic, etc. senses). Where is the perspective in an image of a dog, or the sound of a dog barking? How can one really remember the pleasure of a stimulus? You don't remember states of mind; you remember what you perceived, and that brings about the state of mind. Memories as information are completely third person by nature.
It sounds like you're almost there, with just one hurdle left to surmount. I've got a report to finish, but in the meantime could you answer this question so I can think about how to approach the rest of my reply. Do you believe it is possible to, in principle, construct an automaton that would appear to an external observer to behave in a way indistinguishable from a human, albeit without the existence of any "true" internal experience or soul of the kind we possess? Here, "in principle" means if we had unlimited computational power and insight into how to manipulate the natural world. Basically, is a p-zombie possible? If not, could you explain why?
If I am completely honest, I would like to believe what the OP states. If our universe could come from nothing, then what is to say it isn't possible that our consciousness transcends death and allows us to continue on as a spirit? No one can ever know what it is truly like to die, as they would have to be brought back to life after the "7 minute mark", which is the supposed point of no return from death.
[QUOTE=Satane;44640672]The universe didn't come from nothing because that would make nothingness a thing.[/QUOTE] Where do you think reality itself came from then?
[QUOTE='[GG]Super;44641049']Where do you think reality itself came from then?[/QUOTE] That's a meaningless question, because time doesn't exist outside of reality. You can only consider things like cause in a system that experiences time, but time is defined within reality. Reality itself is a static structure that just exists, with no before or after.
[QUOTE=Ziks;44640424]Do you believe it is possible to, in principle, construct an automaton that would appear to an external observer to behave in a way indistinguishable from a human, albeit without the existence of any "true" internal experience or soul of the kind we possess?[/QUOTE] Certainly, I suppose it would be possible to create a purely natural entity that would behave in ways indistinguishable from a human. We naturally anthropomorphize the behavior of things that do not even resemble humans, so I don't think the idea is very far-fetched.
[QUOTE=bIgFaTwOrM12;44641735]Certainly, I suppose it would be possible to create a purely natural entity that would behave in ways indistinguishable from a human. We naturally anthropomorphize the behavior of things that do not even resemble humans, so I don't think the idea is very far-fetched.[/QUOTE] Excellent, this should save us some time. I'll produce a full response to your previous post when I've got a decent amount of my report written, but in the meantime I'll probably be unable to resist posing some short questions to consider / responses to your own during breaks from work. How would the entity respond if you asked it to describe its experience?
[QUOTE=Ziks;44641942]Excellent, this should save us some time. I'll produce a full response to your previous post when I've got a decent amount of my report written, but in the meantime I'll probably be unable to resist posing some short questions to consider / responses to your own during breaks from work. How would the entity respond if you asked it to describe its experience?[/QUOTE] In a way that is indistinguishable from a human, I suppose; it would have been designed to label what a human would consider unpleasant as "unpleasant" etc. Then it would use language that a human would use to convey this label so that it appears to be describing an experience. Of course all while applying first person pronouns to the entity that contains all this information.
[QUOTE=Ziks;44641942]Excellent, this should save us some time. I'll produce a full response to your previous post when I've got a decent amount of my report written, but in the meantime I'll probably be unable to resist posing some short questions to consider / responses to your own during breaks from work. How would the entity respond if you asked it to describe its experience?[/QUOTE] By lying. [editline]25th April 2014[/editline] Counter question: would it be possible, given the omnipotence you allowed in your question, to create a static representation of a conscious individual? That is, everything that individual could possibly do or experience could be described by reference to this static manual.
[QUOTE=bIgFaTwOrM12;44642109]In a way that is indistinguishable from a human, I suppose; it would have been designed to label what a human would consider unpleasant as "unpleasant" etc. Then it would use language that a human would use to convey this label so that it appears to be describing an experience. Of course all while applying first person pronouns to the entity that contains all this information.[/QUOTE] Pretty much, although (obviously depending on the implementation) some things wouldn't need to be hardcoded to focus on emulating external human behaviour and would be purely emergent from how the system itself works. This is too much fun to think and write about; I'll try to force myself to stay away from this thread until I have a sufficient amount of work done, but in the meantime... Unpleasant experiences are a good example. Let's imagine the implementation used a thought pool system where a set of entities and relations are stored in memory representing concepts and how they can be classified and affect each other. The system would continually take existing stored concepts from the thought pool and use an artificial neural network to piece together new relations between those concepts based on previously learnt heuristics, evaluate those relations by looking for correlations with internal models or by testing them against the external environment, then store the ones that are verified as being coherent to some degree back in the thought pool and update the neural network's weightings accordingly based on success or failure. Essentially you have a pattern matching system that continually applies learnt patterns to existing concepts to extrapolate new ones, while also trying out new patterns and seeing how useful they are. I don't think it's too difficult to imagine how a successful human impersonation automaton could in principle be built on top of this framework (although the computational power required for the rapid concept generation / storage space for the vast thought pool is out of our reach for the time being). Sounds it perceives would be pattern matched into syllables, those syllables pattern matched into words, and words pattern matched into sentences based on syntactic and semantic rules, also pattern matching the subject and intention of each sentence. Okay, so let's think about how externally triggered unpleasant experiences would work. Let's give the automaton a pain receptor that triggers a back-propagation mechanism in the neural network to affect the weightings of whatever neurons were recently active. You could then teach the automaton to avoid certain behaviours by exciting the pain receptor whenever it performs the action you wish to discourage. The neurons that led to the behaviour would have been recently activated, so the weights of their connections to other neurons are weakened and will require additional effort from other neurons in the network to trigger again. It would be more difficult for the automaton to repeat the behaviour. This is a pretty advanced automaton that can pattern match semantic meaning from questions you ask it, and can pattern match what information is required by that question. Your question is "What does an unpleasant experience feel like?" As you quite rightly said, it has learnt to pattern match what a human would call an unpleasant experience to types of percepts it receives. Which percepts? 
An effective definition the neural network may stumble upon would be percepts that follow a behaviour that it then finds harder to repeat. It would then pattern match percepts that correlate with its pain receptor being triggered as being unpleasant, and so (after learning the definition of pain as percepts received by triggering that receptor) would classify pain as unpleasant. So we've asked it to describe unpleasant experiences. Unless it's learnt how its internal mechanisms function in detail, the best it can do is give synonyms for "unpleasant" because it doesn't know why it has a special set of relations for percepts that correlate to behaviour discouragement. These relations sit at a far lower level than the language processing pattern matching system, and so are out of scope for what it has the knowledge to formulate sentences about. So we can expect to get replies like "Things that are bad", or "Things that I don't like" (having previously learnt that humans expect it to use the word "I" to refer to itself). What would you expect as an answer if you asked a human that didn't know much about neuroscience? Most would be stuck with synonyms too, because unpleasantness is indescribable without relying on the mechanisms that cause it. You can extrapolate similar responses and their causes for things like pleasure, warmth, sharpness and so on (assuming the automaton had the receptors for the appropriate percepts). When asked to describe what it is like to experience those things (as in receive the associated percepts) it would fail to say anything useful, because the implementation of the relations that define those percepts is at a lower level than the knowledge it can vocalise. It would be able to categorise things into whether they were pleasant or unpleasant, sharp or soft, warm or cold, but wouldn't be able to express how it knew without comparison to other things with those respective labels. The same is true for us. [QUOTE=Zenreon117;44642327]By lying.[/QUOTE] Useful reply. [QUOTE]Counter question: would it be possible, given the omnipotence you allowed in your question, to create a static representation of a conscious individual? That is, everything that individual could possibly do or experience could be described by reference to this static manual.[/QUOTE] Kind of Chinese Room style? It's obvious that John Searle isn't a computer scientist. The answer is no unless one of two criteria is met: either the manual is incredibly large (exceeding the size of the universe), or the instructions in the manual allow for a decent amount of working memory to be used. The second scenario violates the requirement for the system to be completely static in terms of information processing, so it would probably not qualify. For the first scenario to be completely static, it would essentially require a lookup table of all possible input strings matched to responses. For the system to be a convincing human analogue it must be able to, for example, read a book and express its opinion of it. Let's say we give the system a short novel of around 800,000 bits of information as our input stream. The system must be able to distinguish between all possible inputs of that length, and so requires at least 2^800,000 entries in its lookup table. The observable universe is estimated to contain around 2^400 bits of information[URL="http://www.nature.com/nature/journal/v406/n6799/full/4061047a0.html"][1][/URL], which is pathetically minuscule compared to what is required. 
You would need to replace each bit in the universe with another universe, and then repeat that process 2000 times with all the new resultant bits, to provide enough capacity to store your lookup table. This is maybe a little infeasible. Note that consciousness isn't special in this regard; you would need the same space to encode a program that just outputs the number of times the word "poops" appears in the input. But we're talking omnipotence here, so I guess this is allowed. So now we have a stupendous lookup table that encodes the response to each possible conversation you may have with the system. Is it conscious? Nah, I wouldn't say so. Whatever entity constructed the lookup table would need to be conscious (or at least impersonate consciousness) in order to accurately encode each response, but after that it just statically encodes information without encoding the processes that went into generating the information (which I think we can all agree is where consciousness resides, natural or not). The alternative is to allow whatever process uses the static manual to store some working memory. What you have now is a computer, as it is isomorphic to a Turing machine. Me being a filthy naturalist, I would of course claim that you could in principle construct a set of instructions for the manual that implement the required information manipulation processes for consciousness. I would say that the system, in execution, would subjectively experience consciousness given the right set of instructions. I really need to get my head down and finish some work now, sorry!
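Actually, before I go: if anyone wants to sanity-check the lookup-table numbers, the arithmetic is only a couple of lines (both figures are the rough estimates quoted above, not precise measurements):
[code]
# Sanity check of the lookup-table argument. Both inputs are the rough
# estimates quoted in this post, not precise measurements.
novel_bits = 800_000       # assumed information content of a short novel
universe_log2_bits = 400   # log2 of the estimated bits in the observable universe

# A static table must distinguish every possible input, so it needs
# 2**novel_bits entries; the universe holds about 2**universe_log2_bits bits.
print(novel_bits - universe_log2_bits)   # -> 799600 doublings short
print(novel_bits // universe_log2_bits)  # -> 2000 rounds of "replace every bit with a universe"
[/code]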
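And one last sketch, tying the pain receptor from earlier in this post to something runnable. This is a toy (one "neuron" per behaviour and no real backpropagation), but it shows how "pain" can amount to nothing more than recently active pathways becoming harder to trigger:
[code]
import random

# Toy version of the discouragement mechanism described above: behaviours
# are chosen in proportion to their weights, and the pain receptor weakens
# whichever pathway just fired. A real system would backpropagate through
# a full network instead.

weights = {"touch_flame": 1.0, "avoid_flame": 1.0}

def act():
    behaviours, w = zip(*weights.items())
    return random.choices(behaviours, weights=w)[0]

def pain(recent_behaviour, strength=0.5):
    # Weaken whatever was active when the receptor fired.
    weights[recent_behaviour] = max(0.01, weights[recent_behaviour] - strength)

for _ in range(20):
    behaviour = act()
    if behaviour == "touch_flame":
        pain(behaviour)  # the receptor fires; this pathway gets weaker

print(weights)  # "touch_flame" will usually end up far below "avoid_flame"
[/code]
Nothing in that loop needs to "feel" anything, yet an external observer would happily describe the automaton as having learnt that flames are unpleasant.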
Perhaps much of the complexity of memories that makes them seem so personal, and so impossible to be created by a computer or in a purely natural world, is that memories are alterable and entirely fallible. Our ability to know an object and think abstract thoughts is great and all, but let's talk about the biochemistry of memory. We know how memory works well enough to understand several things about it. We know that every time you remember something, you're using a network of chemicals to bring that stored set of information to the forefront. Exposing the actual information to this chemical process has a chance of altering it; we know this. So our memory is fallible, and in a distinctly individual way. No doubt there's a chance for this to affect our very perceptions of the world and the concept of self. Self-alteration of memory that we neither control nor desire surely has an effect on us. We know that our memory of actual objects or ideas, or even places that we've been, is affected by the time since that memory was formed. Our memories are terribly inaccurate, and we can show that many of the assumptions and blanks in our memories are filled in by the brain and its other similar, related memories. We know that memory can be played with and tricked all too easily; we know people's memory is highly malleable. Witness testimony is considered a terrible basis for criminal accusations because we know how easy it is for those memories to alter every time they're remembered, and for this process to repeat. We know that suggestion during memory recall can also change memory. The brain even creates entirely false memories, things that never happened but the person KNOWS they did, even though it's simply not true; it's a false memory. It's not uncommon, and it doesn't require a form of psychosis. It's just a thing that can apparently happen with some potentially faulty biochemical computers. We create false memories. I'd like to make a bit of a step with my next statement; what I've said so far is understood by neuroscientists and psychologists working in the field to be true enough for the time being (sure, we'll learn more, we always do). But I'd like to say that I think a great deal of the significant feeling of "you", the subjective sensation of self, is created by this neural network, by yourself essentially. Unique biochemistry triggered by unique events in one's life creates unique markings and unique arrays of neurons, and simply the interaction of those neurons creates a feeling of self. Speaking of evolution, where the most effective route to a longer life is having more brain power: we already know roughly how we evolved to have larger brains, but what if, as a side effect of having more brain power, an additional element that provided us with a higher chance to live longer was the "self"? A subjective concept of the self makes one more likely to empathize with one's own condition and to try to improve it. So with all this in mind, I'd like to ask why the self can't be purely natural, and why it would necessitate a complicated element outside of it? A mind is a computer; there's no arguing that. You can argue there are elements we don't understand, but to argue there are elements we can't understand is limiting and makes the brain operate in a way that isn't consistent. 
An intelligence from a computer, whether it be a biochemical or a traditional computer creating a form of sentience, is the same; it just comes down to the complexity of the computer and the equations.
I can agree with all of that, and it seems pretty rational to me. All that is required for you to subjectively believe your consciousness is a special phenomenon that can't be described as a natural process is... for you to subjectively believe your consciousness is a special phenomenon that can't be described as a natural process. It doesn't need to [I]actually[/I] be true, and we can identify plenty of reasons why a person would naturally be led to believe it is true (emotional favorability of feeling special, requirement to align with preferred religion or philosophy, etc). I think I understand your objection to naturalism bIgFaTwOrM12, if it's anything like the reason why I used to reject it. It's the intuitive feeling of actually being a conscious being rather than a soulless automaton, why [I]I'm[/I] within this body observing the things it experiences rather than it being a mindless machine reacting to stimuli. Why am I in this mind, rather than no-one? What converted me is imagining the operations of a philosophical zombie as I described in my previous post. If it is using a learning neural network you can trace why it would respond in a way identical to a human: like the experience of unpleasantness example (or all other qualia), or its awareness of self-awareness as it pattern matches the definition of self-awareness and applies it to itself. The sequences of concepts encoded in natural language it generates and stores internally while carrying out higher level cognitive tasks look exactly like the concepts encoded in natural language [I]we[/I] generate and store internally while carrying out higher level cognitive tasks. Its responses to questions about the experience of basic qualia are identical to the responses we give about basic qualia. It is implemented using a colossal neural network, just as we have a colossal neural network in our skulls. I don't think it's too much of a leap to imagine that it's feasible to implement the philosophical zombie with 85,000,000,000 neurons, implying that perhaps the human brain would be sufficient for a philosophical zombie if you just removed the soul. But if a human brain without a soul - without a subjective first person observer - appears to an external party to behave identically to a brain with a soul... can that brain prove that it possesses a subjective first person observer? Can I prove to you that I am not a philosophical zombie? Can you prove to me that you are not a philosophical zombie? Can you prove to yourself that you are not a philosophical zombie?
[QUOTE=Ziks;44642920]Unpleasant experiences are a good example. Let's imagine the implementation used a thought pool system where a set of entities and relations are stored in memory representing concepts and how they can be classified and affect each other. The system would continually take existing stored concepts from the thought pool and use an artificial neural network to piece together new relations between those concepts based on previously learnt heuristics, evaluate those relations by looking for correlations with internal models or by testing them against the external environment, then store the ones that are verified as being coherent to some degree back in the thought pool and update the neural network's weightings accordingly based on success or failure. Essentially you have a pattern matching system that continually applies learnt patterns to existing concepts to extrapolate new ones, while also trying out new patterns and seeing how useful they are. I don't think it's too difficult to imagine how a successful human impersonation automaton could in principle be built on top of this framework (although the computational power required for the rapid concept generation / storage space for the vast thought pool is out of our reach for the time being). Sounds it perceives would be pattern matched into syllables, those syllables pattern matched into words, and words pattern matched into sentences based on syntactic and semantic rules, also pattern matching the subject and intention of each sentence. Okay, so let's think about how externally triggered unpleasant experiences would work. Let's give the automaton a pain receptor that triggers a back-propagation mechanism in the neural network to affect the weightings of whatever neurons were recently active. You could then teach the automaton to avoid certain behaviours by exciting the pain receptor whenever it performs the action you wish to discourage. The neurons that led to the behaviour would have been recently activated, so the weights of their connections to other neurons are weakened and will require additional effort from other neurons in the network to trigger again. It would be more difficult for the automaton to repeat the behaviour. This is a pretty advanced automaton that can pattern match semantic meaning from questions you ask it, and can pattern match what information is required by that question. Your question is "What does an unpleasant experience feel like?" As you quite rightly said, it has learnt to pattern match what a human would call an unpleasant experience to types of percepts it receives. Which percepts? An effective definition the neural network may stumble upon would be percepts that follow a behaviour that it then finds harder to repeat. It would then pattern match percepts that correlate with its pain receptor being triggered as being unpleasant, and so (after learning the definition of pain as percepts received by triggering that receptor) would classify pain as unpleasant. So we've asked it to describe unpleasant experiences. Unless it's learnt how its internal mechanisms function in detail, the best it can do is give synonyms for "unpleasant" because it doesn't know why it has a special set of relations for percepts that correlate to behaviour discouragement. 
Well of course the automaton does not know why it has predefined relations to certain percepts, but why shouldn't it know what those percepts are? What could make the relations impossible to express in language when you have been able to define them as such? It wouldn't need to learn how its brain works, because its thoughts would be how its brain works; as long as it was given the proper language to define the logic behind its thoughts, I don't see why it wouldn't be able to express it. Of course you could limit the automaton so that it would not speak of these processes and would stick instead to how a human would respond, I suppose, but that does not change the fact that it can still access that information.

[QUOTE]So we can expect to get replies like "Things that are bad", or "Things that I don't like" (having previously learnt that humans expect it to use the word "I" to refer to itself). What would you expect as an answer if you asked a human that didn't know much about neuroscience? Most would be stuck with synonyms too, because unpleasantness is indescribable without relying on the mechanisms that cause it.[/QUOTE]

I am not convinced that a person with a significant education in neuroscience could explain why they find it unpleasant when they are in pain either. Sure, they could talk about how a foreign object touching nerve endings embedded deep within the skin causes the neurons within it to fire off action potentials that are interpreted as pain within the brain, but they can't explain why it is that pain is unpleasant to them. I suppose if you define unpleasant as a stimulus that follows a behavior that is then made harder to repeat as a result, they could fully explain it, but the state of mind of something being unpleasant has far more meaning than that to us.

[QUOTE]You can extrapolate similar responses and their causes for things like pleasure, warmth, sharpness and so on (assuming the automaton had the receptors for the appropriate percepts). When asked to describe what it is like to experience those things (as in receive the associated percepts) it would fail to say anything useful, because the implementation of the relations that define those percepts is at a lower level than the knowledge it can vocalise. It would be able to categorise things into whether they were pleasant or unpleasant, sharp or soft, warm or cold, but wouldn't be able to express how it knew without comparison to other things with those respective labels.[/QUOTE]

How can any information in the automaton's brain be inaccessible to it? In the end it just sounds like the deeper workings of its brain would be barred from being expressed as they are in language, passing through a sort of humanizing filter before being spoken. That doesn't really make it any more similar to human experience beyond how it expresses its "thoughts".

[QUOTE]All that is required for you to subjectively believe your consciousness is a special phenomenon that can't be described as a natural process is... for you to subjectively believe your consciousness is a special phenomenon that can't be described as a natural process.
It doesn't need to [I]actually[/I] be true, and we can identify plenty of reasons why a person would naturally be led to believe it is true (the emotional favourability of feeling special, the requirement to align with a preferred religion or philosophy, etc).[/QUOTE]

I am still not convinced that natural processes can offer the genuine first person experience that I feel, or that they can allow for the intentionality that I practice. Of course I want to be something more than just natural processes, but that doesn't change the fact that I see no way my experience could be recreated as such.

[QUOTE]I think I understand your objection to naturalism bIgFaTwOrM12, if it's anything like the reason why I used to reject it. It's the intuitive feeling of actually being a conscious being rather than a soulless automaton: why [I]I'm[/I] within this body observing the things it experiences, rather than it being a mindless machine reacting to stimuli. Why am I in this mind, rather than no-one?[/QUOTE]

[QUOTE]What converted me is imagining the operations of a philosophical zombie as I described in my previous post. If it is using a learning neural network you can trace why it would respond in a way identical to a human: like the experience-of-unpleasantness example (or any other qualia), or its awareness of self-awareness as it pattern matches the definition of self-awareness and applies it to itself. The sequences of concepts encoded in natural language it generates and stores internally while carrying out higher level cognitive tasks look exactly like the concepts encoded in natural language [I]we[/I] generate and store internally while carrying out higher level cognitive tasks. Its responses to questions about the experience of basic qualia are identical to the responses we give about basic qualia.[/QUOTE]

What you are saying presumes that our subjective experience is nothing but a higher level function of the brain. I am not quite sure how one process is higher level than another, or why we should be able to access information in one part of the brain over another (assuming that the other part is still a functioning part of the brain, of course). Perhaps the reason why you are forced to create this divide is that natural cognition is not really awareness at all, so of course none of the processes that result from our brain function would be conscious thoughts; instead, our thoughts seem to be on a completely different level compared to what our brain does (from our subjective experience).

[QUOTE]It is implemented using a colossal neural network, just as we have a colossal neural network in our skulls. I don't think it's too much of a leap to imagine that it's feasible to implement the philosophical zombie with 85,000,000,000 neurons, implying that perhaps the human brain would be sufficient for a philosophical zombie if you just removed the soul. But if a human brain without a soul - without a subjective first person observer - appears to an external party to behave identically to a brain with a soul... can that brain prove that it possesses a subjective first person observer? Can I prove to you that I am not a philosophical zombie? Can you prove to me that you are not a philosophical zombie? Can you prove to yourself that you are not a philosophical zombie?[/QUOTE]

Assuming you found some way of getting a "blank" brain that was structurally equivalent to a human's and managed to turn it into the automaton (or philosophical zombie),
that would still not change the fact that it would have no real first person experience as we do. Perhaps we would not be able to distinguish it from a human, but it would still lack any sort of experience that you or I have. My own first person experience and intentionality discount the idea that I may be a philosophical zombie.
[QUOTE=bIgFaTwOrM12;44645624]My own first person experience and intentionality discount the idea that I may be a philosophical zombie.[/QUOTE]

I really can't understand how you came to this conclusion. It's just an avoidance; it's not engaging with his argument.
Sorry, I forgot to clarify that higher level cognition involves thoughts encoded in natural language, whole images and so on, while the lower level stuff is in whatever raw form of storage is used to encode atomic concepts and relations. The higher level stuff would generally have relations defined only to other higher level entities, such that new high level concept strings can only be generated from other high level concept strings. This neatly keeps the low level bare metal relations attached to individual percepts from being expressible in natural language. I obviously don't know exactly how the brain works, but you would expect this to be because the higher level language processing and generation is localised to a different part of the brain than the raw percept processing stuff, so information from the lower layer has to pass through a bunch of abstraction layers before reaching the stuff you can generate thoughts from (a rough sketch of this layering follows below).

Anyway, the thing I'm trying to suggest is that I don't believe there is a "real" first person experience; we just have the illusion of it. We're soulless automatons that use language as a scaffolding to construct more involved chains of reasoning or to model our environment, and as an unfortunate side effect we have the tendency to produce effectively meaningless statements like "Why am I inside my mind?", which reduces our efficiency somewhat.

Maybe you are right though, and you have some special type of first person experience that I do not possess. Is there anything you could ever say to prove that to me, though? You can't relate to my experiences, for I believe I am a philosophical zombie.

Could you describe the intentionality problem a bit more thoroughly?
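For what it's worth, here is the rough sketch of that layering, assuming concepts are tagged by level and the language layer can only follow relations that stay in the high level; Concept, describe and the labels are all invented for illustration:

[code]
# Toy sketch: low-level percept relations exist in storage but are
# unreachable from the layer that generates natural language.

LOW, HIGH = 0, 1

class Concept:
    def __init__(self, label, level, relations=()):
        self.label = label
        self.level = level
        self.relations = list(relations)   # links to other concepts

def describe(concept):
    """The 'language layer': it can only verbalise high-level concepts
    and only follow relations that stay in the high-level layer."""
    if concept.level != HIGH:
        raise ValueError("not expressible in natural language")
    related = [c.label for c in concept.relations if c.level == HIGH]
    return concept.label + " is like: " + (", ".join(related) or "...I can't say")

# The raw pain weighting is stored, but invisible to language; the
# high-level concept built on top of it can only name its synonyms.
raw_pain = Concept("raw-pain-weighting", LOW)
bad = Concept("bad", HIGH)
unpleasant = Concept("unpleasant", HIGH, relations=[raw_pain, bad])

print(describe(unpleasant))   # unpleasant is like: bad
[/code]

The point is only that the "synonyms-only" answers fall out of the structure: the raw pain weighting is right there in storage, but no path from the language layer ever reaches it.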
If your consciousness isn't physical and naturalistic, how is it affected and limited by the workings of the physical brain? How is it able to interact with the brain in a way that is forever outside of our understanding?
And if you need any extra convincing, look at the aspects of consciousness that are affected by all the different neurological diseases. As far as I know, there's nothing left uncorrupted by physical deformity.

As I see it, the killer is that if a human brain without a soul exhibits identical behaviour to a human brain with a soul, what effect does the soul have? What observable predictions can you make from its existence? Is there any way to prove it exists? Is there any point in assuming it exists if it only adds literally useless complexity to your ontology? Is it [i]rational[/i] to believe in a metaphysical soul?
I've made this argument before, but I felt it was dismissed without being read. If the soul exists, then what role does it play? If it plays the role of being the key "youness" of who you are, then it cannot be ethereal, and is affected by the physical world. I argue that it is physical, or at least alterable by the physical world, as we know that a damaged brain presents a radically different personality in many, if not all cases. If it is ethereal, this change could not occur, because it would not be affected by the physical change in the brain. If it is affected by the physical realm, then it can be measured in some sense, or discovered at some point in time.

If it is supernatural, it must surely be affecting the brain in a measurable way; in some sense its effects on the brain must be capable of being measured. If it is capable of affecting us while being ethereal and undetectable, then what is the brain damage? What role does it play in relation to this soul and its aspect of "you"? Does the brain damage affect the soul, or does the soul ignore the brain damage but still maintain its ability to be "us"? Do you go to heaven with that brain damage?

If it is not the key "youness" that you experience, what does it do? Why should we believe in it if it does nothing and is invisible? Isn't that just nonexistent for all intents and purposes?
[QUOTE=Zenreon117;44642327]By lying.[/QUOTE] Could you please expand on this?
[QUOTE=Jookia;44646482]Could you please expand on this?[/QUOTE]

If he were asked "what do you experience?", and the fact is that he does not experience anything but rather is a p-zombie, then whatever came out of his mouth would be false, and thus, if taken to be true, would be a lie.

[editline]25th April 2014[/editline]

[QUOTE=HumanAbyss;44646218]If it is not the key "youness" that you experience, what does it do? Why should we believe in it if it does nothing and is invisible? Isn't that just nonexistent for all intents and purposes?[/QUOTE]

I take it that a soul is descriptive of the person you are, not prescriptive. The soul would be your ledger of rules from the Chinese room.
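As a deliberately crude sketch of that reading, assume the ledger is nothing more than a lookup table from inputs to replies (the entries here are invented):

[code]
# Toy Chinese-room ledger: the operator copies out replies with no
# understanding of either the question or the answer.

ledger = {
    "what do you experience?": "I experience colours, sounds and feelings.",
    "are you in pain?": "No, I feel fine.",
}

def room(question):
    # Match the question against the ledger and copy out the reply.
    return ledger.get(question, "I do not understand the question.")

print(room("what do you experience?"))
[/code]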
[QUOTE=Zenreon117;44647236]If he were asked "what do you experience?", and [B]the fact is that he does not experience anything[/B] but rather is a p-zombie, then whatever came out of his mouth would be false, and thus, if taken to be true, would be a lie.[/QUOTE] You're assuming this.