Is free will possible, or are we always affected by some level of determinism?
354 replies
[QUOTE=HumanAbyss;43801071]I think ethics is required.
I don't think the requirement of ethics for societal success defines free will as possible, likely, or necessary when we bend the definition of free will like you have.[/QUOTE]
What, in your opinion, is the original and "correct" definition of free will?
If it is having some magical will above your brain that overrides things and makes decisions, then how exactly would it function and make decisions? Why must we further complicate the issue of making decisions by adding another entity?
That's why I don't believe in a soul or the classical definition of free will: they're entirely impossible.
And I don't have the desire to describe my own autonomy in a manner that I don't believe is accurate (your definition of free will), so I don't end up actually needing "free will".
We bent the definition of planets, and that kicked Pluto off the list. Does that mean we shouldn't have? Definitions are created by us to assign useful understanding to concepts. The use of saying that an entity called a person has free will is that we can hold them responsible for their actions, but also treat them as an end in themselves, because their system is a critical decision-maker within existence.
God damn.
If you guys want to have a philosophical debate on the validity of anything, you could at least try to use proper English punctuation. For someone who is trying to base his entire argument off language, Zenreon117, I can't understand a single thing you are saying. It's not because I'm not trying, or because your words are complex, it's because the manner in which you present your ideas is so hamstrung by your terrible use of grammar, syntax, and basic understanding of what you are writing, that your ideas suffer.
I spent more time trying to interpret everybody's terrible phrasing in this thread than I spent on comprehending the underlying arguments.
I beg everyone, not just Zenreon, because it isn't just him, to actually think about what it is you are trying to say, and then say it in a manner that makes sense. Use proper phrasing, and recognize the fact that big words and intricate phrasing do not mean that your thoughts are more valid than others. It's like a giant linguistic mystery in here, and anyone who tries to participate in the conversation has to unravel the unnecessarily difficult language in order to get anything out of it.
[QUOTE=HumanAbyss;43801115]That's why I don't believe in a soul or the classical definition of free will: they're entirely impossible.
And I don't have the desire to describe my own autonomy in a manner that I don't believe is accurate (your definition of free will), so I don't end up actually needing "free will".[/QUOTE]
Isn't intelligent autonomy synonymous with free will? You are a system that autonomously makes 'decisions'.
Definitions change; I'm not arguing that.
I just don't see why you would apply something that isn't in the same ballpark.
I don't see why we need to discuss free will to discuss personal responsibility.
I don't believe I have free will. I believe I have personal responsibility.
I don't see a need to complicate it.
[QUOTE=wooletang;43801139]God damn.
If you guys want to have a philosophical debate on the validity of anything, you could at least try to use proper English punctuation. For someone who is trying to base his entire argument off language, Zenreon117, I can't understand a single thing you are saying. It's not because I'm not trying, or because your words are complex, it's because the manner in which you present your ideas is so hamstrung by your terrible use of grammar, syntax, and basic understanding of what you are writing, that your ideas suffer.
I spent more time trying to interpret everybody's terrible phrasing in this thread than I spent on comprehending the underlying arguments.
I beg everyone, not just Zenreon, because it isn't just him, to actually think about what it is you are trying to say, and then say it in a manner that makes sense. Use proper phrasing, and recognize the fact that big words and intricate phrasing do not mean that your thoughts are more valid than others. It's like a giant linguistic mystery in here, and anyone who tries to participate in the conversation has to unravel the unnecessarily difficult language in order to get anything out of it.[/QUOTE]
Yeah, in order to argue free will you need to know words like determinism, variance, entity, autonomy, decision, and person. I don't really know how my grammar is so atrocious you can't understand it.
If you are confused about what is being spoken of, please ask and I will try to 'use proper grammar'.
[editline]5th February 2014[/editline]
[QUOTE=HumanAbyss;43801160]Definitions change; I'm not arguing that.
I just don't see why you would apply something that isn't in the same ballpark.
I don't see why we need to discuss free will to discuss personal responsibility.
I don't believe I have free will. I believe I have personal responsibility.
I don't see a need to complicate it.[/QUOTE]
Something which is responsible needs to be considered to have been able to do otherwise; without that, praise and blame are meaningless. You can stub your toe on a rock, but you can't blame the rock.
[editline]5th February 2014[/editline]
Can you address my question about the synonymy of 'autonomy' and 'free will'?
[editline]5th February 2014[/editline]
A robot has autonomy, but due to its lack of relative complexity, its free will is very limited. Similarly, its autonomy is limited to its programming.
Zenreon117 is right in the idea that as a whole person, we have the ability to make decisions that are in a sense "free," but he doesn't seem to understand that while every decision is "free," it is not without a repercussion.
That is the crux of the argument. While will itself is "free" in the sense that we, as humans, do not have a predetermined outcome to our lives, the way we conduct ourselves is determined by a need/want system that can be analyzed to a certain degree.
It is possible to determine the choices of a person, given the context of their life, and the ultimate goal they have for themselves, but that does not mean that they have no free will. It simply means that they feel a responsibility (as HumanAbyss said) to make certain choices within certain situations. Those choices change for each individual, and what may seem logical to one person is entirely illogical for another. That is where the disconnect between individuals arises, and it's where this argument is being fought. It's being fought in the gap between our own comprehensions of the world around us, and our inherent moral compass that propels our thoughts and pushes us to make certain decisions over others.
[QUOTE=Zenreon117;43801194]Yeah, in order to argue free will you need to know words like determinism, variance, entity, autonomy, decision, and person. I don't really know how my grammar is so atrocious you can't understand it.
If you are confused about what is being spoken of, please ask and I will try to 'use proper grammar'.
[editline]5th February 2014[/editline]
Something which is responsible needs to be considered to have been able to do otherwise; without that, praise and blame are meaningless. You can stub your toe on a rock, but you can't blame the rock.
[editline]5th February 2014[/editline]
Can you address my question about the synonymy of 'autonomy' and 'free will'?
[editline]5th February 2014[/editline]
A robot has autonomy, but due to its lack of relative complexity, its free will is very limited. Similarly, its autonomy is limited to its programming.[/QUOTE]
So to you, a machine can never have your definition of free will? Regardless of the fact that at every level it's a really, really, REALLY similar process of physical interactions.
We don't need to define free will that way to have personal responsibility in one's own life.
Here's an interesting paper: [url]http://www.nature.com/neuro/journal/v11/n5/abs/nn.2112.html[/url]
[QUOTE=Abstract]There has been a long controversy as to whether subjectively 'free' decisions are determined by brain activity ahead of time. We found that the outcome of a decision can be encoded in brain activity of prefrontal and parietal cortex up to 10 s before it enters awareness. This delay presumably reflects the operation of a network of high-level control areas that begin to prepare an upcoming decision long before it enters awareness.[/QUOTE]
[QUOTE=HumanAbyss;43801437]So to you, a machine can never have your definition of free will? Regardless of the fact that at every level it's a really, really, REALLY similar process of physical interactions.
We don't need to define free will that way to have personal responsibility in one's own life.[/QUOTE]
A machine can have free will within its limited range of available actions. This free will of the machine is, however, very limited. Similarly, whatever the machine does would be causally attributed to its creator. Atom bombs have little to no free will. The blame was placed mostly on the creators, mostly by the creators themselves. ("Now I am become Death, the destroyer of worlds", etc.)
[editline]5th February 2014[/editline]
[QUOTE=Ziks;43801468]Here's an interesting paper: [url]http://www.nature.com/neuro/journal/v11/n5/abs/nn.2112.html[/url][/QUOTE]
Aye, but if that output can be variable, then we have free will in the autonomous sense.
A machine can have free will limited to its range of available actions.
Oddly enough, this is the exact definition we have agreed on for human "free will".
Atom bombs are solid, non-autonomous objects. They don't really count at all.
When I say autonomous, it may be a vague term.
An 'autonomous' robot vehicle built in garrysmod is referred to as autonomous, but its autonomy is extremely limited. Variability increases the autonomy of a being incredibly, to the point where you can safely assign blame for 'not doing right' when one could have done otherwise.
[QUOTE=Zenreon117;43801484]Aye, but if that output can be variable, then we have free will in the autonomous sense.[/QUOTE]
That's fine, I can agree with that. It's interesting in the sense that the majority of our decisions appear to be made subconsciously long before we are consciously aware of them, whereas we intuitively feel that our free will stems from consciousness itself. I can appreciate that when taking the whole system as one unit we have free will (by your definition).
A machine has no free will because a machine has no choice but to do exactly as it has been designed to do. If it's designed improperly, or there is a problem with it, and as a result it fails to do what it was supposed to, it hasn't made a choice, because it had no choice to make, and no chance to decide for itself.
A human has free will because it has the ability to see the different paths laid out before it and choose which one it will take. Just because it chooses a certain path time and time again does not mean that it was predestined to do so; it simply means it felt that taking that specific route was the best choice within the parameters provided. Whether they took the route that benefited them or benefited other parties is entirely dependent upon what type of person they are, and the things that have happened in the course of their lifetime to bring them to that decision.
[QUOTE=wooletang;43801573]A machine has no free will because a machine has no choice but to do exactly as it has been designed to do. If it's designed improperly, or there is a problem with it, and as a result it fails to do what it was supposed to, it hasn't made a choice, because it had no choice to make, and no chance to decide for itself.
A human has free will because it has the ability to see the different paths laid out before it and choose which one it will take. Just because it chooses a certain path time and time again does not mean that it was predestined to do so; it simply means it felt that taking that specific route was the best choice within the parameters provided. Whether they took the route that benefited them or benefited other parties is entirely dependent upon what type of person they are, and the things that have happened in the course of their lifetime to bring them to that decision.[/QUOTE]
This in a way returns back to my definition of someone's identity.
Personal identity to me is the conglomeration of one's theoretically possible actions, in some cases variable, that collectively constitute someone's will. To judge someone is to judge their will.
[QUOTE=Zenreon117;43801535]When I say autonomous, it may be a vague term.
An 'autonomous' robot vehicle built in garrysmod is referred to as autonomous, but it's autonomy is extremely limited. Variability increases the autonomy of a being incredibly, to the point where you can safely assign blame for 'not doing right' when one could have done otherwise.[/QUOTE]
And I think we're just more complicated automatons.
[editline]5th February 2014[/editline]
[QUOTE=wooletang;43801573]A machine has no free will because a machine has no choice but to do exactly as it has been designed to do. If it's designed improperly, or there is a problem with it, and as a result it fails to do what it was supposed to, it hasn't made a choice, because it had no choice to make, and no chance to decide for itself.
A human has free will because it has the ability to see the different paths laid out before it and choose which one it will take. Just because it chooses a certain path time and time again does not mean that it was predestined to do so; it simply means it felt that taking that specific route was the best choice within the parameters provided. Whether they took the route that benefited them or benefited other parties is entirely dependent upon what type of person they are, and the things that have happened in the course of their lifetime to bring them to that decision.[/QUOTE]
What if seeing "different paths" were merely an aspect of your high-level software building a predictive model of the world in a chemically determined manner, making your "choices" just an illusion of your perspective?
With what we currently know about how the brain works, this is what we can deduce.
[QUOTE=HumanAbyss;43801609]and I think we're just more complicated automatons.[/QUOTE]
With a degree of variability that may arise either from the complexity of the system, or from variability in the purest sense of the word: that there exist some random functions in the universe which can be expressed in a complex system. Those functions certainly wouldn't be able to give a rock variability of choice, nor a hardwired automaton.
[QUOTE=Zenreon117;43801535]When I say autonomous, it may be a vague term.
An 'autonomous' robot vehicle built in garrysmod is referred to as autonomous, but it's autonomy is extremely limited. Variability increases the autonomy of a being incredibly, to the point where you can safely assign blame for 'not doing right' when one could have done otherwise.[/QUOTE]
I feel the reason why we can assign blame to a human that makes a mistake, but not to a robot, is because a human can use the experience of receiving blame to learn not to make the mistake again. Machine learning is still pretty rudimentary at the moment, but if you designed a robot that could interpret natural language to the extent of being able to detect when it was being blamed, and (assuming you were using an artificial neural network) used that to back-propagate an error gradient related to the severity of the blame, your robot would learn from being blamed for its mistakes.
[editline]5th February 2014[/editline]
And could then be held accountable for repeated mistakes.
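A minimal sketch of that idea (purely illustrative: `BlameLearner`, its methods, and the numbers are all invented here, and a real system would need actual language understanding rather than a hand-delivered blame signal). A tiny logistic "agent" picks one of two actions; each time it is blamed, the blame severity scales a gradient step that makes the blamed action less likely in that situation:

```python
import numpy as np

rng = np.random.default_rng(0)

class BlameLearner:
    """Toy agent: a single-layer logistic 'network' that picks one of two
    actions for a situation and treats blame as an error signal."""

    def __init__(self, n_features, lr=0.5):
        self.w = rng.normal(scale=0.1, size=n_features)
        self.lr = lr

    def act(self, situation):
        # Probability of choosing action 1; pick it when p > 0.5.
        p = 1.0 / (1.0 + np.exp(-situation @ self.w))
        return int(p > 0.5), p

    def receive_blame(self, situation, action, severity):
        # Being blamed means the *other* action was the right one, so use
        # it as the target and take a gradient step scaled by severity.
        _, p = self.act(situation)
        target = 1 - action
        grad = (p - target) * situation  # logistic-loss gradient w.r.t. w
        self.w -= self.lr * severity * grad

agent = BlameLearner(n_features=3)
situation = np.array([1.0, 0.5, -0.2])
first_action, _ = agent.act(situation)

# Blame the agent every time it repeats its first (assumed wrong) action.
for _ in range(20):
    action, _ = agent.act(situation)
    if action == first_action:
        agent.receive_blame(situation, action, severity=1.0)

final_action, _ = agent.act(situation)  # the agent has learned to switch
```

Once the blamed action's probability drops below the threshold, the agent stops repeating it and the blame stops arriving, which is roughly the accountability loop described above.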
[QUOTE=Ziks;43801653]I feel the reason why we can assign blame to a human that makes a mistake, but not to a robot, is because a human can use the experience of receiving blame to learn not to make the mistake again. Machine learning is still pretty rudimentary at the moment, but if you designed a robot that could interpret natural language to the extent of being able to detect when it was being blamed, and (assuming you were using an artificial neural network) used that to back-propagate an error gradient related to the severity of the blame, your robot would learn from being blamed for its mistakes.
[editline]5th February 2014[/editline]
And could then be held accountable for repeated mistakes.[/QUOTE]
Yeah, but that is just to say learning is the mechanism by which more variability is introduced.
Iirc I did say that if a machine was ENTIRELY functionally equivalent to a human, in that it produces consciousness, and it holds the same metaphysical 'will' or 'personhood' I described earlier; then yes.
The thing is that learning seems to be very closely tied to conscious processes and self-reflection. A dog doesn't learn so much as it trains its instincts. It can't hold concepts in its head and refer to them in a meta way.
[QUOTE=HumanAbyss;43801609]And I think we're just more complicated automatons.
[editline]5th February 2014[/editline]
What if seeing "different paths" were merely an aspect of your high-level software building a predictive model of the world in a chemically determined manner, making your "choices" just an illusion of your perspective?
With what we currently know about how the brain works, this is what we can deduce.[/QUOTE]
That is all it is. Accepting that is obvious. That doesn't change the fact that you were able to make those predictions and output a response. The course of your life is entirely subjective, but that doesn't mean that the actions you take don't impact another person.
You have a "free will" in the sense that your brain is able to predict and evaluate different scenarios, then place a value upon each of those pathways. The pathway you take is up to you, in the sense that your chemical reactions push you to choose one of these paths over all the others.
Just because you look at this question in a fatalistic "chemicals do it, I have no control" manner, doesn't change the fact that you are alive, the paths you choose matter, and other people are impacted by your actions or inactions.
I don't know why it's so hard to reconcile knowing free will is an illusion with having personal responsibility.
[QUOTE=HumanAbyss;43801797]I don't know why it's so hard to reconcile knowing free will is an illusion with having personal responsibility.[/QUOTE]
Because saying free will is an illusion is assigning free will exclusively to one part of the system instead of the whole. Free will isn't an illusion so long as it is our system making the decision, not some exterior system.
But I don't think decisions are really being made so much as events are happening, each of which was the only event that could have happened, because it did happen like that.
[QUOTE=Zenreon117;43801705]Yeah, but that is just to say learning is the mechanism by which more variability is introduced.
Iirc I did say that if a machine was ENTIRELY functionally equivalent to a human, in that it produces consciousness, and it holds the same metaphysical 'will' or 'personhood' I described earlier; then yes.[/QUOTE]
I'm not sure about entire functional equivalence being required, but I pretty much agree with you here.
[QUOTE]The thing is that learning seems to be very closely tied to conscious processes and self-reflection. A dog doesn't learn so much as it trains its instincts. It can't hold concepts in its head and refer to them in a meta way.[/QUOTE]
I feel that consciousness and awareness of self / environment is more of a sliding scale than a discrete divide between human and non-human. It's pretty clear that the reason why we are so capable of expanding upon concepts and understanding systems is because of the complexity of natural language. We can conceive of extended lines of reasoning that tie our actions to consequences because we can express them in natural language. Other species such as dogs have a very limited language for interpreting the world (using language in the loose sense as a collection of symbols that have assigned meaning, so a dog's language would consist of symbols such as "what owner looks like", "what the dog food I like looks like", "what squirrels look like"), from which they can't derive useful thoughts but can only reflect on past experiences in a trivial way. They can only learn in the way you describe, tying actions with immediate expected reward or punishment, because they can't formulate the thoughts that connect actions with indirect consequences.
[editline]5th February 2014[/editline]
[QUOTE=Zenreon117;43801866]Because saying free will is an illusion is assigning free will exclusively to one part of the system instead of the whole. Free will isn't an illusion so long as it is our system making the decision, not some exterior system.[/QUOTE]
I can agree with that.