Elon Musk Launches Neuralink to Connect Brains With Computers
[QUOTE=_Axel;52029262]What are the proposed definitions of consciousness from a neurological standpoint? I only know of philosophical definitions and they're more often than not really vague. Maybe a scientific point of view could help ground things a bit more for me. I assume that if a science of consciousness exists it means that there is a consensus about certain aspects of its definition.[/QUOTE]
The thing is that the science of consciousness is just now taking its first wobbly steps. What we currently know about the brain and consciousness is still being compiled. One of the main issues is that neuroscience is a study of the brain, not of the mind itself. It is a hard science that either dismisses or forgets about consciousness as a phenomenon. It is, I'm sorry to say, often written off as something inherently functionalistic, bordering on the epiphenomenal. It is widely accepted that consciousness arises from brain activity; not much to argue about there. The disagreement, however, is about the actual nature and character of consciousness, and what role it plays in a naturally occurring biological system. Consciousness and subjectivity are metaphysical in the sense that we cannot touch, observe, or even reliably measure them. However, we are all equipped with it (unless we're getting all solipsistic up in here) and it is sort of taken for granted. Truth is, we know almost nothing about it other than that it is a product of brain activity. But why, or better yet, [I]how[/I]? There is this thing called the explanatory gap (maybe you already know of it): how something physical can transcend into the metaphysical. It's like discovering and frequently using the radio without ever knowing about radio waves and how they work, but even worse.
My main concern with this thing Musk is doing is that it won't be as simple as just plugging someone into a machine. No single point in the brain is a good enough nexus to be the centre of all our thoughts. Just the act of recalling an episodic memory recruits multiple sensory modalities across the brain. Furthermore, not only are there many different kinds of neurons and an intricate, multilayered web of interconnectivity and feedback loops, but recent reports have also suggested that glial cells might be a bigger player than previously thought.
To be able to copy a mind today, according to current biomedical principles, we would first need to nonintrusively make a full biological copy of the brain in question, then "scan" that copy into a digitised system. With current techniques (fNIRS, MEG, EEG, PET, DTI, CAT, fMRI, etc.) we aren't even fully confident in interpreting the data we collect. A recently published article by Eklund, Nichols, & Knutsson (2016) uprooted a heap of methodological issues in published fMRI research, including buggy software. If we can't even directly record a neural network being activated with good enough temporal and spatial resolution, how on earth will we be able to download a mind?
Sorry for the rant. Had to stop myself from going on for too long. I love Musk, but I'm [I]extremely[/I] wary of jumping onto this hypetrain.
[QUOTE=Kazumi;52029973]The thing is that the science of consciousness is just now taking its first wobbly steps. What we currently know about the brain and consciousness is still being compiled. One of the main issues is that neuroscience is a study of the brain, not of the mind itself. It is a hard science that either dismisses or forgets about consciousness as a phenomenon. It is, I'm sorry to say, often written off as something inherently functionalistic, bordering on the epiphenomenal. It is widely accepted that consciousness arises from brain activity; not much to argue about there. The disagreement, however, is about the actual nature and character of consciousness, and what role it plays in a naturally occurring biological system. Consciousness and subjectivity are metaphysical in the sense that we cannot touch, observe, or even reliably measure them. However, we are all equipped with it (unless we're getting all solipsistic up in here) and it is sort of taken for granted. Truth is, we know almost nothing about it other than that it is a product of brain activity. But why, or better yet, how? There is this thing called the explanatory gap (maybe you already know of it): how something physical can transcend into the metaphysical. It's like discovering and frequently using the radio without ever knowing about radio waves and how they work, but even worse.[/QUOTE]
I'm still not convinced. Basically, what you're saying is that it is known to arise from brain activity, but nothing about its nature is known other than that it arises from brain activity. That's just tautological. If we claim to know of its existence, that implies we've seen some form of evidence of it existing, and that evidence would yield some form of information about its nature, at least in terms of its influence on the measurable world.
"Wide acceptance" isn't really convincing to me. Plenty of things were widely accepted until someone actually questioned them.
Yeah unless we can at some point quantify consciousness I wouldn't be up for "uploading" it into a computer, but at the same time I wouldn't mind using my brain as an input device for a computer or some other machine, which would be probably easier and safer to accomplish without having to go all metaphysical about it.
[editline]30th March 2017[/editline]
Not saying that's easy either, but hey, the sooner we start, the sooner we'll get there.
[QUOTE=Kazumi;52029973]
Sorry for the rant. Had to stop myself from going on for too long. I love Musk, but I'm [I]extremely[/I] wary of jumping onto this hypetrain.[/QUOTE]
No dude this is great, rant some more. Love hearing stuff like this, and I'm sure I'm not alone.
[QUOTE=_Axel;52031113]I'm still not convinced. Basically, what you're saying is that it is known to arise from brain activity, but nothing about its nature is known other than that it arises from brain activity. That's just tautological. If we claim to know of its existence, that implies we've seen some form of evidence of it existing, and that evidence would yield some form of information about its nature, at least in terms of its influence on the measurable world.
"Wide acceptance" isn't really convincing to me. Plenty of things were widely accepted until someone actually questioned them.[/QUOTE]
what are you arguing here?
[QUOTE=Ninja Gnome;52031280]what are you arguing here?[/QUOTE]
That I don't see why something we can't even define the nature of is of any importance. Plenty of people are looking for "the source of consciousness" or making arguments about, for instance, "brain uploading" not being viable because it doesn't "transfer consciousness", when nobody can even explain what consciousness is supposed to be. It seems to me we're skipping a step here. As far as I'm concerned, we may be arguing about something that doesn't even exist.
[QUOTE=_Axel;52031403]That I don't see why something we can't even define the nature of is of any importance. Plenty of people are looking for "the source of consciousness" or making arguments about, for instance, "brain uploading" not being viable because it doesn't "transfer consciousness", when nobody can even explain what consciousness is supposed to be. It seems to me we're skipping a step here.[/QUOTE]
I would say consciousness is pretty important because it is quite literally the base of our subjective frame of existence. the viability of projects like brain uploading or enhancement via computers hinges on the nature of consciousness, and jumping right into those technologies without knowing more about it is going in blind. i'd say that precisely because we know so little about how and why we think, feel, and experience the world the way we do is why asking the consciousness question about these technologies is so important. we can't even begin to accurately consider unintended consequences if we don't have a solid understanding of what we're working with.
take your brain uploading example. say i could upload an exact replica of my self, memories, personality, and all, onto a computer. it is able to interact with the world as I am and react to things with the same emotions i would. is it conscious? would it be ethical to shut it down and wipe it from whatever device it was running on? how would it react when divorced from the complex systems of glands and hormones that currently affect our conscious experience? would these systems need to be replicated in order to preserve emotional balance? what about expanding "processing power", for lack of a better term? what are the possible effects on mental health? can these functions even be expanded for conscious functioning? could the use of such enhancements interrupt the conscious flow in some way, creating a philosophical zombie? most of these questions can only be answered through further research into consciousness. i would say jumping right into attempting such technologies before better understanding consciousness is skipping a far greater step than bringing up concerns of consciousness without understanding it.
why it seems that certain atoms arranged in certain forms with certain electrical signals running through them can generate an awareness of themselves and the contemplation of abstract, nonphysical constructs is fascinating to me, and better understanding this seems critical to better understanding both ourselves and the nature of existence.
[QUOTE=EcksDee;52031268]No dude this is great, rant some more. Love hearing stuff like this, and I'm sure I'm not alone.[/QUOTE]
Thanks man. Actually takes quite a lot of energy to post these things.
[QUOTE=_Axel;52031113]I'm still not convinced. Basically, what you're saying is that it is known to arise from brain activity, but nothing about its nature is known other than that it arises from brain activity. That's just tautological. If we claim to know of its existence, that implies we've seen some form of evidence of it existing, and that evidence would yield some form of information about its nature, at least in terms of its influence on the measurable world.
"Wide acceptance" isn't really convincing to me. Plenty of things were widely accepted until someone actually questioned them.[/QUOTE]
It is exactly the tautology you describe. We haven't really got any further than that. We know of consciousness' existence because, like I previously mentioned, we are all conscious beings. We can only indirectly observe consciousness through the effects of epilepsy, brain plasticity, trauma, etc. (solipsists and many dualists would disagree). For example, acquired [url=https://en.wikipedia.org/wiki/Achromatopsia]achromatopsia[/url] (thalamic damage and/or damage to visual cortex V4) completely removes colours from a person's conscious experience and turns everything into grey-scale. They are even unable to remember colours. We know this because of self-reports and psychological tests. Their eyes are completely fine; they just can't experience colour any more. This leads to the problem of subjectivity: the inability to actually observe someone else's experiences. Ultimately, we cannot actually be sure that anyone else is conscious (to the solipsist's delight, curse them!), and it spirals into an even deeper philosophical issue. However, as previously stated, the philosophical disconnect for scientists is that many take phenomenal experience for granted.
At the end of the day, we currently don't know how the physical can produce the metaphysical. How the brain can produce something that is directly immeasurable. This is called the explanatory gap for a reason, since we don't know how to actually solve or explain it.
[QUOTE=_Axel;52031403]That I don't see why something we can't even define the nature of is of any importance. Plenty of people are looking for "the source of consciousness" or making arguments about, for instance, "brain uploading" not being viable because it doesn't "transfer consciousness" when nobody can even explain what consciousness is supposed to be, seems to me we're skipping a step here. As far as I'm concerned we may be arguing about something that doesn't even exist.[/QUOTE]
The problem is, much like Ninja Gnome is saying, that we're going in blind. We're fumbling in the dark without any arms or legs at the moment in a space where we actually don't know what the darkness even is.
[url=https://www.dropbox.com/s/e93b38c4bgl9inl/Revonsuo%2C%20A.%20%282000%29.pdf?dl=0]I have an excellent article on the disconnect between consciousness philosophy and neuroscience here. [/url] Uploaded for your leisure.
I mean I can understand the idea behind this in concept but we still don't really understand much about the brain at all.
I really think that neuroscience needs to make several great strides before something like this is actually possible.
[QUOTE=Kazumi;52031880]
It is exactly the tautology you describe. We haven't really got any further than that. We know of consciousness' existence because, like I previously mentioned, we are all conscious beings. We can only indirectly observe consciousness through the effects of epilepsy, brain plasticity, trauma, etc. (solipsists and many dualists would disagree). For example, acquired [url=https://en.wikipedia.org/wiki/Achromatopsia]achromatopsia[/url] (thalamic damage and/or damage to visual cortex V4) completely removes colours from a person's conscious experience and turns everything into grey-scale. They are even unable to remember colours. We know this because of self-reports and psychological tests. Their eyes are completely fine; they just can't experience colour any more. This leads to the problem of subjectivity: the inability to actually observe someone else's experiences. Ultimately, we cannot actually be sure that anyone else is conscious (to the solipsist's delight, curse them!), and it spirals into an even deeper philosophical issue. However, as previously stated, the philosophical disconnect for scientists is that many take phenomenal experience for granted.
[/QUOTE]
There's a great thought experiment I remember based on the gap.
Imagine an alien that doesn't feel pain, now try and explain pain to them.
You can say that pain is caused by damage or stimuli to certain nerve endings, you can say it's an evolutionary adaptation for better survival chances, you can literally list the exact molecules, one by one, that cause pain in us, but they won't actually [B]feel[/B] it from your description alone, no matter how detailed or thorough you are.
[QUOTE=Kazumi;52031880]Thanks man. Actually takes quite a lot of energy to post these things.
It is exactly the tautology you describe. We haven't really got any further than that. We know of consciousness' existence because, like I previously mentioned, we are all conscious beings. We can only indirectly observe consciousness through the effects of epilepsy, brain plasticity, trauma, etc. (solipsists and many dualists would disagree). For example, acquired [url=https://en.wikipedia.org/wiki/Achromatopsia]achromatopsia[/url] (thalamic damage and/or damage to visual cortex V4) completely removes colours from a person's conscious experience and turns everything into grey-scale. They are even unable to remember colours. We know this because of self-reports and psychological tests. Their eyes are completely fine; they just can't experience colour any more. This leads to the problem of subjectivity: the inability to actually observe someone else's experiences. Ultimately, we cannot actually be sure that anyone else is conscious (to the solipsist's delight, curse them!), and it spirals into an even deeper philosophical issue. However, as previously stated, the philosophical disconnect for scientists is that many take phenomenal experience for granted.
At the end of the day, we currently don't know how the physical can produce the metaphysical. How the brain can produce something that is directly immeasurable. This is called the explanatory gap for a reason, since we don't know how to actually solve or explain it.
The problem is, much like Ninja Gnome is saying, that we're going in blind. We're fumbling in the dark without any arms or legs at the moment in a space where we actually don't know what the darkness even is.
[url=https://www.dropbox.com/s/e93b38c4bgl9inl/Revonsuo%2C%20A.%20%282000%29.pdf?dl=0]I have an excellent article on the disconnect between consciousness philosophy and neuroscience here. [/url] Uploaded for your leisure.[/QUOTE]
Thanks for the response! (And the article you shared, I'll look at it when I'm back home) I still have a problem with your explanation of our knowledge of consciousness though. Saying that we know consciousness exists because we are conscious beings is still tautological, and a circular definition.
Achromatopsia is an interesting phenomenon I hadn't heard of before, but I'm unsure how it relates to the concept of consciousness. It is a quantifiable phenomenon (the patient behaves differently than he would if he still saw colour) which is due to a measurable modification of the brain (thalamic damage or damage to the visual cortex, it seems). How is that any more telling of the existence of consciousness than a digital camera which outputs grey-scale images because of a software issue, without any modification to its receptors?
[editline]30th March 2017[/editline]
[QUOTE=EcksDee;52031921]There's a great thought experiment I remember based on the gap.
Imagine an alien that doesn't feel pain, now try and explain pain to them.
You can say that pain is caused by damage or stimuli to certain nerve endings, you can say it's an evolutionary adaptation for better survival chances, you can literally list the exact molecules, one by one, that cause pain in us, but they won't actually [B]feel[/B] it from your description alone, no matter how detailed or thorough you are.[/QUOTE]
That goes for explaining it to another human, too. Just like his "green" may not feel like your "green", the same goes for anybody's experience of pain. In the end, what makes pain "pain" as a concept all humans share is their reaction to it; thus, describing that reaction to an alien would be a good definition of pain's nature.
[QUOTE=_Axel;52031944]Thanks for the response! (And the article you shared, I'll look at it when I'm back home) I still have a problem with your explanation of our knowledge of consciousness though. Saying that we know consciousness exists because we are conscious beings is still tautological, and a circular definition.
Achromatopsia is an interesting phenomenon I hadn't heard of before, but I'm unsure how it relates to the concept of consciousness. It is a quantifiable phenomenon (the patient behaves differently than he would if he still saw colour) which is due to a measurable modification of the brain (thalamic damage or damage to the visual cortex, it seems). How is that any more telling of the existence of consciousness than a digital camera which outputs grey-scale images because of a software issue, without any modification to its receptors?
[editline]30th March 2017[/editline]
That goes for explaining it to another human, too. Just like his "green" may not feel like your "green", the same goes for anybody's experience of pain. In the end, what makes pain "pain" as a concept all humans share is their reaction to it; thus, describing that reaction to an alien would be a good definition of pain's nature.[/QUOTE]
To respond to the bit about the camera: theoretically, we don't know that the camera is not a conscious entity. Technically speaking, though, the camera is vastly different from the brain, not only structurally but also in its lack of agency and so on. A camera is constructed for a very specific purpose, and its array of photo-sensors only registers and outputs the light that strikes it. Is the camera aware of what it is doing? Flipping the same argument onto consciousness studies and many early neuroscientific assumptions leads us to the functionalistic (and the earlier behaviouristic) approach. Before moving on: for this argument it would be pointless to fall into the parapsychological pit of believing everything is conscious, so let's try to stick to biological or equivalent systems. The mammalian brain is currently the most complex entity that we know of, while a camera is invented, designed, and produced by humans. The difference in complexity between the two is indescribably large.
Right, so according to many functionalists, the thing that matters most is this.
[img]https://i.imgur.com/1lFmMOS.png[/img]
This is the "black box": a relatively simple system that is fed input and yields a corresponding output. Early psychology was only interested in the input-output relation. Functionalism, however, has lifted the hood of the box and looks at the neural systems within and how they produce this output. It often completely disregards the subjective experience of the process, though. We are not mindless automatons or zombies. Consciousness thus constitutes a second, (currently) unobservable output which can itself feed back into the input channel (unless you talk to the epiphenomenalist, or the fatalist who completely dismisses the idea of free will, but that is a discussion for another time).
Thought, or reflective consciousness (thinking about thinking), is a form of output produced by our brain inside this black box, but it is not measurable to any meaningful degree at the moment. Thomas Nagel, whom I've had the pleasure of meeting, wrote in his prominent "What is it like to be a bat?" article, published in the 70s, that there is this intrinsic thing of being me, or you, or EcksDee or Ninja Gnome. Only I understand what it is like to be me. I could try to imagine what it's like to be a bat, with echolocation and wings and such, but it would be a futile endeavour. Nagel famously stated: "[I]An organism has conscious mental states if and only if there is something that it is like to be that organism - something that it is like for the organism to be itself.[/I]" Thus, since there is something it is like for me to be myself, I conclude that I am conscious. Some would say that this is the same statement as Descartes' Cogito Ergo Sum (I think, therefore I am), but Descartes was a dualist and applied it as such, arguing that the mind existed separately from the body as the soul - something very few modern consciousness philosophers would agree with today.
On that side note about pain: pain can have a physical/observable/behavioural reaction, yes. But phantom pains can be very real even if their spatial location exists only in the head of the sufferer. Sure, sometimes the originating signals can be traced to nerve scarring/neuromas near the site of the amputation. But this doesn't change the fact that the sensation of pain takes place somewhere in spatial relation to the patient, yet at a non-physical place. I remember something Michael Tye wrote, can't remember where, saying that a pain cannot exist without an owner. It has to belong to a person, and these subjective ownerships are the only things we have that we cannot trade with others.
Sorry for my slow rate of responding. As I mentioned earlier, it's really exhausting to write up these responses and it takes me hours to do so.
[editline]30th March 2017[/editline]
Also, sorry if I'm missing any of your points. I'm trying to stay as concise and focused on the topic as I can, while at the same time trying to cover everything that is being discussed.
[QUOTE=Kazumi;52032460]To respond to the bit about the camera: theoretically, we don't know that the camera is not a conscious entity. Technically speaking, though, the camera is vastly different from the brain, not only structurally but also in its lack of agency and so on. A camera is constructed for a very specific purpose, and its array of photo-sensors only registers and outputs the light that strikes it. Is the camera aware of what it is doing? Flipping the same argument onto consciousness studies and many early neuroscientific assumptions leads us to the functionalistic (and the earlier behaviouristic) approach. Before moving on: for this argument it would be pointless to fall into the parapsychological pit of believing everything is conscious, so let's try to stick to biological or equivalent systems. The mammalian brain is currently the most complex entity that we know of, while a camera is invented, designed, and produced by humans. The difference in complexity between the two is indescribably large.
Right, so according to many functionalists, the thing that matters most is this.
[img]https://i.imgur.com/1lFmMOS.png[/img]
This is the "black box": a relatively simple system that is fed input and yields a corresponding output. Early psychology was only interested in the input-output relation. Functionalism, however, has lifted the hood of the box and looks at the neural systems within and how they produce this output. It often completely disregards the subjective experience of the process, though. We are not mindless automatons or zombies. Consciousness thus constitutes a second, (currently) unobservable output which can itself feed back into the input channel (unless you talk to the epiphenomenalist, or the fatalist who completely dismisses the idea of free will, but that is a discussion for another time).[/QUOTE]
My knowledge of automatic systems is a bit rusty, but that isn't exclusive to living beings at all. Most electronic systems do exactly that: they take their own output (the main one or a derived one) and feed it back into the system as an input signal. That's the basic principle behind servomotors.
Does recursion need to be complex enough to not be measurable using current techniques for it to be considered consciousness? Does that mean advances in neurological sciences could eventually lead to the disappearance of consciousness? I'm not really sure where the distinction lies.
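Here's a toy sketch of that feedback principle in Python (purely illustrative; the function name and gain value are made up, not from any real servo library). The system's own output is fed back and compared against the input, and the error drives the next correction:

```python
# Toy proportional feedback loop: the measured output (position) is fed
# back and compared against the input (target); the difference (error)
# drives the next correction. Illustrative only, not a real servo interface.

def servo_step(position, target, gain=0.5):
    """One control cycle: feed the output back, compute the error,
    and move a fraction of the way toward the target."""
    error = target - position       # output re-enters the system as feedback
    return position + gain * error


position = 0.0
for _ in range(20):                 # each cycle shrinks the error by (1 - gain)
    position = servo_step(position, target=90.0)

print(position)                     # converges toward the 90-degree target
```

The point being that output-as-input is trivial to build; the open question is what level of recursive complexity, if any, would count as consciousness.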
[QUOTE]Thought, or reflective consciousness (thinking about thinking), is a form of output produced by our brain inside this black box, but it is not measurable to any meaningful degree at the moment. Thomas Nagel, whom I've had the pleasure of meeting, wrote in his prominent "What is it like to be a bat?" article, published in the 70s, that there is this intrinsic thing of being me, or you, or EcksDee or Ninja Gnome. Only I understand what it is like to be me. I could try to imagine what it's like to be a bat, with echolocation and wings and such, but it would be a futile endeavour. Nagel famously stated: "[I]An organism has conscious mental states if and only if there is something that it is like to be that organism - something that it is like for the organism to be itself.[/I]" Thus, since there is something it is like for me to be myself, I conclude that I am conscious. Some would say that this is the same statement as Descartes' Cogito Ergo Sum (I think, therefore I am), but Descartes was a dualist and applied it as such, arguing that the mind existed separately from the body as the soul - something very few modern consciousness philosophers would agree with today.[/QUOTE]
That definition confuses me greatly. "If there is something that it is like to be it"? Doesn't that apply to every single thing in existence, since everything in existence is? Besides, how do you know that there is something that it is like to be a certain thing? How can you know that there is nothing that it is like to be another thing? That definition strikes me as kind of a dead end.
[QUOTE]Sorry for my slow rate of responding. As I mentioned earlier, it's really exhausting to write up these responses and it takes me hours to do so.
[editline]30th March 2017[/editline]
Also, sorry if I'm missing any of your points. I'm trying to stay as concise and focused on the topic as I can, while at the same time trying to cover everything that is being discussed.[/QUOTE]
Hey, don't bother taking that much time to reply to me if you have much better things to do, I'm already appreciative of you taking that much time to answer my questions.