• Are Substrate-Independent Minds Possible?
    30 replies
[RELEASE] AI researchers have speculated for decades that it might one day be possible to create substrate-independent minds. In 1999 Ray Kurzweil published The Age of Spiritual Machines, in which he predicted that minds would be uploaded to computers in the latter part of the 21st century. The subject of Artificial General Intelligence has garnered increasing attention during the last decade as researchers such as Ben Goertzel, Itamar Arel, and Peter Voss have done groundbreaking work in the field. In an interview with Sander Olson, AI researcher Randal Koene discusses how developments in the field of AI are accelerating, how low-cost DNA sequencing could become critical to AI development, and how the first substrate-independent minds could emerge within the next twenty years.

[B]Randal Koene[/B]

[B]Question: You have just returned from AGI 2011. How did that go?[/B]

AGI 2011 was a complete success. Peter Norvig, who is the head of research at Google, attended this event, and he literally wrote the book on AI. The large number of presenters and the obvious excitement that AGI 2011 generated clearly indicate that interest in this topic is increasing.

[B]Question: Henry Markram's Blue Brain project aims to simulate a human brain within a decade. Do you think these efforts will be successful?[/B]

That depends on how one defines "success". Markram is a first-rate researcher, and he is clearly advancing the field. But when he speaks of modeling the human brain, he may mean trying to generically model cortical columns and brain structures. That is quite different from modeling a specific brain, in which each neural circuit is a meaningful consequence of experience. The latter is what whole brain emulation is about.

[B]Question: But couldn't such a generic brain serve as a tabula rasa, on which a specific personality could be mapped?[/B]

That might be possible.
I don't know if that is Markram's specific plan, but I can imagine that he may be aiming to improve the specificity of his model. That is the type of thing that can be achieved using his method. This field is advancing exponentially, and we are constantly attaining new capabilities that we didn't have before.

[B]Question: Given the staggering complexity of the human brain, wouldn't simulating a fruit fly's brain first make more sense?[/B]

Markram didn't start by trying to simulate an entire human brain - he started with a single cortical column from a rat. He had previously done studies on those types of cortical columns, and so was an authority on the subject. You are probably right that a fine-grained, comprehensive simulation of a fruit fly brain would be more feasible. But that doesn't have the same allure, and human brain simulations are always going to be the ultimate goal.

[B]Question: How fine-grained and accurate would a human brain simulation need to be in order to simulate consciousness? What would be the computing requirements?[/B]

I don't know what the CPU or memory requirements would be to simulate a whole human brain. I am confident that those requirements could be met today at huge cost, or within a decade at a greatly reduced cost. But the complexity goes up enormously depending on whether you want to model spiking neurons, or whether you want to simulate how the membrane potential changes over morphologically correct neurons. If you go down to modeling molecular processes, it gets even more complex. I make it a point not to speculate on specific technologies and approaches, because jumping on any one path before further study may lead to a dead end.

[B]Question: What is the carboncopies foundation?[/B]

The carboncopies foundation was founded by Suzanne Gildert and myself in 2010 and is specifically technology-agnostic. It has four reasons for existing. First, it aims to create and maintain a network of researchers working in the field.
Second, we engage in outreach, to explain the fundamentals of substrate-independent minds. Third, we engage in direct research into the fundamentals of minds. Finally, we are creating a roadmap to achieving substrate-independent minds.

[B]Question: How big is carboncopies? How many members does it have?[/B]

In addition to the founders, who take care of organizational matters, there are 25 or so researchers who are working with us. There are about 200 people who have expressed their interest in the subject by signing up to our Facebook group. Given how rapidly this field is growing, those numbers should increase substantially.

[B]Question: Is the field of brain science expanding exponentially?[/B]

In neuroscience, we are on an exponential curve. For the past 90 years or so the pace of discovery was slow, but new tools are being developed. The process of learning about the brain is becoming automated, and the field of “connectomics” (mapping all the connections in the brain) and the emulation of brain circuitry are really taking off. Entirely new fields, such as neuroinformatics, have been created.

[B]Question: And what is neuroinformatics?[/B]

Neuroinformatics is broader than brain emulation. Brain emulation differs from simulation in that emulation strives for a perfect replica. Neuroinformatics is an outgrowth of computational neuroscience. It deals with vast amounts of brain data, connecting large-scale and high-resolution information. Neuroinformatics is an attempt to take neuroscience to the same place that bioinformatics has taken the genome.

[B]Question: What is your response to AI skeptics who claim that brain processes cannot be emulated by a digital machine?[/B]

They are probably both right and wrong. They are right in that trying to emulate brain processes on a conventional computer would at best be inefficient.
The Blue Brain project uses a supercomputer dissipating tens of kilowatts to simulate biological activity that consumes tens of watts. However, there is no evidence that the activities occurring in the brain are incomputable. Even AI skeptics such as Penrose and Hameroff don't argue that the brain's processes are incomputable.

[B]Question: Is the optimal route to AGI to emulate a human brain, or to adopt a different approach, along the lines of Itamar Arel and Ben Goertzel?[/B]

Both Arel and Goertzel are inspired by cognitive models of the brain, so their approaches cannot be considered completely different. Some researchers, however, adopt approaches that are based not on the brain but on mathematical models. Marcus Hutter and Juergen Schmidhuber often take a mathematical approach that is unrelated to brain research. I don't know which approach is best, but the brain emulation approach is virtually guaranteed to work, given sufficient resources.

[B]Question: How fine-grained a simulation will be necessary to create a truly sentient machine?[/B]

There are many different theories regarding how consciousness emerges and what it is. I personally believe that some degree of consciousness will emerge from generating a model of self within one's environment, senses, and actions. I don't think that understanding molecular-level processes is necessarily a prerequisite for that.

[B]Question: Do you think that an x-prize for a high-level simulation of a human brain could be effective in spurring AGI development?[/B]

That would certainly help. Prizes serve as an effective pull to draw resources and funding, and companies realize that the value of developing the technology is much greater than the actual prize. I could see a number of x-prizes being useful, including a brain-computer interface prize, a prize for recording from the brain, and prizes for emulating smaller and larger brains. So the brain emulation field would definitely benefit from such prizes.
[B]Question: Has there been a writer who has particularly influenced your views on brain emulation?[/B]

When I was 13, I read Arthur C. Clarke's The City and the Stars. In that novel people live multiple 1,000-year lives in turns. After each, they return to the computer, which stores their information, atom by atom. That novel profoundly affected me, because I had already realized that there was no way I had enough time and capability to do everything that I wanted to do. It made me realize that the mind was the crucial element, and inspired me to become a brain researcher.

[B]Question: You also work for Halcyon Molecular. How is that work going?[/B]

The Andregg brothers (founders of Halcyon) share my dream of gaining time and capability, and eventually achieving substrate-independent minds. They are interested in all the subjects that I am interested in, such as AGI, SENS, cryonics, and nanotechnology. The low-cost DNA sequencing that we are perfecting looks very promising, and we are rapidly overcoming the technical hurdles.

[B]Question: How important is low-cost DNA sequencing to brain emulation?[/B]

It is potentially quite important. This capability will allow us to examine DNA in the brain, perhaps even to extract the entire connectome with a new method that uses synthetic DNA tags for connections. So the technologies that Halcyon is developing are directly beneficial to whole-brain emulation.

[B]Question: When will the first "substrate independent mind" emerge?[/B]

Creating such a mind would depend on how many resources were put towards it. We are in an acceleration period for fields such as whole brain emulation and the human connectome. New people are entering the field, and mainstream scientists are now openly discussing brain emulation and emulating functions of the brain in computers. These fields should accelerate even further.
Although I'd like to think that we could succeed within the next 20-40 years, it all depends on the strength of the effort.

[MEDIA]http://www.youtube.com/watch?v=4YsMxpx0VQ8&feature=player_embedded[/MEDIA] [/RELEASE]

Source: [url]http://nextbigfuture.com/2011/08/are-substrate-independent-minds.html[/url]
Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] It's also jolly practical in general.
I would personally prefer if more research went into improving humans directly (i.e. splicing) rather than computers. I mean, humans are already extremely good at this stuff. Why improve what's naturally designed (I don't mean intelligently designed btw) to be adaptable rather than what will take a hell of a lot more effort? I should make it clear that I'm [I]not[/I] saying don't research AI - instead, shift it more towards improving humans.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] Why not devise an AI purpose-built for the task? The idea of 'uploading' yourself to a computer is pure fantasy - the best you could do is create an AI copy of your mind, but your physical body ('you', for all intents and purposes) isn't going anywhere. Not sure I see the point of trying to replicate the human brain specifically using computer software. It can't be cheaper or easier than developing more mundane AI to the same standard.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] But that wouldn't carry your consciousness over. Just create an exact replica. Which'd be still mighty useful for interstellar travel, but still.
Isn't consciousness/self-awareness (you) basically a mixture of all the chemicals in your brain working together? So if you uploaded your mind to a computer, 'you' wouldn't even be making decisions and stuff like that?
[QUOTE=Géza!;31887588]But that wouldn't carry your consciousness over. Just create an exact replica. Which'd be still mighty useful for interstellar travel, but still.[/QUOTE] Would you trust an AI copy of yourself to make the best business decisions in another solar system?
[QUOTE=Rainhorror;31887694]Isn't consciousness/self-awareness (you) basically a mixture of all the chemicals in your brain working together? So if you uploaded your mind to a computer 'you' wouldn't even be making decisions and stuff of that like?[/QUOTE] Yeah that's pretty much why I'd fear things like teleportation.
[QUOTE=JgcxCub;31887529]I would personally prefer if more research went into improving humans directly (i.e. splicing) rather than computers. I mean, humans are already extremely good at this stuff. Why improve what's naturally designed (I don't mean intelligently designed btw) to be adaptable rather than what will take a hell of a lot more effort? I should make it clear that I'm [I]not[/I] saying don't research AI - instead, shift it more towards improving humans.[/QUOTE] I'm fairly certain that not only are there scientists out there with that very goal, but that if we were to try to improve on something as complicated as a brain, we'd at the very least need to completely understand it (something we don't yet). Trying to emulate it could, in the long run, help us understand the brain a lot better, not to mention we could have the best of both worlds: a mix of biological and mechanical enhancement.
[QUOTE=DarkSpider;31887707]Would you trust an AI copy of yourself to make the best business decisions in another solar system?[/QUOTE] If it had a shotgun, absolutely.
A human-like AI would be useful to some degree, but I think the majority of future AI will receive "pleasure" from labour, like garbage collection, like humans receive pleasure from food, sex and other senses.
[QUOTE=Rubs10;31888140]A human-like AI would be useful to some degree, but I think the majority of future AI will receive "pleasure" from labour, like garbage collection, like humans receive pleasure from food, sex and other senses.[/QUOTE] "Ohh yeeaaahhh! This garbage collection is making me so...so, uhh, robots don't do "horny" do we? Uhhh, happy? happy works! It's making me so HAPPY!" Yeah, I can see this working pretty well. I'm all for that.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] Uploading one's mind to a machine has a number of problems, or at least questions that need to be answered. If you're being uploaded to a machine, are you still yourself in the sense that your consciousness doesn't die? If not, what happens when the machine creates two duplicates, or leaves the original intact? Which one of you is the 'real' you? It's such a mess that I could write a book on it and still not feel satisfied. Perhaps we could progressively replace ourselves and not notice the transition? I feel it would be the safest method of transferring the mind. Considering that a complete cycle of 'a brain's rebirth' takes around 7 years, one could say that the original consciousness of the majority of people in the world has been replaced at least once.
[i][quote]The low-cost DNA sequencing that we are perfecting looks very promising, and we are rapidly overcoming the technical hurdles.[/quote][/i] I like the sound of that, and all in all that was quite an interesting read.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] I'd never thought of this. Such a brilliant idea! We could hibernate for millions of years and create copies of the same, most suitable consciousness for multiple missions.
[QUOTE=Canuhearmenow;31886961]Uploading our minds to machines is really the only feasible way to have interstellar travel, at least if FTL is deemed impossible.[/QUOTE] Actually it's not feasible, really. We are not capable of building a ship to stay functional for long enough to get anywhere with the speeds we can achieve, even if it didn't have to support life.
After reading this I thought about how porn suddenly turns disgusting after you finished fapping. Isn't that a counterexample?
[QUOTE=Awesomecaek;31888878]We are not capable of building a ship capable of staying functional for long enough to get anywhere with the speeds we can achieve, even if it didn't have to support life.[/QUOTE] Something tells me that the revolutionary breakthrough in interstellar space travel is going to be some sorta worm-hole shortcut. Or a [b]VERY[/b] efficient source of energy. I could be wrong, but my two cents anyway.
Oh and by the way: a few soviet sci-fi authors predicted mind-transfer in the early 70s
To be honest I don't care about the whole "It's just a copy"/"It wouldn't be you". I could just upload, and leave a note saying the guy under the electrodes doesn't get to wake up. Like I care that the thing in the computer is 'just a copy'.

To begin with, 'you' is not a perfect, static Platonic image of the current flesher guy. 'You' is a dynamic, constantly-changing thing. In an entropic universe there is no such thing as constants. (inb4 physical laws you know what I mean) But, people still assume the person 20 years before was the same person they are now. They keep their names and identities, even if what defines them has changed. Even if the whole of them has changed. Even if the only thing that brings an illusion of continuity is a common thread of memories and the idea of having the same subjective experience.

Action potentials change every time a neuron fires. The atoms in your whole body will probably get wholly renewed after a couple decades, at most. So, where does one draw the line? At a specific number of 'changes per minute'? Below that number you're still the same person, and beyond that, you need to get a new name?

Moreover, where do we draw the line? At the body? At the Central Nervous System? Or simply the brain -- Where do we slice off this brain, at the level of the brainstem? At the level of the mesencephalon or the diencephalon? Why draw the line at the brain when only a few regions are active at a time, and only a few contribute to consciousness?

And if you were to slice up the brain, and explode it until it's large enough to drive a truck through, until you can see every individual neuron and neuroglia, where would you find consciousness? The neurons don't understand. The neurons are nothing, mean nothing. Why draw a line at all? A copy, an original, ten copies, ten perfect copies, a thousand randomly-modified copies, does it even matter?
[QUOTE=Fatman55;31887731]Yeah that's pretty much why I'd fear things like teleportation.[/QUOTE] This is indeed a fearful prospect. And even if we did manage to get teleportation going, we'd never know if it did that, because the replica it creates would be the same as the original in every way. 'You' would be dead, but no one would be able to tell and it wouldn't be provable because the replica would tell other people it went fine.
[QUOTE=Rubs10;31888140]A human-like AI would be useful to some degree, but I think the majority of future AI will receive "pleasure" from labour, like garbage collection, like humans receive pleasure from food, sex and other senses.[/QUOTE] God that sounds fucking cool, perfecting human AI then manipulating it to serve our interests...
Is anyone else fucking happy that we are the ones who are going to spend most of our lives in this technological revolution?
[QUOTE=Megafanx13;31889241]This is indeed a fearful prospect. And even if we did manage to get teleportation going, we'd never know if it did that, because the replica it creates would be the same as the original in every way. 'You' would be dead, but no one would be able to tell and it wouldn't be provable because the replica would tell other people it went fine.[/QUOTE] :ohdear:
Plug me in scotty.
[QUOTE=Eudoxia;31889212]To be honest I don't care about the whole "It's just a copy"/"It wouldn't be you". I could just upload, and leave a note saying the guy under the electrodes doesn't get to wake up. Like I care that the thing in the computer is 'just a copy'.

To begin with, 'you' is not a perfect, static Platonic image of the current flesher guy. 'You' is a dynamic, constantly-changing thing. In an entropic universe there is no such thing as constants. (inb4 physical laws you know what I mean) But, people still assume the person 20 years before was the same person they are now. They keep their names and identities, even if what defines them has changed. Even if the whole of them has changed. Even if the only thing that brings an illusion of continuity is a common thread of memories and the idea of having the same subjective experience.

Action potentials change every time a neuron fires. The atoms in your whole body will probably get wholly renewed after a couple decades, at most. So, where does one draw the line? At a specific number of 'changes per minute'? Below that number you're still the same person, and beyond that, you need to get a new name?

Moreover, where do we draw the line? At the body? At the Central Nervous System? Or simply the brain -- Where do we slice off this brain, at the level of the brainstem? At the level of the mesencephalon or the diencephalon? Why draw the line at the brain when only a few regions are active at a time, and only a few contribute to consciousness?

And if you were to slice up the brain, and explode it until it's large enough to drive a truck through, until you can see every individual neuron and neuroglia, where would you find consciousness? The neurons don't understand. The neurons are nothing, mean nothing. Why draw a line at all? A copy, an original, ten copies, ten perfect copies, a thousand randomly-modified copies, does it even matter?[/QUOTE] You are absolutely correct in that it's a big, nebulous mess. 
But I think we can agree that if your consciousness is uploaded to a computer and you are then killed, you are still dead, even if a computer copy continues to exist. The nature of consciousness over the course of a lifetime is uncertain and open to interpretation and debate, but the cold facts of such an operation are not, and I don't think many people see themselves abstractly enough to commit suicide on the basis that what amounts to a clone will keep on going.
[QUOTE=JgcxCub;31887529]I would personally prefer if more research went into improving humans directly (i.e. splicing) rather than computers. I mean, humans are already extremely good at this stuff. Why improve what's naturally designed (I don't mean intelligently designed btw) to be adaptable rather than what will take a hell of a lot more effort? I should make it clear that I'm [I]not[/I] saying don't research AI - instead, shift it more towards improving humans.[/QUOTE] Because these guys are interested in modeling brains on computers, not in working on other things? And any researcher looking into anything related to the brain would kill for an accurate computer model, because the average person isn't too enthusiastic about allowing people to poke around in their brain while alive, even if it is in the name of science.
All I can think of is Cortex Command right now.
[QUOTE=catbarf;31892609]You are absolutely correct in that it's a big, nebulous mess. But I think we can agree that if your consciousness is uploaded to a computer and you are then killed, you are still dead, even if a computer copy continues to exist. The nature of consciousness over the course of a lifetime is unsure and open to interpretation and debate, but the cold facts of such an operation are not, and I don't think many people see themselves abstractly enough to commit suicide on the basis that what amounts to a clone will keep on going.[/QUOTE] I agree. Don't get me wrong, I want an AI copy of myself to live on after I die, but I'm never going to kill myself to make the AI copy seem more real.