• Artificial Intelligence: Is It Possible and Is It Ethical?
    204 replies, posted
Personally, I think creating AI could be possible by combining neuroscience, engineering, physics, and chemistry. But would it be ethical to make a conscious being to aid the hopes and goals of mankind? I think it would, but only if they were treated like human beings.
IBM is building neuro-computers, which supposedly can "think" better than the human brain. As far as I know, a lot of the global financial system is controlled by machines like that. Some people even say they can calculate the future... Just for fun, IBM engineers made a "civil" version called Watson, and it won Jeopardy. [video=youtube;WFR3lOm_xhE]http://www.youtube.com/watch?v=WFR3lOm_xhE[/video] Sorry for my bad English.
I don't see why it wouldn't be ethical. We create people all the time and then impose rules on them so they are, at the very least, not a threat to society.
Possible? Yes. IBM's Watson is definitive proof. Ethical? WHO CARES, LET'S DO IT ANYWAY!
I don't see any ethical dilemma. It's as much of a dilemma as having a child or animal husbandry.
[QUOTE=JohnnyMo1;34378615]I don't see why it wouldn't be ethical. We create people all the time and then impose rules on them so they are, at the very least, not a threat to society.[/QUOTE] I don't see how it's ethical to have children. You have a right to your own body, and you're knowingly a partial cause of every pain and discomfort they suffer. I view that sort of procreation as a neutral action. Even if you create an intelligence where the former isn't a factor, you're still a partial cause of all of its pain and discomfort.
[QUOTE=Rubs10;34379156]I don't see how it's ethical to have children. You have a right to your own body, and you're knowingly a partial cause of every pain and discomfort they suffer. I view that sort of procreation as a neutral action. Even if you create an intelligence where the former isn't a factor, you're still a partial cause of all of its pain and discomfort.[/QUOTE] I am using ethical in the sense of "not unethical."
I can't think of any ethical arguments against creating human-level AI that don't also apply to creating humans.
Robot buddy!
[QUOTE=JohnnyMo1;34379578]I am using ethical in the sense of "not unethical."[/QUOTE] If they don't like their existence, the creator is partially at fault. You should have to be licensed to create AI. If you physically or mentally abuse an AI, or violate its rights, you shouldn't be allowed to create one. Creating an AI comparable to a human and then making its life hell should be illegal.
[QUOTE=Kastro;34376148]IBM is building neuro-computers, which supposedly can "think" better than the human brain. As far as I know, a lot of the global financial system is controlled by machines like that. Some people even say they can calculate the future... Just for fun, IBM engineers made a "civil" version called Watson, and it won Jeopardy.[/QUOTE]That is both hilarious and scary.
[QUOTE=Rubs10;34381380]If they don't like their existence the creator is partially at fault.[/QUOTE] Not really, unless the creator could have predicted it.
Yes, it is unethical. If an AI can feel emotions and so on, what happens if it ends up in a game like Grand Theft Auto? You would be killing someone's family; there would be funerals, etc.
[QUOTE=DarkCisco;34382346]Yes, it is unethical. If an AI can feel emotions and so on, what happens if it ends up in a game like Grand Theft Auto? You would be killing someone's family; there would be funerals, etc.[/QUOTE] I want to say that sounds very intriguing without being insensitive, but for science, this is a must.
Possible, yes. Ethical, yes.

And no, neither Watson nor any other supercomputer today can "think" better than a human brain. It's one thing to locate answers in a large database; it's completely different to do logical thinking and to be self-aware. But I'm sure we will get there eventually.

I think it's interesting to consider how we will look in the future. We are already replacing malfunctioning or lost body parts with artificial ones. Glasses and hearing aids are common, and so are artificial limbs. Will we ever be able to replace organs with even better artificial organs, and eventually the brain? At what point do we stop being humans and become "robots"?

And what happens then, when we have all become machines? We'll start replacing our limbs with wheels and wings and whatnot, since they work so much better anyway. Eventually our limbs and bodies will become useless, since the whole world is just a massive network of supercomputers. You don't need to travel anywhere physically; only the brain is needed, so it's much easier to "upload" your brain into a data bank. And then what, we are all just very complex programs? The programs start working together, forming bigger programs, until we are all unified into one enormous static program. What happens to our consciousness as we gradually travel down this line?

I like to think this is the "meaning" of life. The Big Bang happened, scattering every particle in the universe. Stuff starts to form: atoms, stars, planets, life. It's as if atoms have a natural desire to bond and form more complex entities. Life is unavoidable, and it grows more powerful as time goes by. We may even be able to control time itself. Eventually every particle, all the energy in the universe, will have merged into one single static system, a singularity. And then a big bang can happen again.

But that's just me speculating :)
Cool.
I like the show BSG because it goes over this kind of stuff.
[QUOTE=Rad McCool;34385738]And no, neither Watson nor any other supercomputer today can "think" better than a human brain. It's one thing to locate answers in a large database; it's completely different to do logical thinking and to be self-aware.[/QUOTE] Watson can think logically. Not as well as a human, but logically nonetheless.
The only problem would be hacking and breaches. You make it idiot-proof, somebody makes a better idiot. In this case, you make it hacker-proof, somebody makes a better hacker.
Possible? Yes. Ethical? Why wouldn't it be? [editline]27th January 2012[/editline] [QUOTE=!TROLLMAIL!;34416021]The only problem would be hacking and breaches. You make it idiot-proof, somebody makes a better idiot. In this case, you make it hacker-proof, somebody makes a better hacker.[/QUOTE] What if we installed some kind of safeguard that blocks functions and shuts the system down if it gets hacked?
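As a rough sketch of what that kind of safeguard might look like, here's a toy supervisor loop in Python. Everything here is hypothetical and purely illustrative (the `Agent` class, the integrity-hash scheme, the function names): the supervisor fingerprints the agent's policy at startup and halts immediately if the fingerprint ever changes, which is one crude stand-in for "detecting that it got hacked."

```python
import hashlib

class Agent:
    """Hypothetical stand-in for an AI process being supervised."""
    def __init__(self):
        self.policy = b"approved-policy-v1"

    def step(self):
        pass  # one unit of work per cycle

def policy_fingerprint(agent):
    # Hash the agent's policy bytes, so any tampering changes the digest.
    return hashlib.sha256(agent.policy).hexdigest()

def run_with_killswitch(agent, max_steps=1000):
    """Run the agent, halting immediately if its policy is modified."""
    expected = policy_fingerprint(agent)
    for _ in range(max_steps):
        if policy_fingerprint(agent) != expected:
            return "halted: integrity check failed"
        agent.step()
    return "completed"
```

Of course, a real attacker who can modify the policy could likely also modify the watchdog, which is exactly the "somebody makes a better hacker" problem from the quoted post.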
The aspect I dislike about artificial intelligence is that humans become more and more [i]obsolete[/i], and a lot of people will suffer because their abilities will be replaced by A.I. Besides, humans are becoming more and more dependent on electronic devices, which is not necessarily bad but could backfire one day.
I believe AI can only really be simulated, since making a real AI will probably take many years and millions of pounds/dollars. As for ethics, I guess Christians or some other religion would probably say 'God is the only one who is allowed to make life!' In fact, I believe it's that whole religious 'I have a problem with this, so fuck you' attitude, which comes with everything from stem cells to this, that has slowed our progress in creating better technology, technology that I believe could really improve our future.
Watson is not real AI; it's just a piece of code following instructions. [url]http://en.wikipedia.org/wiki/Blue_Brain_Project[/url] This is by far the closest we've come to simulating a real brain. They are trying to simulate it at the molecular level. [quote]In November 2007,[5] the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column. By 2005 the first single cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011 a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain is planned for 2014 with 100 mesocircuits totalling a hundred million cells. Finally a cellular human brain is predicted possible by 2023 equivalent to 1000 rat brains with a total of a hundred billion cells.[6][7] Now that the column is finished, the project is currently busying itself with the publishing of initial results in scientific literature, and pursuing two separate goals: construction of a simulation on the molecular level,[1] which is desirable since it allows studying the effects of gene expression; simplification of the column simulation to allow for parallel simulation of large numbers of connected columns, with the ultimate goal of simulating a whole neocortex (which in humans consists of about 1 million cortical columns). [/quote] [media]http://www.youtube.com/watch?v=_rPH1Abuu9M[/media] [media]http://www.youtube.com/watch?v=wDY4cFJauls[/media] [media]http://www.youtube.com/watch?v=h06lgyES6Oc[/media]
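For anyone wondering what "simulating neurons" even means: Blue Brain models cells at far finer biological detail, but a minimal toy version is the classic leaky integrate-and-fire neuron. This Python sketch (parameter values and function name are my own, purely illustrative) shows the basic idea: the membrane voltage leaks toward rest, input current pushes it up, and crossing a threshold fires a spike and resets it.

```python
def simulate(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_threshold=-50.0, v_reset=-65.0, resistance=10.0):
    """Leaky integrate-and-fire neuron: return the time steps at which it spikes."""
    v = v_rest
    spikes = []
    for step, current in enumerate(input_current):
        # Voltage decays toward rest and is driven up by the input current.
        dv = (-(v - v_rest) + resistance * current) / tau
        v += dv * dt
        if v >= v_threshold:
            spikes.append(step)
            v = v_reset  # fire a spike and reset
    return spikes

spike_times = simulate([2.0] * 200)  # constant drive produces repeated spikes
```

The Blue Brain columns simulate tens of thousands of cells that are each vastly more detailed than this, plus the synapses connecting them, which is why it takes a supercomputer.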
Why is everyone here so focused on a so-called human-level AI? What the fuck kind of AI researcher wants to focus on something so [I]pointless[/I] as a machine that can think like a human? If you want something that can think like a human, get laid and wait 9 months. I don't think people really understand the sheer [I]danger[/I] of making an AI with a greater-than-human intelligence, which is much more likely to come about.
[QUOTE=DainBramageStudios;34419314]What the fuck kind of AI researcher wants to focus on something so [I]pointless[/I] as a machine that can think like a human?[/QUOTE] How is having a machine that can teach itself to talk, and much more, pointless?
[QUOTE=DainBramageStudios;34419314]I don't think people really understand the sheer [I]danger[/I] of making an AI with a greater-than-human intelligence, which is much more likely to come about.[/QUOTE] Terminator?
[QUOTE=Satane;34420435]How is having a machine that can teach itself to talk, and much more, pointless?[/QUOTE] Because it's aiming so low. It would be as if early computer scientists had set out to create a machine that could only do arithmetic at the same rate as human minds. [editline]27th January 2012[/editline] [QUOTE=dass;34420445]Terminator?[/QUOTE] no
[QUOTE=DainBramageStudios;34420485]Because it's aiming so low. It would be as if early computer scientists had set out to create a machine that could only do arithmetic at the same rate as human minds.[/QUOTE] They have to start with something; they're making rat brains now. If they ever manage to make a real human-level AI, the next step is of course multiple human minds, just like they had to start with simple computers. Your point doesn't make any sense.
[QUOTE=Satane;34421549]they have to start with something, they're making rat brains now. if they even manage to actually make real human-level AI, the next step is of course multiple human minds. just like they had to start with simple computers... your point doesn't make any sense.[/QUOTE] It only seems obvious because you are a human mind and you can't imagine other possible mind configurations very well. The template for the human mind is only a single point in a vast space of possible mind types. Humanlike minds will not be created before more advanced general intelligences. AI researchers are interested in the latter while neuroscientists are interested in the former. Guess which are the ones actually building the machines?
[QUOTE=DainBramageStudios;34421689]It only seems obvious because you are a human mind and you can't imagine other possible mind configurations very well. The template for the human mind is only a single point in a vast space of possible mind types.[/QUOTE] You're contradicting yourself, but I agree with what you said this time.