[QUOTE=pac0master;45394235]But the thing is, the day we can create more robots to create products, etc., will be the extinction of capitalism.
No job? No money, no way to buy products. (Large) companies can't sell anymore.[/QUOTE]
I agree with you completely; however, capitalism will probably fail even without such massive industrialization. The ideal capitalist society relies on total product consumption, meaning that every single person has to buy and buy and buy. Not only is that impossible in our time, but the current financial crisis, the widening gap between rich and poor, and even the depletion of Earth's resources may cause the demise of the "middle" class and thus make capitalism impossible in most places. Of course, I might be completely wrong; some people think capitalism will prosper, though that's not an opinion I share.
But don't let me derail the topic, continue speaking about technology.
I had always thought that Data from Star Trek was an accurate representation of what intelligent AI in the future would be like. He is sentient, aware of his existence and smart enough to function like a normal human. But at the same time he still isn't quite like your average human, and this makes it very obvious that he's an android. He continually shows a nuanced sense of wisdom, sensitivity, and curiosity which feels slightly different from how the average human would act.
Granted, if and when we ever reach that point in technology, I'm sure they'd keep improving on it until we reach a fully human-like AI.
[QUOTE=UntouchedShadow;45399171]I had always thought that Data from Star Trek was an accurate representation of what intelligent AI in the future would be like. He is sentient, aware of his existence and smart enough to function like a normal human. But at the same time he still isn't quite like your average human, and this makes it very obvious that he's an android. He continually shows a nuanced sense of wisdom, sensitivity, and curiosity which feels slightly different from how the average human would act.
Granted, if and when we ever reach that point in technology, I'm sure they'd keep improving on it until we reach a fully human-like AI.[/QUOTE]
I'll take Data over Lore any day.
[QUOTE=Sally;45278482]We've managed to create some functioning organs.
[URL]http://video.pbs.org/video/1754537562/[/URL]
If I could embed this I would[/QUOTE]
'Murican only. Use Hola if you want to watch this from Britain.
[B]
Edit: It's on YouTube
[/B][video=youtube;jLs8DeHVkec]http://www.youtube.com/watch?v=jLs8DeHVkec[/video]
The faster technology develops the better, FTL drives please and thank you.
[QUOTE=Killer monkey;45402634]The faster technology develops the better, FTL drives please and thank you.[/QUOTE]
There's a difference between technological progress and breaking the laws of physics.
Transferring our consciousness to another medium, or enhancing our bodies to live longer so as not to suffer from the length of the journey, is theoretically possible; FTL travel is not.
[QUOTE=pac0master;45341887]That's why I find this scary.
On one side I find the idea awesome, but I also see the problems, so I can't make up my mind about it[/QUOTE]
It's not great, but I do suggest you watch Gattaca.
It involves engineering human children to have all of the best genes from each respective parent.
However, I have to say that the ramifications of perfecting our children's genes are not nearly as terrifying as a future with more sophisticated AIs, humanoid robotics and prosthetic technologies.
I honestly think our generation will witness what we would call cyborgs, but I'm sure the term will change in time based on what we define as being more or less human, as we create not only robotic/synthetic limbs but organs as well.
The only things I really fear are teleportation and imaging/re-imaging brains.
One thing I want to see in my lifetime is a means of preventing death from cardiac arrest: a device, carried around by EMTs, that can quickly supply oxygenated or artificial blood to the brain. It seems so senseless that so many people die of something so primitive.
[QUOTE=Damaximos;45412319]However, I have to say that the ramifications of perfecting our children's genes are not nearly as terrifying as a future with more sophisticated AIs, humanoid robotics and prosthetic technologies.[/QUOTE]
Why do people find this terrifying?
[QUOTE=Sector 7;45433198]Why do people find this terrifying?[/QUOTE]
My guess is they are afraid that with such sophisticated AIs a singularity might occur.
[QUOTE=Sector 7;45433198]Why do people find this terrifying?[/QUOTE]
Have you ever seen that episode of Star Trek TNG where Data asks the computer to create an opponent that could potentially beat him? Basically, it leads into all sorts of problems.
[QUOTE=Sector 7;45433198]Why do people find this terrifying?[/QUOTE]
We simply fear the unknown. They might fear that we'll create something we won't have any control over.
[QUOTE=Sector 7;45433198]Why do people find this terrifying?[/QUOTE]
We fear objects that have agency but that we can't intuitively apply a theory of mind to.
In the future...
computers will become the programmers and, humans will become the processors.
[B]The End.[/B]
I recently watched the movie Transcendence; it features one of the things I was thinking about.
What about a technology capable of transferring our minds into a computer or something?
GLaDOS is another example.
[QUOTE=Zenreon117;45340643]I don't know about you, but if genetic engineering becomes a thing, I will be one of those people who just utterly refuse to do it. There will be a schism in the races of humanity when that happens. I prefer to keep my children's DNA intact. I'll be the old geezer grumbling about where society has gotten to.[/QUOTE]
Enjoy being left behind as those around you succumb to perfection. Your child shall become an outcast, an outcast who is both physically and mentally inferior to the norm. Your child will grow discontented with you for bestowing a life of inadequacies when you were offered the chance to perfect him. He will have no job, as his skills cannot compete in the market, and thus he cannot sustain a livelihood. He will have no friends, as he fails to comprehend the advanced forms of social interaction. He will be forgotten in the dust, with no one crying out injustice.
Tradition has no role in the future, friend.
[editline]18th August 2014[/editline]
[QUOTE=Nighty;45346828]On the topic of technology, morals and robots replacing humans, do you think a simple machine would ever be able to replace a human being? I mean, people talk about how there will be nothing but robots in the military and police, and yet no one so far has been able to give "emotions" to a machine.[/QUOTE]
Emotions stem from desire. There has to be a stimulus or motive of some sort.
To create a form of intelligence like the human species' and implant it into artificial life would be very, very difficult.
The reason is that you have to answer the question "why?"
Right now, through programming, humans create reasons/motives in machines. However, this means that machines are wholly reliant on the human mind to create reason. To suggest that a machine could program a motive into another machine would mean that the initial parent machine must have an existing motive. That raises the question of why that initial parent machine is programming the second machine. So we've reached a dilemma: artificial intelligence cannot be independent of the human species.
The thing about this whole "singularity" is that technology doesn't just suddenly go "Hey, I'm alive". All of the functions of a machine will always require some form of human input; nothing is completely automatic. And even if a machine DOES become fully automatic, that's because of human input telling it what to do.
There will be no robot uprising, or anything of the sort. No matter how autonomous, it's still just running pre-programmed sequences made by humans.
[QUOTE=SirLemon;45733202]The thing about this whole "singularity" is that technology doesn't just suddenly go "Hey, I'm alive". All of the functions of a machine will always require some form of human input; nothing is completely automatic. And even if a machine DOES become fully automatic, that's because of human input telling it what to do.
There will be no robot uprising, or anything of the sort. No matter how autonomous, it's still just running pre-programmed sequences made by humans.[/QUOTE]
We've made robots that are capable of learning in some ways.
The thing about the singularity isn't just that "robot uprising"; it's also the point where technology grows so fast that it becomes impossible to predict what's next. Similar to how today's technology would look to someone in the Bronze Age, but within about one generation (about 20-30 years).
In the computing domain, the singularity is the point where a computer's calculating power exceeds that of all mankind. You simply need to make the computer able to learn, which kind of exists already.
We've created autonomous robots capable of subsisting by themselves; I think they deployed some at the border of North and South Korea. They are automatic and can shoot anyone entering the restricted area without being connected to any command centre or anything.
Scientists are studying brains and trying to recreate a robotic one. It's not impossible. Plus, since technology is evolving faster over time, it's possible we'll make robots capable of thought one day.
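To be clear about what "a computer able to learn" means in the most minimal sense, here's a hedged sketch: a single perceptron that learns the logical AND function from examples. Everything here is illustrative; nothing about this toy resembles general intelligence, but it is genuinely a machine adjusting its own behavior from data rather than following a fixed lookup.

```python
# A minimal learning loop: a perceptron trained on the AND function.
# Inputs and targets: AND is 1 only when both inputs are 1.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):  # a few passes over the data suffice for AND
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        # Classic perceptron update: nudge weights toward correct output.
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the learned weights reproduce AND on all four inputs.
for (x1, x2), target in samples:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), pred)
```

Since AND is linearly separable, the perceptron convergence theorem guarantees this loop settles on correct weights; the gap between this and a "thinking" machine is the whole open problem.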
I think that before we even CONSIDER true artificial intelligence, we need to figure out how to make a morality chip based on Asimov's laws of robotics or something similar. If we did, then, in theory, we would not have any Skynet-type snafus.
Asimov's Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
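Just to make the structure of those laws concrete: they form a strict priority order, where each law only applies when the laws above it are satisfied. A toy sketch of a "morality chip" as a veto filter might look like this. All the names and the boolean flags here are hypothetical simplifications; a real system obviously couldn't reduce "will this harm a human?" to a precomputed flag, which is exactly where the laws break down in practice.

```python
# Hypothetical sketch of Asimov's laws as a priority-ordered action filter.
# The boolean fields stand in for judgments no real robot can actually make.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would executing this harm a human?
    inaction_harms: bool = False    # would *not* acting let a human come to harm?
    ordered_by_human: bool = False  # was this action ordered by a human?
    self_destructive: bool = False  # would it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human...
    if action.harms_human:
        return False
    # ...and don't allow harm through inaction (overrides the laws below).
    if action.inaction_harms:
        return True
    # Second Law: obey human orders (First Law already checked above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that endanger the robot itself.
    return not action.self_destructive

print(permitted(Action("push human", harms_human=True)))           # False
print(permitted(Action("fetch coffee", ordered_by_human=True)))    # True
print(permitted(Action("walk into fire", self_destructive=True)))  # False
```

The priority ordering is the easy part; the unsolved part is computing those flags, which is the point the replies below make about perception only covering a fraction of reality.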
[QUOTE=lilshatespug;45910462]I think that before we even CONSIDER true artificial intelligence, we need to figure out how to make a morality chip based on Asimov's laws of robotics or something similar. If we did, then, in theory, we would not have any Skynet-type snafus.
Asimov's Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[/QUOTE]
Even if a robot's behavior were programmed to strictly follow those rules, its perception only covers a fraction of reality, and it could still violate them unwittingly, or through manipulation by other human beings.
[editline]11th September 2014[/editline]
I think the best way to deal with these kinds of fictional dilemmas is to always keep the upper hand over our future artificial companions. Something like a shutdown button or EMP charges.
Alternatively, if we end up with AI that is sentient enough, we could consider them our equals and maintain a symbiotic relationship. (For example, think of an AI-controlled space mission set to terraform some backwater planet which, upon completion, would enable humans to use its resources, in part to further the existence of other AIs.)
[QUOTE=lilshatespug;45910462]I think that before we even CONSIDER true artificial intelligence, we need to figure out how to make a morality chip based on Asimov's laws of robotics or something similar. If we did, then, in theory, we would not have any Skynet-type snafus.
Asimov's Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[/QUOTE]
Actually, towards the end of his life, when sci-fi had moved past robots as mechanical workers, he worked with another author to create a set of laws that more or less allowed for AI with free intelligence while still protecting humanity. Basically, the robots were endowed with a need to protect humans, but at the same time they weren't required to be servants of humans either.
Another later example is the artilects in Bowl of Heaven and its sequel Shipstar: they are AIs that are basically free-willed and as curious as ever. They actually work with the humans aboard the ship, as they wouldn't be able to survive without us, and we wouldn't be able to survive without them.
As I see it, when we do create artificial intelligence, it would be unethical to force it into servitude, and it would be in our best interest to allow it to learn, with limits though; allowing AIs to replicate without limit would be a danger to all humanity, as they would require exponentially more resources to be devoted to them.
At what point in time do you think technological advancement will become "immoral", and why?
[QUOTE=pac0master;46337006]At what point in time do you think technological advancement will become "immoral", and why?[/QUOTE]
When we start separating those who use technology to improve their physical and mental abilities from those who choose not to augment themselves. It will always start with discrimination.
My fellow humans want to turn this world into a fucking nightmare. Every day I am terrified to death thinking about what kind of future I'm being dragged into. Thanks a lot.