[QUOTE=woolio1;45683704]There are lots of criticisms of Asimov's laws...
Honestly, I wonder if it wouldn't be better to just not bind robots to any strict code of artificial laws, and instead program them with some standard of noninterference with humanity. Basically, tell them not to interact with the social direction of humanity as a whole, including mass-salvation or mass-destruction.[/QUOTE]
So that if we need to interact with them it is nigh impossible to do so?
[QUOTE=WitheredGryphon;45683725]So that if we need to interact with them it is nigh impossible to do so?[/QUOTE]
I said prevent them from influencing our social direction, not prevent them from interacting with humans. There wouldn't be much of a point to sentient AI if we did that.
I would like to live to see a world in which working is something you do because you want to, not because you need to; where people are free to follow artistic and scientific pursuits.
[QUOTE=JoeSkylynx;45681925]Our existence as a whole becomes sorta pointless though. Why be alive if your work and existence is occupied by something already infinitely better than you?[/QUOTE]
Perhaps there will be a massive religious revival. How can any robot, no matter how advanced, worship God like His own creations can?
[QUOTE=woolio1;45683755]I said prevent them from influencing our social direction, not prevent them from interacting with humans. There wouldn't be much of a point to sentient AI if we did that.[/QUOTE]
Social direction is a concept we made up, and it's far too broad to implement in an artificial intelligence's programming. The only way to prevent them from influencing us would be for them to ignore us entirely.
For example, if a robot took over a company and people didn't like what came out of that company, would that count as influencing social direction?
The definition is just too broad. It would be better to govern them with explicit laws so they can't abuse any loopholes they run into. Laws also allow small-scale testing that can be rolled out to larger scales after successful results, unlike letting them think for themselves with nothing but "don't socially influence us" to govern them.
[QUOTE=lazyguy;45683784]Perhaps there will be a massive religious revival. How can any robot, no matter how advanced, worship God like His own creations can?[/QUOTE]
I can (not really, but hypothetically) program a robot to do 24/7 sermons, worship, baptisms, preaching, what have you. If it can think for itself, is there any real difference between it and a human?
So what's the solution?
[QUOTE=WitheredGryphon;45683800]I can (not really but hypothetically) program a robot to do 24/7 sermons, worship, baptisms, preach, what have you. If it can think for itself is there any real difference to it from a human?[/QUOTE]
1. How can a machine programmed to have something occupying its time all day every day think for itself?
2. Why do you apply such a mechanistic viewpoint to matters of the Divine?
[QUOTE=Pantz Master;45683892]So whats the solution?[/QUOTE]
There really isn't one if we keep advancing our capacity for automation.
Of course, as long as companies rely on people to pay for their products, people will be employed at companies producing them. If you don't have people working, you don't have people getting paid, and those companies that are replacing their employees with robots won't make any money. The economy essentially prevents a complete automation of itself. (You could sidestep that by moving to a non-currency based economy, or using extensive welfare... But I'm not really sure why you'd do that if it weren't to just get everyone out of a job.)
I don't agree with the bit about chess. It was easily conceivable that a computer could learn to play chess: it's a game of fully defined rules, which translate directly into a program.
Other jobs and tasks are far less well defined; emulating them with robots will require complex models and huge amounts of data, neither of which is arriving anytime soon.
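To make the "defined rules" point concrete, here's a toy sketch in Python. It uses tic-tac-toe standing in for chess (same principle, just a tiny game tree), and the `winner`/`minimax` names are mine, not anything from the thread: once the rules are fully specified, a program can simply search every outcome.

```python
# With the rules fully written down, a computer can evaluate a game
# exhaustively. Tic-tac-toe is small enough to search completely;
# chess works the same way, just with a vastly larger tree.

def winner(board):
    """Return 'X' or 'O' if someone has won on a 9-char board, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position for 'X': +1 win, -1 loss, 0 draw, assuming perfect play."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0  # board full, no winner: draw
    scores = []
    for m in moves:
        nxt = board[:m] + player + board[m+1:]
        scores.append(minimax(nxt, 'O' if player == 'X' else 'X'))
    return max(scores) if player == 'X' else min(scores)

# From an empty board, perfect play by both sides is a draw.
print(minimax(' ' * 9, 'X'))  # 0
```

That "search the whole tree" trick is exactly what made chess an easy target compared to, say, stocking shelves: nobody can write down the full rules of stocking shelves.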
[QUOTE=TurboSax;45681480]Realistically, they'd probably be better and less consumer-abusive CEOs.
Many human CEOs (not all) typically only care about themselves and their closest personal allies, to the point of actively pissing on normal people despite how it might hurt the business and its profits. In the eyes of many CEOs, you, me and all of the people they don't have to compete/ally with in-person can go die in a ditch after handing over that sweet cash.
A computerized CEO would at least be able to notice that treating consumers like shit reduces profits to some degree, due to consumers refusing to purchase from the company and spreading negative opinions. Therefore, it wouldn't let that happen, and would try to ensure a positive image while also engaging in standard subterfuge, bribery and other such corporate fuckery under the radar. It wouldn't care about its own personal wealth, and it wouldn't have any need for friends and alliances on a non-corporation-wide level, so it would merely focus on maximizing profit.
I mean, a synthetic CEO wouldn't exactly be an angel, since the best ways to maximize profit as a modern corporation almost always involve illegal and immoral acts too numerous to count, but it'd still be better than the human version. At the very least, we'd get fucked over with a smile, which is still a step up from the current standard.[/QUOTE]
CEOs are these weird things that steal all our money, right? the news told me i swear
[QUOTE=OrDnAs;45683917]CEOs are these weird things that steal all our money, right? the news told me i swear[/QUOTE]
maybe CEO's are just the medium which turn computer algorithms into maximized profit
[QUOTE=lazyguy;45683901]1. How can a machine programmed to have something occupying its time all day every day think for itself?
2. Why do you apply such a mechanistic viewpoint to matters of the Divine?[/QUOTE]
Unlike humans, machines have the ability to truly multitask.
Take, for example, leaving a computer on for twenty-four hours with a browser window open. You can browse the internet all you like and the computer will respond as quickly as it can, while in the background it is freeing and allocating memory for the many processes running all the while.
Now, if a machine could think for itself, the "occupying its time all day every day" part is no different from a person browsing the internet. Machines, however, can truly multitask without any human limitations (sleep, eating, bathroom breaks, etc.), so given enough processing power it could be thinking in the background the whole time, just as your computer manages memory while you use it.
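The background-work idea is easy to demonstrate. Here's a minimal sketch in Python (the `background_work` function and the tick counter are just illustrative names I've picked): one thread keeps ticking away while the main thread handles its own task, much like an OS doing housekeeping behind your browser.

```python
# One thread works continuously in the background while the main
# thread stays free to do its own computation.
import threading
import time

counter = {'ticks': 0}
stop = threading.Event()

def background_work():
    # Keeps running until asked to stop, like a housekeeping process.
    while not stop.is_set():
        counter['ticks'] += 1
        time.sleep(0.001)

worker = threading.Thread(target=background_work)
worker.start()

# The "foreground" does its own job meanwhile.
foreground_result = sum(range(1000))
time.sleep(0.05)  # give the background thread a moment to run

stop.set()
worker.join()

print(foreground_result)     # 499500
print(counter['ticks'] > 0)  # True: the background thread ran concurrently
```

Scale that up to thousands of processes and you get the picture: nothing about "being busy all day" stops a machine from also thinking on the side.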
Now, you assume that because you're human, everything religiously speaking belongs to you. You may very well be right; I'm not the judge of that. But say I programmed a machine around Christianity. Assuming it can think for itself, it would look for ways to apply itself to the Bible and for ways the Bible applies to it. Again, this machine is of equal or greater intelligence than you, with the addition of being a machine, so it could interpret scripture far more easily than you could. Would that not make it far superior to preachers?
Because it can think, we can assume it can also interact and teach. We already have robots capable of doing so. This preacher robot would then be able to "preach" to other robots without a religion, were they designed to seek one, and these robots would be able to accept said religions.
Again, I re-iterate, if it can think for itself then the only thing that separates us is what we're made of.
Ya see, this is when we go the Iain M. Banks route.
We automate everything, robots doing everything we would ever need, including war and the like, while we all live ridiculous cushy lives doing frivolous things without an economy or money to worry about.
Long live the Culture.
We'd still have an economy. Communism is bound to fail in the same manner true Capitalism is bound to fail.
This is perfectly relevant. Not so much the murder part but more everything else.
[media]http://www.youtube.com/watch?v=05bGPiyM4jg[/media]
A lot of people ignore the fact that we can create tools, or even cybernetics, that make us equal to robots in efficiency.
Cybernetics, prosthetics and AI are all moving forward; no matter how much AI progresses, we'll make something to give us an edge over our AI counterparts.
Can we all just agree to shoot for a Futurama future?
[QUOTE=Swilly;45685207]A lot of people ignore the fact that we can create tools or even create cybernetics that could make ourselves equal to robots in efficiency.
Cybernetics, Prothestics and AI are moving forward, doesn't matter how much AI progress, we'll make something to give us a benefit over our AI counterparts.
Can we all just admit to shoot for a Futurama future?[/QUOTE]
How could using a machine to upgrade a human make a human better than a machine?
Maybe having no need for work and complete abundance will force us to transfer to a system that is not based on scarcity?
[QUOTE=kaine123;45686451]Maybe having no need for work, and complete abundance will force us to transfer to a system that is not based on scarcity?[/QUOTE]
God willing. It's definitely a possibility that's been brought up.
[url="http://en.wikipedia.org/wiki/Post-scarcity_economy"]Post-scarcity Economy[/url]
[QUOTE=WitheredGryphon;45686596]God willing. It's definitely a possibility that's been brought up.
[url="http://en.wikipedia.org/wiki/Post-scarcity_economy"]Post-scarcity Economy[/url][/QUOTE]
Scarcity is unavoidable, though. Nothing in the universe is infinite, and capitalism of some sort will always persist on colony worlds.