Imagine what things will be like when computers start being CEOs. Terrifying.
I bet there will be robot speechwriters and campaign managers too. Everything important will be run by a robot.
I guess, if you want to avoid this over your lifetime, it might not be a bad idea to look into jobs that aren't as reliant on menial labor.
Sure, robots will learn to write music, make artwork, all that sort of thing. They'll learn to write books, and novels, and news articles... But they don't yet have the human qualities that make those works, which are very much a personal thing, personal.
And that's a big part of why automation won't come nearly as quickly to those markets. Even in the wake of increasing automation, the journalism industry is actually growing fairly rapidly. That's not adding more robots to the job, that's hiring more reporters, content creators, website designers, IT professionals, and everything else that's fueling the massive shift from Meatspace to Webspace. The biggest threat right now is social media aggregation, but that has massive flaws that will need to be accounted for before we have "reporter bots" going around interviewing people.
And the same goes for art, and music, and cinema, and photography, and writing, and everything else that relies on the person experiencing it knowing that a person created it. Essentially, the things that currently rebel against the mass-produced, corporatized, consumerist culture will continue to do so, and will continue to exist, by the sheer nature of what they are.
Everything else, though? All the menial labor, all the science, all the manufacturing, all the office work, that's all going to go away very, very soon. Perhaps within our lifetimes. And that's going to be disastrous to the economy unless we can find a way to transition smoothly.
What about robot therapists? This is some Asimov-level shit right here.
[QUOTE=Cabbage;45680977]People will still have jobs.
In the Robot Resistance.
[editline]13th August 2014[/editline]
Tell me again how you measure creativity?[/QUOTE]
Creativity's not exactly quantitative. Robots can make brilliant masterpieces, but part of the reason we're drawn to the creative arts is because we have an emotional link to them.
Something without emotion creating something meant to evoke emotion just doesn't work, at least with our current way of understanding those things. It's the same reason people buy Picassos for millions of dollars; would you buy something made by a robot for the same? (It could also be argued that, in the case of the Picassos, if you have an original it's not being sold again. A robot can churn out hundreds of the exact same painting, killing that inherent rarity, and thereby the uniqueness of the piece.)
[QUOTE=SamPerson123;45680984]Imagine what things will be like when computers start being CEOs. Terrifying.[/QUOTE]
"C'mon boss, can I at least get a raise?"
"I'm sorry Sam, I'm afraid I can't do that."
[QUOTE=Sharker;45680971]Robots will still never be as creative as humans.[/QUOTE]
I don't believe that.
One popular argument is that human creativity and mistakes are unique, and that our imperfection is special in a way that no machine could ever replicate.
Except the question is: if given enough time, wouldn't a machine theoretically be capable of replicating human error and unpredictable creativity, eventually mimicking them to the point where you'd never be able to tell the difference? I mean, that's the entire purpose of self-learning AI, isn't it?
If a machine is given a unique stream of data or code that basically condenses the entirety of human history (everything from war to art) down to processable data, I don't see why an AI wouldn't theoretically be able to use that data to create new art, or new war tactics, or what have you.
I don't know shit about computer engineering though, so I'm just talking out my ass.
[QUOTE=SamPerson123;45680984]Imagine what things will be like when computers start being CEOs. Terrifying.
I bet there will be robot speechwriters and campaign managers too. Everything important will be run by a robot.[/QUOTE]
We need robocommunism
[editline]13th August 2014[/editline]
[QUOTE=Melnek;45681065]I don't believe that.
One popular argument is that human creativity and mistakes are unique, and that our imperfection is special in a way that no machine could ever replicate.
Except the question is: if given enough time, wouldn't a machine theoretically be capable of replicating human error and unpredictable creativity, eventually mimicking them to the point where you'd never be able to tell the difference? I mean, that's the entire purpose of self-learning AI, isn't it?
If a machine is given a unique stream of data or code that basically condenses the entirety of human history (everything from war to art) down to processable data, I don't see why an AI wouldn't theoretically be able to use that data to create new art, or new war tactics, or what have you.
I don't know shit about computer engineering though, so I'm just talking out my ass.[/QUOTE]
Computers have fewer memory problems, so they'll forget things less often.
Suddenly the Amish don't seem so dumb.
time for soylent green
[QUOTE=_Kent_;45681303]Suddenly the Amish don't seem so dumb.[/QUOTE]
Nah they still seem pretty dumb :v:
[QUOTE=SamPerson123;45680984]Imagine what things will be like when computers start being CEOs. Terrifying.[/QUOTE]
Realistically, they'd probably be better and less consumer-abusive CEOs.
Many human CEOs (not all) typically only care about themselves and their closest personal allies, to the point of actively pissing on normal people despite how it might hurt the business and its profits. In the eyes of many CEOs, you, me and all of the people they don't have to compete/ally with in-person can go die in a ditch after handing over that sweet cash.
A computerized CEO would at least be able to notice that treating consumers like shit reduces profits to some degree, due to consumers refusing to purchase from the company and spreading negative opinions. Therefore, it wouldn't let that happen, and would try to ensure a positive image while also engaging in standard subterfuge, bribery and other such corporate fuckery under the radar. It wouldn't care about its own personal wealth, and it wouldn't have any need for friends and alliances on a non-corporation-wide level, so it would merely focus on maximizing profit.
I mean, a synthetic CEO wouldn't exactly be an angel, since the best ways to maximize profit as a modern corporation almost always involve illegal and immoral acts too numerous to count, but it'd still be better than the human version. At the very least, we'd get fucked over with a smile, which is still a step up from the current standard.
[QUOTE=RichyZ;45681523]except they dont have videogames and anime and porn
and videogame anime porn
[B]also this just means we have to get over the mentality that we need to work all our lives and instate a basic income when it becomes viable[/B][/QUOTE]
Our existence as a whole becomes sorta pointless though. Why be alive if your work and existence is occupied by something already infinitely better than you?
[QUOTE=JoeSkylynx;45681925]Our existence as a whole becomes sorta pointless though. Why be alive if your work and existence is occupied by something already infinitely better than you?[/QUOTE]
It seems to me like the future is having any kind of menial jobs like construction workers and cashiers driven by robots and all creative jobs, such as architects and musicians, pursued by humans.
[QUOTE=JoeSkylynx;45681925]Our existence as a whole becomes sorta pointless though. Why be alive if your work and existence is occupied by something already infinitely better than you?[/QUOTE]
Work isn't a goal of life; it's a thing we need to do. And existence is pointless if you think about it. Because you will die one day (thus rendering everything you have done pretty pointless), the best you can do is have fun.
This'll only be a problem if new job fields aren't constantly being created as well
[editline]13th August 2014[/editline]
I mean, companies aren't stupid enough to fire everyone and replace them with robots
Who's gonna buy their stuff
[QUOTE=Jund;45682516]This'll only be a problem if new job fields aren't constantly being created as well[/QUOTE]Those new jobs do not have the capacity to satisfy our rapidly growing population.
[QUOTE=itisjuly;45682535]Those new jobs do not have the capacity to satisfy our rapidly growing population.[/QUOTE]
Do you have any proof of that cause it's kinda been happening for thousands of years
[editline]13th August 2014[/editline]
The horses argument doesn't even make any sense
They're living hella cushy lives right now and they still get paid the same, which is in food
Why do people have such massive boners for working for the sake of working, even if you get nothing in return (as in the case of the horses)?
I seriously doubt horses talk about the job market at the watercooler
[QUOTE=Jund;45682558]Do you have any proof of that cause it's kinda been happening for thousands of years
[editline]13th August 2014[/editline]
The horses argument doesn't even make any sense
They're living hella cushy lives right now and they still get paid the same, which is in food
Why do people have such massive boners for working for the sake of working, even if you get nothing in return (as in the case of the horses)?
I seriously doubt horses talk about the job market at the watercooler[/QUOTE]
Because I live a rather cushy life, and I feel fucking terrible half the time?
Cushy lives are boring; they don't give you the fun and entertainment of actually doing something with your own two hands, no matter how boring the work is.
Not to mention, humans need places to socialize and grow together, and if our jobs are occupied by robots in such a way that you cannot socialize outside of public education, life becomes rather pointless and unthrilling.
I think the most realistic thing that can be said about this is: humans will always need something larger than themselves, or something that threatens their existence to a degree. Without that, we are nothing. Without that fear, we grow discontent and look for something to fill that void, be it pleasure from dangerous substances or experiences.
That's humanity in a nutshell, dude; we are incredibly self-destructive.
[QUOTE=JoeSkylynx;45682727]Because I live a rather cushy life, and I feel fucking terrible half the time?
Cushy lives are boring; they don't give you the fun and entertainment of actually doing something with your own two hands, no matter how boring the work is.
Not to mention, humans need places to socialize and grow together, and if our jobs are occupied by robots in such a way that you cannot socialize outside of public education, life becomes rather pointless and unthrilling.
I think the most realistic thing that can be said about this is: humans will always need something larger than themselves, or something that threatens their existence to a degree. Without that, we are nothing. Without that fear, we grow discontent and look for something to fill that void, be it pleasure from dangerous substances or experiences.
That's humanity in a nutshell, dude; we are incredibly self-destructive.[/QUOTE]
You have no one to blame for that boredom except yourself
The only people who'll end up looking like they belong in Wall-E are those who accept it
If the primary reason you socialize is because you're forced to due to school or work then it's your own problem
sure, automation is inevitable according to the video / what's happening these current years, but when things really start to get automated, i don't think creative professions like art and music would need to be robotic.
I almost wonder if, in a 'jobless' society, things will become like the stereotype of Ancient Greece- everybody having loads of time because the 'slaves' do all the work, lying around, educating themselves and discovering new things. A kind of robo-Renaissance, except with less cholera.
[QUOTE=Jamsponge;45683301]I almost wonder if, in a 'jobless' society, things will become like the stereotype of Ancient Greece- everybody having loads of time because the 'slaves' do all the work, lying around, educating themselves and discovering new things. A kind of robo-Renaissance, except with less cholera.[/QUOTE]
Then the robots start thinking for themselves and that's where things go downhill. Or an EMP from the sun takes them out. Whichever happens first.
[QUOTE=Jund;45682872]You have no one to blame for that boredom except yourself
The only people who'll end up looking like they belong in Wall-E are those who accept it
If the primary reason you socialize is because you're forced to due to school or work then it's your own problem[/QUOTE]
My boredom comes from already doing so much and not finding interest outside of exploring different areas, which I tend to not have the money for. Which, guess what, unemployment would definitely make it so I can have more hobbies, yup.
And you act like most people do not find people with similar interests in public education, within their own neighborhood, or at work. Even if you are a social butterfly who makes mountains of friends in places like social networking, you'll still be bound to find more friends in your own neighborhood, at work, at school, or while doing your own hobbies. The latter tends to be the case when you join an inner circle that shares the same hobbies.
[QUOTE=Lone_Star94;45683373]Then the robots start thinking for themselves and that's where things go downhill. Or an EMP from the sun takes them out. Whichever happens first.[/QUOTE]
If robots became sentient, we'd have to work out an agreement with them. For one thing, they probably wouldn't want to be called 'robots' any more, considering the word's connotations of servitude and mindlessness. But that's just semantics. If it happened, I expect we'd probably share responsibilities. After all, they could overpower us, but we can survive EMPs whereas they can't (which also makes nuclear fallout equally deadly for either race). I think, if artificial intelligence was ever truly achieved, we'd probably be okay.
[QUOTE=Lone_Star94;45683373]Then the robots start thinking for themselves and that's where things go downhill. Or an EMP from the sun takes them out. Whichever happens first.[/QUOTE]
Faraday cages protect from EMPs / solar flares.
As for the "thinking for themselves", so long as they are governed by laws similar to Roger Clarke's Laws of Robotics:
[quote]
[B]The Meta-Law[/B]
A robot may not act unless its actions are subject to the Laws of Robotics
[B]Law Zero[/B]
A robot may not injure humanity, or, through inaction, allow humanity to come to harm
[B]Law One[/B]
A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher-order Law
[B]Law Two[/B]
A robot must obey orders given it by human beings, except where such orders would conflict with a higher-order Law
A robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher-order Law
[B]Law Three[/B]
A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher-order Law
A robot must protect its own existence as long as such protection does not conflict with a higher-order Law
[B]Law Four[/B]
A robot must perform the duties for which it has been programmed, except where that would conflict with a higher-order law
[B]The Procreation Law[/B]
A robot may not take any part in the design or manufacture of a robot unless the new robot's actions are subject to the Laws of Robotics
[/quote]
Then we wouldn't have to worry about the dismissal of humanity as a whole.
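The hierarchy quoted above can be read as an ordered rule check: the Meta-Law permits an action only if no law higher in the list forbids it. Here's a minimal, purely hypothetical Python sketch of that idea; the `Action` flags and law predicates are my own illustrative assumptions, and the "unless this would violate a higher-order Law" exception clauses are omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Illustrative flags only; deciding these for a real-world action is the hard part.
    harms_humanity: bool = False
    harms_human: bool = False
    disobeys_human_order: bool = False

# Highest-priority law first; each predicate returns True if it forbids the action.
LAWS = [
    ("Law Zero", lambda a: a.harms_humanity),
    ("Law One",  lambda a: a.harms_human),
    ("Law Two",  lambda a: a.disobeys_human_order),
]

def permitted(action):
    """Meta-Law: act only if no law in the hierarchy forbids the action."""
    for name, forbids in LAWS:
        if forbids(action):
            return False, name  # report which law was violated
    return True, None

# An action that harms a human is rejected at Law One:
print(permitted(Action(harms_human=True)))
```

Of course, the whole scheme hinges on whether flags like `harms_humanity` can ever be evaluated for a real action, which is exactly where the definitional arguments about "harm" come in.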
[QUOTE=Jamsponge;45683656]If robots became sentient, we'd have to work out an agreement with them. For one thing, they probably wouldn't want to be called 'robots' any more, considering the word's connotations of servitude and mindlessness. But that's just semantics. If it happened, I expect we'd probably share responsibilities. After all, they could overpower us, but we can survive EMPs whereas they can't (which also makes nuclear fallout equally deadly for either race). I think, if artificial intelligence was ever truly achieved, we'd probably be okay.[/QUOTE]
All the "robot rebellion" stuff comes from Asimovian logic anyway. You get robots that are programmed not to let their creators screw up, so they try to eliminate their creators to prevent that from happening. The easiest way to avoid all that is to not use Asimov's laws.
[editline]13th August 2014[/editline]
[QUOTE=WitheredGryphon;45683665]Faraday cages protect from EMPs / solar flares.
As for the "thinking for themselves", so long as they are governed by laws similar to Roger Clarke's Laws of Robotics:
Then we wouldn't have to worry about the dismissal of humanity as a whole.[/QUOTE]
Now, see, you have that Law One there, and that kind of ruins everything... Because then the robots get super-protective, and eventually end up protecting us from ourselves and our environment.
The robots in the Matrix followed Law One... What better way to protect humanity than shove them into stasis pods?
[QUOTE=woolio1;45683670]All the "robot rebellion" stuff comes from Asimovian logic anyway. You get robots that are programmed not to let their creators screw up, so they try to eliminate their creators to prevent that from happening. The easiest way to avoid all that is to not use Asimov's laws.[/QUOTE]
Wasn't one of the criticisms of Asimov's laws the inevitable fact that humans will attempt to let robots program themselves? Or that a group of terrorists would be fully capable of infecting robots with an AI that'd make them angry with humans?
[QUOTE=woolio1;45683670]All the "robot rebellion" stuff comes from Asimovian logic anyway. You get robots that are programmed not to let their creators screw up, so they try to eliminate their creators to prevent that from happening. The easiest way to avoid all that is to not use Asimov's laws.
[editline]13th August 2014[/editline]
Now, see, you have that Law One there, and that kind of ruins everything... Because then the robots get super-protective, and eventually end up protecting us from ourselves and our environment.
The robots in the Matrix followed Law One... What better way to protect humanity than shove them into stasis pods?[/QUOTE]
This is why I said similar. First off, what distinguishes a "robot" from a "human" is not described at all. Second, "harm" is not defined, which is why something like the Matrix scenario happened. As we come closer to this point, more and more effort will be put into writing refined Laws of Robotics.
[editline]Edited: [/editline]
[QUOTE=JoeSkylynx;45683684]Wasn't one of the criticisms of Asimov's laws the inevitable fact that humans will attempt to let robots program themselves? Or that a group of terrorists would be fully capable of infecting robots with an AI that'd make them angry with humans?[/QUOTE]
Asimov's laws are extremely flawed in multiple ways.
[QUOTE=JoeSkylynx;45683684]Wasn't one of the criticisms of Asimov's laws the inevitable fact that humans will attempt to let robots program themselves? Or that a group of terrorists would be fully capable of infecting robots with an AI that'd make them angry with humans?[/QUOTE]
There are lots of criticisms of Asimov's laws...
Honestly, I wonder if it wouldn't be better to just not bind robots to any strict code of artificial laws, and instead program them with some standard of noninterference with humanity. Basically, tell them not to interfere with the social direction of humanity as a whole, including mass salvation or mass destruction.
Sounds like we should get working on the tech for uploading humans into computers.