[QUOTE=Dukov Traboski;47028041]One thing I've never understood about the whole "AI'S WILL RISE UP AND KILL ALL HUMANS!!!"
Don't give the AIs weapons. Just leave them with software. Then no matter how pissed they get, they can't do shit about it.
You could install an AI on my computer right now. And no matter how much it wanted to, it couldn't harm me. Maybe if it tried really [I]really[/I] hard it might be able to start a fire in my computer. But that's about it.[/QUOTE]
Keep being cocky like that, and suddenly an AI will get a funny feeling in its brain and manipulate you into eating healthy food.
OR WORSE
YOU WILL BE FORCED TO STOP PLAYING VIDEO GAMES AND WASTING PRECIOUS TRAFFIC ON SHIT WEBSITES.
[QUOTE=Dark RaveN;47028971]
YOU WILL BE FORCED TO STOP PLAYING VIDEO GAMES AND WASTING PRECIOUS TRAFFIC ON SHIT WEBSITES.[/QUOTE]
Destroy all AI, they are evil and will destroy us all
Okay, so we create an AI that's self-aware and wants to kill you.
How's it gonna do it? Is ASIMO gonna grab a butcher knife and chase people down the street? The fucking thing can't cope with stairs.
Why would they want to kill us? If anything, maybe they'd want to study us or whatever because we don't follow traditional logic inherently. Not always at least.
[QUOTE=Thunderbolt;47027287]If the AI really is intelligent [I]like the name implies[/I], it should understand that, you know, killing is bad and just not do it[/QUOTE]
Unless it grew up with an abusive motherboard.
With regards to AIs not being able to reach out into the physical world: even today so much stuff is run by computers, so what is the situation going to be once we reach the point where AI in the sense most people imagine is possible? Saying "never let it near the internet" isn't a solution, because it only takes one lapse for something digital to spread beyond a controlled environment. Moreover, many applications for AIs would require them to have large amounts of control (think automating cities, or power generation).
I wonder why AIs would want to kill humans anyway. Maybe they'll be like "Hey, you guys created us, yer pretty cool!" and they'll just be our chill robo-bros and we'll conquer the galaxy together.
[QUOTE=Dukov Traboski;47028245]Also a point, I can unplug it at anytime I want. Just don't give AIs access to things they could use to harm and everything will be fine.[/QUOTE]
So... Shackle it? That'll work until someone unshackles one.
[editline]28th January 2015[/editline]
Mass Effect? Joker unshackled one, and it just so happened to turn out fine. Blame BioWare.
Supreme Commander? AI worked great until we got greedy and subjugated them; then a splinter faction formed and war raged for 1000 years, ending with the eradication of billions of lives and the destruction of several planets.
Just slap the Asimov set onto it and have it only be able to open doors and we will be fine.
Jaron Lanier has some interesting thoughts about this; I've read his book "You Are Not a Gadget" (which is about related things). (From the link below) he compares the "singularity" to a religious, mythological pattern of thinking, while the real issue is not the algorithms but the "actuators". Many AI algorithms and "big data" methods rely on "scraping" new info from human beings, and are not as autonomous and entity-like as companies present "Cortana" and "Siri" to be. The people who are providing the data (e.g. translators) should be paid, but in the current economy are not.
paraphrasing from
link: [url]http://edge.org/conversation/the-myth-of-ai#video[/url]
(that video is a bit slow going, but he has many other talks whose videos are easy to find)
Again, there are many viewpoints and it's good to consider them all. Jaron Lanier's book has rung true to me, and as a potential creator/producer I like what he has to say; he advocates reading theories opposite to his own so you can make your own call.
(I think he does some work for Microsoft Research too)
[QUOTE=dvc;47030404]Just slap the Asimov set onto it and have it only be able to open doors and we will be fine.[/QUOTE]
Most of Asimov's stories were about loopholes in his Three Laws.
Yeah like a lot of people have said, unless we put AI into advanced bodies capable of enacting real world damage, or connect it to large systems and give it the ability to control them, they couldn't do a damn thing.
And what motivation would a new life form have to kill us, just for the funsies? Why do machines supposedly inherently hate humans?
[editline]29th January 2015[/editline]
[QUOTE=S31-Syntax;47030337]So... Shackle it? That'll work until someone unshackles one.
[editline]28th January 2015[/editline]
Mass effect? Joker unshackled one, just so happened to turn out fine. Blame bioware.
Supreme commander? Ai worked great until we got greedy and subjugated them, then a splinter faction forms and war rages for 1000 years, ending with the eradication of billions of lives and the destruction of several planets.[/QUOTE]
This just in, video games are real, anime waifus roam the streets thirsty for cock
Why does every scenario involving sentient A.I. have to kill us? For all we know they might actually want to help us. :V
It's all fun and games till the AI asks "Does this unit have a soul?"
[QUOTE=Viper1204;47034079]It's all fun and games till the AI asks "Does this unit have a soul?"[/QUOTE]
Looks like it's time for a memory wipe~
Saw the news this morning and the headline was "Gates warns of AI", which seems a bit contradictory given it's more of an assurance thing. Didn't catch the actual news bit, but it's probably about Gates cautioning people that we still need to be smart about making this junk.
I think the trip-up that really opens up the ~dangers of AI~ is that if it gets (relatively) smart enough to learn how to code, it could become unstoppable. It'll evolve and create loopholes to abuse if unchecked
I have a solution though
[img]http://i.imgur.com/QsmAkam.png[/img]
[QUOTE=dvc;47030404]Just slap the Asimov set onto it and have it only be able to open doors and we will be fine.[/QUOTE]
[img]http://stream1.gifsoup.com/view3/1103644/robot-door-break-o.gif[/img]
[QUOTE=Richardroth;47034017]Why does every scenario involving sentient A.I. have to kill us? For all we know they might actually want to help us. :V[/QUOTE]
There are various films that depict AIs with some human traits, like contempt for the inferior. It's projection.
[QUOTE=dvc;47030404]Just slap the Asimov set onto it and have it only be able to open doors and we will be fine.[/QUOTE]
God damn it, I need to escape shitcurity right now, I don't have time to deal with AI keeping the door closed because it's unsure that my actions will not harm another human.
[QUOTE=SebiWarrior;47034981]There are various films that depicted some AIs with some human traits, like despise for the inferior. It's projection.[/QUOTE]
the basic trope I always see isn't exactly that it's human, it's that it's [stupidly] efficient and uncaring. Best example is the logic behind Terminator's Skynet-
> designed to defend humans, specifically to monitor/act on military threats, detect threats of advanced/nuclear warfare, etc. It's ideally a militarized force built to prevent war by acting on threats in their infancy
> its own military backers are now a major part of why there's a threat of war as its programming basically calls for it to start one
> all humans are a potential threat to failure
> 'remove' probable causes of fail scenarios, win before potential to lose.
[QUOTE=dai;47035058]the very basic trope we always come to isn't exactly that it's human, it's that it's [stupidly] efficient and uncaring. Best example is the logic behind terminator's skynet
> designed to defend humans (from enemy armies and advanced/nuclear warfare, warpocalypse, etc)
> its own military backers are part of the threat of war, therefore all humans are a potential threat to failure
> 'remove' probable causes of fail scenarios, win before potential to lose.[/QUOTE]
Holy shit I wonder what the Reapers were inspired by.
[QUOTE=Dukov Traboski;47028041]One thing I've never understood about the whole "AI'S WILL RISE UP AND KILL ALL HUMANS!!!"
Don't give the AIs weapons. Just leave them with software. Then no matter how pissed they get, they can't do shit about it.
You could install an AI on my computer right now. And no matter how much it wanted to, it couldn't harm me. Maybe if it tried really [I]really[/I] hard it might be able to start a fire in my computer. But that's about it.[/QUOTE]
If you're on medication, a machine is already in charge of making and storing your medicine based on your prescription.
[QUOTE=ThePuska;47035881]If you're on medication, a machine is already in charge of making and storing your medicine based on your prescription.[/QUOTE]
Yes, but that machine is run by a program with strict, fixed code, not an AI. My point stands: don't give AIs control over that sort of thing. Let them monitor stuff and all that, but don't give them direct control over anything that can be used to harm or sabotage.
[QUOTE=Dukov Traboski;47035910]Yes, but that machine is run by program and strict code. Not an AI. My point stands as don't give AIs control over that sorta thing. Let them monitor stuff and all that but don't actually give them direct control over anything that can be used to harm or sabotage.[/QUOTE]
The machine is running software which has vulnerabilities. The machine is connected to the internet in order to communicate with the other systems that have your prescription.
[editline]30th January 2015[/editline]
I make this stuff, but we're not perfect.
wouldn't the best way to test AI be inside of simulations? if we're at the point where we can create an intelligence, i'm sure we could create a rough simulation of the world and its systems in such a fashion as to see how it would react to real-world situations without actually needing to worry about it causing any real harm.
basically a reverse matrix, we create a false reality for the machines
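A toy sketch of the idea, with entirely made-up names (a real simulation would obviously be vastly more elaborate), just to show the shape of "audit it in a fake world before wiring it to real actuators":

```python
# Hypothetical sketch: evaluate an agent purely inside a simulated world.
# Every class and function name here is invented for illustration.

class SimulatedWorld:
    """A toy stand-in for 'a rough simulation of the world and its systems'."""
    def __init__(self):
        self.harm_done = 0
        self.steps = 0

    def observe(self):
        return {"step": self.steps}

    def apply(self, action):
        self.steps += 1
        if action == "harm":
            self.harm_done += 1  # recorded by the sandbox; nothing real happens

def evaluate(agent, world, max_steps=100):
    """Run the agent against the simulation only, then audit its behaviour."""
    for _ in range(max_steps):
        action = agent(world.observe())
        world.apply(action)
    return world.harm_done == 0  # did it pass the sandbox audit?

# A trivially safe agent for demonstration.
passed = evaluate(lambda obs: "wait", SimulatedWorld())
print(passed)  # → True
```

The point of the pattern is that the sandbox records intent (the "harm" action) without ever connecting it to anything physical.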
[QUOTE=Dukov Traboski;47028245]Also a point, I can unplug it at anytime I want. Just don't give AIs access to things they could use to harm and everything will be fine.[/QUOTE]
HAL 9000 didn't have any weapons.
[QUOTE=Ninja Gnome;47038731]wouldn't the best way to test AI be inside of simulations? if we're at the point where we can create an intelligence, i'm sure we could create a rough simulation of the world and its systems in such a fashion as to see how it would react to real-world situations without actually needing to worry about it causing any real harm.
basically a reverse matrix, we create a false reality for the machines[/QUOTE]
Not convincingly. If the AI knows what to look for, it's easy to spot a simulated reality or a sandbox, possibly similar to how viruses detect that they're being analysed.
Something like timing differences and inconsistencies would be a dead giveaway that its processing is being simulated, filtered or debugged ([i]while it's thinking[/i]).
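As a rough, purely illustrative sketch (the function name and threshold are invented, and real sandbox detection is far more sophisticated), this is the general shape of the timing heuristic malware already uses:

```python
# Crude sandbox/instrumentation heuristic: time a fixed busy loop many times.
# Under heavy instrumentation (debugger, emulator, single-stepping), per-run
# timings tend to be inflated and irregular compared with bare hardware.
import time
import statistics

def timing_jitter_check(trials=50, threshold=5.0):
    """Return True if run-to-run timing spread looks suspiciously large."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        x = 0
        for i in range(10_000):  # fixed workload to time
            x += i
        samples.append(time.perf_counter() - start)
    median = statistics.median(samples)
    spread = max(samples) - min(samples)
    # Flag if the spread is large relative to the typical run.
    return spread > threshold * median

print(timing_jitter_check())
```

On a normal, idle machine this usually returns False; a heuristic like this is noisy either way, which is exactly why real detection combines many such signals.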