• Warning over armies developing 'Terminator'-style killer robots
[QUOTE]Human Rights Watch has issued a warning that ‘Terminator’-style killer robots could be developed within decades - and that several governments are working on the technology. The U.S. government in particular has admitted to working towards robotic weapons systems with ‘total autonomy’. Human Rights Watch calls on policy-makers to outlaw ‘fully autonomous weapons’ - robots that can decide to kill. It says such weapons could be feasible “in decades”. “Fully autonomous weapons have the potential to increase harm to civilians during armed conflict,” says Human Rights Watch. “They would be unable to meet basic principles of international humanitarian law, they would undercut other, non-legal safeguards that protect civilians, and they would present obstacles to accountability for any casualties that occur. Although fully autonomous weapons do not exist yet, technology is rapidly moving in that direction. These types of weaponized robots could become feasible within decades, and militaries are becoming increasingly invested in their successful development.” One in three U.S. warplanes is now a drone, and the U.S. has said that it aims for its forces to be 30% robotic by 2020. “There is an ongoing push to increase UGV - unmanned ground vehicle - autonomy, with an ultimate goal of full autonomy,” said a U.S. military report in 2011. A U.S. Air Force report in 2009 said, “Increasingly humans will no longer be ‘in the loop’ but rather ‘on the loop’ - monitoring the execution of certain decisions. Advances in AI will enable systems to make combat decisions without necessarily requiring human input.” At present, drones such as Reapers and Predators are capable of fully autonomous flight, take-off and landing, but require a human to “pull the trigger”. The fact that such drones are supervised by highly trained combat jet pilots makes them expensive to operate. Future generations of drones are already moving towards increased autonomy. Drones such as BAE Systems’ Taranis will be capable of severing radio communications and flying unsupervised for up to 36 hours. The move towards ‘autonomous’ robots has been underway for decades. A Northrop Grumman ‘drone’ performed a fully autonomous landing in 1975. P. W. Singer, a former Pentagon weapons advisor and author of the book Wired for War, said in a previous interview that the drive towards autonomy was inevitable. “With drones that are remote-controlled you have two pilots up front, but a team of around 160 people behind - we have to get to a point, for budget reasons, where planes can fly themselves,” says Singer. “With every problem in using unmanned vehicles, the solution seems to be more autonomy. What's the easy way to stop a drone? You jam its satellite feeds. You don't want it to stop and just give up at that point. Never ever. So you endow it with enough independence to continue the mission. Can we get this genie back in the bottle? Yes, sure - as long as we get rid of war, science and capitalism.”[/QUOTE] [url]http://uk.news.yahoo.com/warning-over-armies-developing--terminator--style-killer-robots-19112012.html[/url]
So this is how it ends.
Noo! It's too early! It's too early! Hold off! Hold ooooff! I need like 5 years to finish my robotics engineering diploma!
Where is Sarah Connor?
Dark age of technology, anyone?
oh fuck they're really making biped killing machines? [editline]19th November 2012[/editline] they better be controlled like the flying drones are, via a guy with some controls. if they're self-controlled then it's game the fuck over.
brb making an EMP suit
[QUOTE=MightyMax;38514624]oh fuck they're really making biped killing machines?[/QUOTE] Probably. But that's not what the article refers to when it says Terminator-like killing machines. “Advances in AI will enable systems to make combat decisions without necessarily requiring human input.” Like a Terminator.
[QUOTE=MightyMax;38514624]oh fuck they're really making biped killing machines? [editline]19th November 2012[/editline] they better be controlled like the flying drones are, via a guy with some controls. if they're self-controlled then it's game the fuck over.[/QUOTE] The article contains the word "autonomy" 6 times. That means "self-controlled".
And remember, people, a naked guy stripping your badass rocker clothes from you is mankind's savior.
[QUOTE=aznz888;38514618]Dark age of technology, anyone?[/QUOTE] Solution: build Inquisition-resistant robots!
[QUOTE=CaioLugia;38514663]And remember, people, a naked guy stripping your badass rocker clothes from you Is mankind's savior.[/QUOTE] What if it's the bad terminator in a sequel?
[QUOTE=Awesomecaek;38514654]The article contains the word "autonomy" 6 times. That means "self-controlled".[/QUOTE] i figured they meant autonomous in that they just need simple left/right forward/backward movements, not that they had AI :suicide: CAN'T READ FOR SHIT.
Just infect them with Stuxnet. Unless that's an alias for Skynet.
All this has happened before, and all this will happen again
[QUOTE=Awesomecaek;38514654]The article contains the word "autonomy" 6 times. That means "self-controlled".[/QUOTE] It'd be a total disaster if any country made a totally autonomous killing device. Hell, human operators fuck up way too much as it is; taking the human out of the equation only leads to more potential problems. Fully autonomous killing machines need to be banned by an international mandate, though I'd really like to see their potential in other fields, like fire and disaster rescue.
[quote]"Fully autonomous weapons have the potential to increase harm to civilians during armed conflict,” says Human Rights Watch “They would be unable to meet basic principles of international humanitarian law, they would undercut other, non-legal safeguards that protect civilians, and they would present obstacles to accountability for any casualties that occur. [/quote] But wouldn't it be possible to program these robots to never break ROE under any circumstances? Robots wouldn't look for revenge or retribution, or get angry and fire a rocket into a refugee camp. It seems to me that they would be safer to use than human soldiers.
I wouldn't worry about autonomous robots too much - just look at AI in video games. They're good at killing you given that you can rarely escape the boundaries of the map and the physics, but apply that to real life and I can't imagine a bunch of robots armed with AK-47s and aimbots bunny-hopping across the land towards a bunch of manned machine guns ending well for the robots.
[QUOTE=King Tiger;38514827]But wouldn't it be possible to program these robots to never break ROE (rules of engagement) under any circumstances? Robots wouldn't look for revenge or retribution, or get angry and fire a rocket into a refugee camp. It seems to me that they would be safer to use than human soldiers.[/QUOTE] I think it's more about the fact that if there's a fuck-up (because technology), it won't just be a single rocket being fired but a payload large enough to flatten a town. Human error is common, these AIs will be programmed by humans, technology can break down - do the math.
[QUOTE=MightyMax;38514624]oh fuck they're really making biped killing machines?[/QUOTE] [media]http://www.youtube.com/watch?v=AM8A0GrYmFU[/media]
The industry average for software bugs/programming errors is roughly 10 per 1,000 lines of code... so no thanks to autonomous killer bots.
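For a sense of scale, a quick back-of-envelope calculation using that figure (the codebase size here is just an assumed example, not a real number for any weapons system):

[code]
# Rough arithmetic on latent defects, assuming the ~10 bugs per
# 1,000 lines figure quoted above and a made-up codebase size.
defects_per_kloc = 10      # claimed industry average
codebase_kloc = 5_000      # assume a 5-million-line weapons system

print(defects_per_kloc * codebase_kloc)  # 50000 latent bugs
[/code]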
The thing with autonomous combat machines is that they need to have very good AI and VERY good IFF; they need to be more advanced than simply not firing upon soldiers wearing the appropriate IFF badges. If we could train a soldier with the instincts and clear judgement of someone unlikely to shoot their friends or bystanders, then use the soldier's brain as the driving intelligence behind a robotic combat platform, that could work. At least until we have a computer capable of emulating human levels of intellect, with the all-important distinctions between friend, foe and bystander, that's probably one of the better options; the soldiers would be pioneers of elite armoured infantry if anything - they'd be the Space Marines of our time, except less racist, less gung-ho, and less [B]"FOR TH' EMPRAH!"[/B] - though they'd probably need some way to enjoy life as the humans they once were, so that they wouldn't go insane from their cybernetic fate.

In the meantime, if we ARE making combat machines with current technology, they should be armed with non-lethal armaments like smoke grenades, tasers, microwave beams, tranquilisers, etc. Y'know, the kind of stuff that is less likely to kill someone; at least give them rubber bullets and beanbag cannons instead of hollow-points and HE grenades, since the former aren't as dangerous as the latter, even if they technically are still dangerous.
What if they make fewer mistakes than humans, on average? Would you accept the mistakes they do make? AI like the one that plays Jeopardy (IBM's Watson) already makes judgements about how certain it is. If it isn't sure, it won't fire a payload. I'm sure they'll err on the side of inaction over overreaction.
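Something like this, presumably - a hypothetical confidence gate where anything below the threshold defaults to doing nothing (the threshold value is made up):

[code]
# Hypothetical "don't fire unless sure" gate: engage only above a
# high confidence threshold, otherwise default to inaction.
FIRE_THRESHOLD = 0.99  # made-up value; errs on the side of holding

def decide(target_confidence: float) -> str:
    """Engage only when the classifier is very sure; otherwise hold."""
    return "engage" if target_confidence >= FIRE_THRESHOLD else "hold"

print(decide(0.95))  # "hold" - 95% sure is not sure enough
[/code]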
Remote-controlled terminators would be alright in my opinion; it's safer than sending an actual person onto the battlefield. Just no fully autonomous ones.
I can't take them seriously as a threat after this... [media]http://www.youtube.com/watch?v=QRbvNL1PHKg[/media]
Might as well just conduct all wars over Call of Duty.
[QUOTE=Awesomecaek;38514654]The article contains the word "autonomy" 6 times. That means "self-controlled".[/QUOTE] To be honest, a lot of our armour is already semi-autonomous. Remember the story about the French tank that almost killed its entire crew by accident? It was programmed so that, on the death of its crew, it would unload everything it could into the nearest enemy. The crew sensor failed and the tank's systems fired at far greater speed than the crew ever could; as a result the crew almost suffocated from the fumes. [QUOTE=Chopstick;38514892]I wouldn't worry about autonomous robots too much - just look at AI in video games. They're good at killing you given that you can rarely escape the boundaries of the map and the physics, but apply that to real life and I can't imagine a bunch of robots armed with AK-47s and aimbots bunny-hopping across the land towards a bunch of manned machine guns ending well for the robots.[/QUOTE] That's because you're essentially dealing with bad AI most of the time. The AI will almost always make awful strategic choices, but it will usually outclass humans in pure one-on-one applications. A human pilot generally can't pull off the same manoeuvres as a drone. Just take a look at this: [url]http://www.youtube.com/watch?v=DXUOWXidcY0[/url] A human just can't match that.
[QUOTE=Terminutter;38514775]It'd be a total disaster if any country made a totally autonomous killing device. Hell, human operators fuck up way too much as it is; taking the human out of the equation only leads to more potential problems. Fully autonomous killing machines need to be banned by an international mandate, though I'd really like to see their potential in other fields, like fire and disaster rescue.[/QUOTE] Seems like it would be the other way around. Taking the human element out seems like it would make it easier to regulate what is "OK" to kill and what isn't. A human has to make split-second decisions based on assumptions and adrenaline that may end up killing civilians, or just being in the war may break the soldier's mind. That theoretically can't happen with a fully autonomous machine. If they're programmed to never break the rules of engagement, they never will unless they malfunction. And no matter how many disgusting and revolting things they see, it won't break their minds or drive them insane.
Well there are already tracked robots that you can mount guns on. Imagine if one of them got dropped into a crowded area and started shooting. It'd be quite bad.
[QUOTE=wraithcat;38515601]To be honest, a lot of our armour is already semi-autonomous. Remember the story about the French tank that almost killed its entire crew by accident? It was programmed so that, on the death of its crew, it would unload everything it could into the nearest enemy. The crew sensor failed and the tank's systems fired at far greater speed than the crew ever could; as a result the crew almost suffocated from the fumes. That's because you're essentially dealing with bad AI most of the time. The AI will almost always make awful strategic choices, but it will usually outclass humans in pure one-on-one applications. A human pilot generally can't pull off the same manoeuvres as a drone. Just take a look at this: [url]http://www.youtube.com/watch?v=DXUOWXidcY0[/url] A human just can't match that.[/QUOTE] Wow, I'd never heard of that! Got any more info on it? I'm heavily interested!