Robot Soldiers Could Make Better Decisions Than Human Leaders on Battlefields
[QUOTE]Modern warfare relies increasingly on robotics for intelligence gathering and, increasingly, for strike capabilities, but decision-making still rests solely in the hands of human commanders. British defense company BAE Systems, however, is testing a way to turn battlefield decisions over to robot troops as well.
ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) is BAE's response to the overload of sensors and data now confronting battlefield commanders, who have UAV observations, soldier-based sensors, satellite data, and reams of other intelligence washing over them in such volumes that, as Air Force Lt. Gen. David A. Deptula puts it, they'll be "swimming in sensors and drowning in data." The system allows a network of robot soldiers to quickly collect and exchange information, then bargain with each other to determine the best course of action and execute it.
The robots are armed to the teeth with algorithms drawing on a range of models – game theory, probabilistic modeling, optimization techniques – that let them predict outcomes and allocate battlefield resources far more quickly and efficiently than humans processing the same amount of data. All that should help troops – both robotic and otherwise – stay afloat in the data deluge.
But does it work? ALADDIN hasn’t seen any trigger time yet, but BAE and university researchers collaborating on the system have put it through simulated natural disasters (another potential application). Disasters, they theorize, are similar to warfare in their chaotic nature, and therefore the simulations are a good analog.
And in disasters the system performs well: robots gather data on casualties in different areas, objectively assess where a limited number of ambulances can have the greatest possible impact, and execute a strategy quickly, without egos, emotions, or human error clogging up the machinery. It's essentially an auction for resources based on need, and while it may sound insensitive to leave the auctioning of life-saving help to a bunch of machines, when that resource auction was removed from some simulations, some of the ambulances weren't used at all because the system couldn't figure out where to send them.
BAE is building what's known as "flexible autonomy" into ALADDIN, which will keep the highest-level decisions in the hands of humans (decisions like "go to war" and "don't go to war," for instance). And the ability to crunch sensor data, raw intel, crowdsourced info from the Web, and other data sources into good decisions quickly could prove invaluable. So while the idea of robot armies with decision-making capabilities is terrifying to some, the allure of such high efficiency – be it in warfare or disaster response – is difficult to deny.[/QUOTE]
Source: [url]http://www.popsci.com/technology/article/2010-11/robot-soldiers-could-make-better-decisions-data-strewn-battlefields-future[/url]
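For anyone curious what the "auction for resources based on need" might actually look like, here's a minimal Python sketch. It's a toy greedy auction, not how ALADDIN actually works – the area names, casualty counts, and per-ambulance capacity are all made up for illustration:

```python
def allocate_ambulances(casualties, num_ambulances, capacity=4):
    """Greedy auction: repeatedly award one ambulance to the area whose
    'bid' (marginal casualties covered by one more ambulance) is highest.
    `capacity` is how many casualties one ambulance can handle (assumed).
    """
    allocation = {area: 0 for area in casualties}

    def marginal_gain(area):
        # How many additional casualties would one more ambulance cover here?
        covered_now = min(allocation[area] * capacity, casualties[area])
        covered_next = min((allocation[area] + 1) * capacity, casualties[area])
        return covered_next - covered_now

    for _ in range(num_ambulances):
        best = max(casualties, key=marginal_gain)
        if marginal_gain(best) == 0:
            break  # no area benefits from another ambulance
        allocation[best] += 1
    return allocation

# Hypothetical disaster zones with casualty counts:
areas = {"north": 10, "harbour": 3, "suburbs": 6}
print(allocate_ambulances(areas, num_ambulances=4))
# → {'north': 2, 'harbour': 1, 'suburbs': 1}
```

The point the article makes shows up even in this toy version: each assignment follows from a need-based bid, so nothing sits idle while there's still demand it could cover.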
fucking aimbots
:buddy: Holy shit, this is my god damn wet dream! It will only be a matter of time now until we replace the fallible human element in governments.
/technocrat
Wasn't this the plot of Terminator 3?
Will they be anatomically correct fembots?
Big Dumb American predicted this!
BAE are one crazy company. Where does an idea like this even come from?
Technology should drive technology, not warfighting.
Some good reads:
[url]http://fmso.leavenworth.army.mil/documents/fog/fog.htm[/url]
[url]http://www.defensenews.com/story.php?i=3979783[/url]
This one jumped out to me the most:
[quote]Our technologies are making it very easy, perhaps too easy, for leaders at the highest level of command not only to peer into, but even to take control of, the lowest level operations. One four-star general, for example, talked about how he once spent a full two hours watching drone footage of an enemy target and then personally decided what size bomb to drop on it.
Similarly, a Special Operations Forces captain talked about a one-star, watching a raid on a terrorist hideout via a Predator, radioing in to tell him where to move not merely his unit in the midst of battle, but where to position an individual soldier. [/quote]
Except when the "better decision" is to exterminate the inferior human race...
[QUOTE=Rotinaj;26420655]Except when the "better decision" is to exterminate the inferior human race...[/QUOTE]
They'll do it by sending are guys in to a van with a meat grinder in it.
Something like this, in my opinion, should be used to support decision making, not take it out of the hands of humans.
Perhaps they could include Clippy with it "It looks like you are trying to start a war, would you like some help?"
Logical=/=better. A machine will always go after what would be strategically sound, not what works tactically. In the heat of the moment, the personal element is lost and an AI combat leader would not be able to efficiently manage anything. Perhaps for formulating tactical assaults, but otherwise? I'll never trust an AI to do something such as this. All robotic aids should be just that, aids. Battlefield data management, communications forwarding, casualty and supply management, etc. The stuff that they'd excel at. Ingenuity is not something our tech is capable of programming just yet.
[QUOTE=ExplodingGuy;26420083]:buddy: Holy shit, this my god damn wet dream! It will only be a matter of time now, until we replace the fallible human element in governments.
/technocrat[/QUOTE]
I remember when I was little and I first heard that. I freaked and thought OH NO ROBOTS WILL TAKE OVER THE EARTH, but now I understand how a computer could make better choices than a person.
I don't think a computer (at the moment) can make better choices than a human. We can think things over in so many different ways that a computer just cannot.
It really depends on what type of "warfare" these bots are waging.
Disasters and responses to insurgents? Sure. Those are situations where quick-action, high-stakes responses require the best procedural answers.
Actual prolonged combat, occupation, country-vs.-country rat-fucking? Not so much. Ask any player who's been left to play against the expert AI in an RTS long enough, and you'll realize that algorithmic solutions have easily exploited weaknesses, even when the AI gets to cheat.
Not to mention the dissonance of "My commander is a canopener, arguing with a toaster."
[QUOTE=Arachnidus;26420759]Logical=/=better. A machine will always go after what would be strategically sound, not what works tactically. In the heat of the moment, the personal element is lost and an AI combat leader would not be able to efficiently manage anything. Perhaps for formulating tactical assaults, but otherwise? I'll never trust an AI to do something such as this. All robotic aids should be just that, aids. Battlefield data management, communications forwarding, casualty and supply management, etc. The stuff that they'd excel at. Ingenuity is not something our tech is capable of programming just yet.[/QUOTE]
It's like spock and kirk.
[QUOTE=Jsm;26420817]I don't think a computer (at the moment) can make better choices than a human. We can think things over in so many different ways that a computer just cannot.[/QUOTE]
It wouldn't be so much about controlling you and making laws; it would be more about figuring out how many resources we have and how to use them best.
:psylon:
So what the fuck happens when it's just robots on the battlefield? Robots outsmarting robots? That seems like shit could get pretty fucking intense. I mean at that point wouldn't the robots just gang up and fuck the humans up, cause we'd be dumb as shit in comparison? We would have to build something just as good as robots to protect us from the machine menace.
What about cases like [url]http://en.wikipedia.org/wiki/Stanislav_Petrov[/url] ?
[QUOTE=Pantz76;26420188]Big Fat American predicted this![/QUOTE]
Big Dumb American.
Ironic you'd leave that out.
"War has changed..." Ahhhooooooo Simmaaaeeeee
Just watch tonight's episode of Caprica :P pretty bad ass robot soldiers.
Robot Hitler
[QUOTE=Arachnidus;26420759]Logical=/=better. A machine will always go after what would be strategically sound, not what works tactically. In the heat of the moment, the personal element is lost and an AI combat leader would not be able to efficiently manage anything. Perhaps for formulating tactical assaults, but otherwise? I'll never trust an AI to do something such as this. All robotic aids should be just that, aids. Battlefield data management, communications forwarding, casualty and supply management, etc. The stuff that they'd excel at. Ingenuity is not something our tech is capable of programming just yet.[/QUOTE]
No, the robot will prioritize what it is programmed to. Which would very likely be the preservation of allied lives.
Dudun-dun-dudun.
[QUOTE=Athena;26420125]Wasn't this the plot of Terminator 3?[/QUOTE]
The plot of the whole series in fact.
[QUOTE=GunFox;26422055]No, the robot will prioritize what it is programmed to. Which would very likely be the preservation of allied lives.[/QUOTE]
Which can then be exploited by an experienced human commander. Robot AI leaders would be good for initial commands, but once you have to create a plan on the fly, they'll just spit out shit that IN THEORY will work. That's what it comes down to; EDI in Mass Effect 2 stated it perfectly.
Post quote soon...