I think the headline of this story says a lot:
It was going to have to happen sooner or later – we would end up with robots capable of killing us, so someone was going to have to program some kind of guidelines into them. Without that vital code, a robotic tank would blow up civilians at an even more alarming rate than the flesh-and-blood American armed forces. If you thought the USMC was all about the overkill, wait until you see a fully autonomous mobile howitzer platform choose the wrong ammunition (high-explosive fragmentation rounds, or maybe some kind of charged plasma burst) to take out a sniper in an occupied apartment building.
You would want this to be fairly well road-tested before implementation, of course. As reconnaissance flight leader Debenham pointed out (with a turn of phrase of the highest calibre) when he brought this issue to the attention of the CIC: “Has they not seened Robocop where ED-209 kills that guy till he is dead at the demonstration?”
Curiously, the article makes no mention at all of Asimov’s Three Laws of Robotics, which I shall repeat here for fear of having yet another Wikipedia link in one of my posts (seriously, I pimp that shit way too hard).
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Oh, wait – that doesn’t work for military robotics. AT ALL. Oh well, still no reason not to conjure up the name of Isaac Asimov.
Man, I really don’t trust the Yankees with this one. The Canadarm has yet to kill anyone – maybe we ought to move this program north?
I suggest everyone stock up on copies of How to Survive the Robot Uprising, just in case this turns out poorly. Like, Terminator poorly. Or Battlestar Galactica poorly. Or Hardware poorly. Or Runaway poorly. Or Judge Dredd poorly.