That an organization called the Campaign to Stop Killer Robots exists would not come as a surprise to anyone who follows science fiction and is generally aware of the paranoid nature of some segments of our society. That its members include such highly respected worldwide groups as Amnesty International, the Nobel Women’s Initiative and Human Rights Watch, along with diplomats, scientists, engineers and former military members, is downright scary.

As it turns out, the Campaign to Stop Killer Robots is not a group of sci-fi fans who expect machines to take over the world. It’s a 2-year-old organization that’s working to prevent military contractors from developing drones capable of deciding on their own when to pull the trigger on a suspected target.

And the group may be losing ground.

BAE Systems, the British aerospace and weapons firm that has operations in New Hampshire, has a drone that could easily be outfitted with the software to act on its own once set in motion. Northrop Grumman has one, too.

So far, the political climate has been one of caution regarding this historic step. Most members of the United Nations appear to be against the practice, according to a report earlier this month by the McClatchy-Tribune News Service. But two nations expressed a preference to hold off on banning it: the United States and Israel.

The U.S. has a poor human rights record when it comes to the use of drones. For a decade we’ve been targeting suspected terrorists in Pakistan, Afghanistan, Yemen and elsewhere. Far too often, the resulting attacks, in which a human pilot effectively pulled the trigger based on intelligence information, have killed bystanders. Sometimes they’ve even destroyed targets that included no hostile agents. Last month, when President Obama apologized for the deaths of two hostages in a drone attack on an al-Qaida compound in Pakistan, he made the collateral damage sound like an aberration. But a report last fall in Britain’s The Guardian put the death toll from U.S. drone strikes aimed at just 41 terror suspects at 1,147 people.

That’s sobering, but it’s not the point the Campaign to Stop Killer Robots is trying to make. The organization is calling for a ban on drones that can decide on their own whether to pull the trigger, not on drone use in general. And it’s not afraid the practice would lead us to some sort of machine-dominated, apocalyptic future out of the “Terminator” films.

The real danger lies in taking the humanity out of war.

If war is hell, then war using drones threatens to become merely video-game hell. A century ago, dropping bombs from high above the fray removed much of the danger of fighting. Operating drones from as far as 1,000 miles away from the action took that a step further.

Giving the machines the ability to process input and to decide when, or whether, to shoot is a step too far. It removes not only the danger, but also the culpability. The prospect of taking innocent lives may be, in many cases, the only thing stopping an attack. That’s a safeguard we can’t afford to dismiss.

And on a larger front, we can’t afford to hand our military the built-in excuse that faulty software made the wrong call, or that no one can be held individually accountable because a machine made a poor choice.

We do see a bright side to the technology, however. If a drone can be programmed to fire based on its assessment of conditions at a targeted site, perhaps it can also be programmed to call off an attack based on those same inputs. That aspect of the technology would be welcome and could go a long way toward making drone strikes the “precision” method of warfare that U.S. leaders have touted them to be.