One Congressman’s Crusade to Save the World From Killer Robots


#1

From the National Journal:
If a robot soldier commits a war crime, who is held accountable?

You can’t punish a collection of parts and coding algorithms. But can you blame a human commander who gave a legal order, only to see the robot carry it out incorrectly? And what about the defense manufacturers, which are often immune from the kind of lawsuits that would plague civilian outfits if their products cost lives?

The culpability question is one of a host of thorny moral dilemmas presented by lethal robots. On the one hand, if effective, robot soldiers could replace ground troops and prevent thousands of American casualties. And robots aren’t susceptible to many of the weaknesses that plague humans: exhaustion, sickness, infection, emotion, indecision.

But even if robot warriors can keep American lives out of danger, can they be trusted with the complicated combat decisions now left to human judgment?

Rep. Jim McGovern thinks not.

Interesting questions raised by advancing technology. I know it seems a bit flaky, but with remotely operated systems already common and autonomous technology beginning to be fielded, these are valid questions.


#2

Every time I read about ethics and technology (privacy, data mining, big-data analytics, etc.), it isn’t the technology that gets blamed so much as the person using it. For example, I’ve seen database vendors defend their products on the grounds that the people who bought them simply didn’t know how to use them. The buyers thought the technology was a magic bullet (which is odd, given that the vendors pitch it with exactly that impression).

But in the end, I think the buyers and the sellers can all agree that there’s really no such thing as a ‘fully automatic’ gizmo. Someone has to push the button, pull the trigger, or give the command. Accountability seems like a no-brainer regardless of what’s invented. :shrug:


#3

The ethical issues involved are one reason why I believe a person should ultimately give the robot permission for any potentially lethal or destructive action it takes. In other words, it should not be able to attack until a person expressly gives permission for each and every lethal or destructive act.
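In software terms, that amounts to an authorization gate: the machine may propose an engagement, but it cannot execute one without a matching, person-specific approval on record. A minimal sketch of the idea (all names and classes here are hypothetical, not any real weapons API):

```python
# Sketch: nothing fires unless a named human has approved that
# specific engagement, and the approver is recorded by design.

from dataclasses import dataclass

@dataclass(frozen=True)
class Engagement:
    target_id: str
    weapon: str

class Authorizer:
    """Holds one-time approvals granted by human operators."""

    def __init__(self):
        self._approvals = {}  # Engagement -> operator who approved it

    def approve(self, operator: str, engagement: Engagement) -> None:
        # The human decision point: a specific person approves a
        # specific act, and is recorded as having done so.
        self._approvals[engagement] = operator

    def release(self, engagement: Engagement) -> str:
        # The machine cannot act without a prior, matching approval.
        operator = self._approvals.pop(engagement, None)
        if operator is None:
            raise PermissionError("no human authorization for this engagement")
        return operator  # the accountable party, on the record

auth = Authorizer()
strike = Engagement(target_id="T-042", weapon="AGM-114")
auth.approve(operator="Capt. Doe", engagement=strike)
print(auth.release(strike))  # -> Capt. Doe
```

The point of the one-time approval (`pop` rather than a standing flag) is that permission covers one act, not a whole category of acts.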


#4

Except when the button press or the order results in totally unexpected behavior.

If a bug in the code causes the machine to misbehave, who exactly is responsible?

When a POS system fails to give the right change, the answer is pretty easy.
But when the system is interacting with many more sets of data, the issue may not be so cut and dried.

Sci-fi authors have been wrestling with this for years.

Unfortunately, we are nearly at the point of fielding a fully automated and armed robot, and we are no closer to being able to answer the question of accountability.


#5

Again, a no-brainer. Ever heard of a warranty? Customer service? Those exist to make sure a company takes responsibility for defective products. That’s the risk you take when going into any business, technology or not.


#6

What does Arnold Schwarzenegger have to say about this? Among politicians he would be the expert on futuristic killer robots.


#7

Any weapon can malfunction and kill innocents. An arrow or bullet may go astray, or may be mistakenly aimed at a noncombatant. Bombs can miss their targets. As weapons have become more complex, however, collateral damage seems to have gone down. So I really do not see that more complex and remote systems would be a greater danger, unless the military did something really stupid, like using Comcast for communications. Try getting through to *them* for customer support!


#8

It is a risk the seller takes, but if the automaton is capable of harming human life, it is other people’s lives being risked.
Warranty service is more difficult under these circumstances.
The company that holds the warranty is going to do all it can to wiggle out of it.
And the complexity of the system, together with the complexity of the environment it is released into, may provide all the wiggle room it needs.

Consider for a moment: two systems interact, and bank accounts are lost as a result of the unexpected interaction.
Who is responsible? The bank? The designer of one system? The designer of the other?
And how can we really hold either designer responsible when they had no way to test one system against the other?

I think we can trust our government to find and implement the most stupid action possible.


#9

You’re making this sound way more complicated than it actually is. You’re asking who is responsible for products that go haywire. Answer: The one selling the products. The bank takes responsibility for the accounts but holds the designers responsible for the accident.

Like the other poster says, accidents happen. Bugs get written. That doesn’t make accountability impossible to trace.


#10

Which designer is responsible?
It is common now for multiple designers from many different companies to be involved in the applications running on a specific platform.

Consider a small computer sales and repair shop… it could easily have a SQL Server (Microsoft) database for its sales data, QuickBooks (Intuit) for its accounting piece,
Kronos for its employee and timekeeping data, and SysAid for its ticket system and work-order tracking.

All of these apps must work well on their own, but they must also handle interactions with each other.

When something goes wrong, which of the software companies do we hold responsible? Lost accounts… perhaps SQL Server, or perhaps QuickBooks. Of course, either vendor could readily point at the other and claim the problem lies elsewhere.

And even that simple small business, with fewer than 50 employees, has a computer system too complex to ever understand with certainty.

Now just take that and increase it to the size of a corporation.

Or more to the point, increase it by the number of subcontractors involved in building a weapon for the government.

Now, you do have a point that, as consumers, we can simply hold responsible the one party that put all of the products together.

Of course, in this country there is very little that can be done when the government is the one that did it.

So… if the government arms a drone and it flies off and kills an innocent person, is anyone likely to see jail time?


#11

This is different from your original question. Just because another party insists the problem lies elsewhere doesn’t make it so. It just means you need due process and due investigation, which in turn means you’ll still trace someone responsible. No matter how complicated the detective work, someone is responsible. Period. Your first question is already answered. What’s complicated is figuring out who, not whether there ever was a who.

Jail time isn’t the only undesirable consequence of misusing technology or building it badly. :wink:


#12

OK.

I went to a store and had my card declined at the register.
I verified I had the money and tried again.

The card was declined again.

Then the cashier and I watched as the register declined the card twice more (without my swiping it again), and then the charge went through.

I later found my card had been charged five times.

Who was responsible?

The store with the system?
The card company?
Perhaps some poor coder sitting in a cube somewhere, writing a subroutine?
Perhaps his buddy in the next cube, writing a different subroutine on the same system?
Perhaps the ISP, for a communications failure?
Maybe me, for wearing out the card?

There are so many cooks in this kitchen that it is not possible to ever find out who did it.

And with more complex systems it gets even more complicated.

So if a drone kills someone without a human actually pulling the trigger, who do we hold responsible?
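To make the failure concrete, here is a hypothetical reconstruction (a sketch, not the store’s actual code) of how a register can post five charges for one purchase: it retries on any failure reply, while the processor’s side already recorded the money moving each time.

```python
# Hypothetical reconstruction of the "charged five times" story.
# Every name here is made up.

LEDGER = []  # charges that actually posted on the processor's side
_replies = iter([False, False, False, False, True])  # what the register hears back

def processor_charge(card: str, amount: float) -> bool:
    """Posts the charge, but (per the story) mis-reports failure four times."""
    LEDGER.append((card, amount))  # the money moves every time...
    return next(_replies)          # ...even when the reply says "declined"

def register_checkout(card: str, amount: float, max_tries: int = 5) -> None:
    # The naive part: retrying with no idempotency key, so the processor
    # cannot tell a retry from a brand-new purchase.
    for _ in range(max_tries):
        if processor_charge(card, amount):
            return
    raise RuntimeError("card declined")

register_checkout("4111-XXXX", 19.99)
print(len(LEDGER), "charges posted for one purchase")  # -> 5
```

And notice how the blame splits even in this toy version: the register’s retry loop, the processor’s reply handling, the network between them. Payment systems usually guard against this with an idempotency key (the same unique purchase ID sent with every retry, posted at most once), which is just one more component that can be implemented wrongly.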


#13

Actually, that’s what a head chef is for. :thumbsup: The thing with your example is that it presumes only one customer. I see at least two: one is you, but the other is actually the store. That’s what makes retailers part of a B2B market. You have a right to complain to the store, and that in turn prompts the store to take it up with the people who sold them that system.

Now take this chain of events and apply it to a chain of command. Sure, it can get complicated in the case of the drones, but at the end of the day (proverbially, at least), someone (or several someones) will have a share of the responsibility. Complications don’t eliminate their existence.


#14

When a human life is taken, there needs to be a single someone who has pulled the trigger.

Complications do not remove the responsible party. But they can thoroughly mask that party.

And a responsibility that cannot be recognized is tantamount to no responsibility at all.


#15

Then you’re just going to have to unmask them. I think you’re contradicting yourself here. Just because the detective work is hard doesn’t mean responsibility ceases to exist, as you imply. The only fact here is that the mystery can be difficult.


#16

I do not believe you truly understand the complexity inherent in these systems.

Not only are there any number of people writing the code; in some instances you have machines writing the code.
You have any number of people combining the systems with others.
And then you have any number of people actually working with the finished product.

Any one person could be responsible for a system issue.

Then we also have to deal with the lack of records on any specific event.

Of course, you also need to consider that the coders themselves may not know the final product. They are given specifications and write a routine to suit.
They may not know whether their final product keeps a missile on track or monitors DoD payroll. They just know a given spec.

All of this leads me to conclude that it is not a good idea to hand decision-making over to a drone. If a life is at stake, there must be a human who decides to pull the trigger.
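And if a human must decide, accountability also needs that decision to leave a record. A minimal sketch of the kind of append-only audit trail I mean (all names hypothetical; the hash chain simply makes after-the-fact edits detectable):

```python
# Sketch: every release is logged with the deciding human attached,
# and each entry chains to the previous entry's hash.

import hashlib
import json
import time

class AuditTrail:
    """Append-only log; tampering with an old entry breaks the chain."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64

    def record(self, operator: str, action: str, target: str) -> dict:
        entry = {
            "ts": time.time(),
            "operator": operator,  # the human who decided
            "action": action,
            "target": target,
            "prev": self._prev_hash,
        }
        self._prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

log = AuditTrail()
log.record(operator="Capt. Doe", action="release", target="T-042")
# Investigators can later walk log.entries and re-verify the hash chain.
```

It does not solve the which-subcontractor problem above, but it at least answers the narrower question of who pulled the trigger.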


#17

Because it’s not relevant if you’re asking who will be responsible for a faulty product. If you’re simply going to bemoan the work that will be needed to figure it out, that’s a different discussion.

Nobody is really arguing against this. Replace “trigger” with a voice command, an activation switch, or any other human-dependent mechanism, and you’ll get essentially the same thing. The complexity of a device doesn’t eliminate that; it just gives a lot more work to the contractor providing the weapons system, coders and all.


#18

This discussion has both an angry tone and an adult, philosophical tone. I think the tone will change somewhat when some 12- or 13-year-old figures out how to hack into the system and give it new orders. :rolleyes:

What will the new tone be? I don’t know, but I’m pretty sure it won’t be quite the same after that… :rolleyes:


#19
