Do We Trust Machines More than People to Kill?

Trust thought for the day: do we trust machines more than people to actually kill people? Even if we do, should it be allowed? If we don't, who is going to stop the US military from doing it anyway? And what do we do either way?

An interesting, scary article from Wired on the testing of drones in warfare.

"General John Murray of the US Army Futures Command told an audience at the US Military Academy last month that swarms of robots will force military planners, policymakers, and society to think about whether a person should make every decision about using lethal force in new autonomous systems. Murray asked: “Is it within a human's ability to pick out which ones have to be engaged” and then make 100 individual decisions? “Is it even necessary to have a human in the loop?” he added."

Oh Lordy - the writing is on the slippery slope here folks.

The UN says: “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law”.

UN: https://lnkd.in/gMG5dny

Wired: https://bit.ly/3o4EFpT
