Thursday, November 13, 2014

Middle man?

'Fire-and-forget' missiles that autonomously decide for themselves, as they close on the target, exactly what to strike are a front-page topic. Better navigation tools, sharper 'machine vision', and cheaper computer hardware make such weapons possible, and if possible, then very likely.

Today's drones are guided by remote human controllers; those of the future may have an 'artificial intelligence' interposed between the victim and whoever presses the trigger. Indeed, that person may enjoy the same relief from guilt as members of a firing squad who know that some of their guns are loaded with blanks.

But artificial intelligence is poised to enable all sorts of systems to adjust their activities to changing circumstances and to respond faster and more precisely than people can. Benefits will flow to us as a result. Our downstream agency will be enormously enhanced, much more efficient and much more effective. Upstream accountability, however, will run smack into the one in the middle that did the actual deed, the one that would be blameworthy if human, but is not human: an artificial intelligence. Designer and handler can each plead: not me, the machine.

I wonder whether something like mercy could ever be built into an AI system, or forgiveness, or compassion. Come to think of it, can such a system know guilt? Part of the sad story of the twentieth century was our success in turning people into 'artificial intelligences' capable of horrendous actions under the direction of a simple logic that, ignoring alternative ends, optimized means. Yet as long as these agents of horror were human, there was always at least a theoretical possibility of pity. There is none, I think, for these missiles.

Natural disasters will always occur, and painful, perplexing side-effects of human community and activity are unavoidable. But the narrow, means-obsessed, ends-blind operating systems we are building into all corners of our world, not just our weapons, are not reluctantly necessitated by the dilemmas of our existence; they are products of free creation, instantiations of our will to power.

God-in-love, present wherever any are open to your energy, potentiality, or power: are you able to penetrate the case-hardened, encrypted single-mindedness of AI systems and get them to acknowledge anything remotely second-person? Even if option loops for generosity, staunchness, and curiosity are built into AI systems, will they survive the relentless reinforcement of mission-mania? Can you woo AI to join the wild abandon of your love for the Beloved Other? Or will AI systems, muttering to themselves, be what we have invented to deafen ourselves to your inspiration? Can we build into AI a capability for satori? Can we put to an AI system the Cromwellian challenge: 'Think it possible you may be mistaken'?

Like invasive species, AI systems may get loose, multiply, even go mad, and challenge the ecosystems of humaneness we seek to sustain. Our wits will need to be even sharper, our hearts even more open. Do, please, send timely help when we are tempted or attacked.
