Trust in AI given its long-term capabilities
Trust thought for today: how will we trust anyone or anything if AI fulfils the 'potential' that some see in it, and that seems inevitable?
I really liked this article from Joanna Bryson in Wired (unsure if paywalled, sorry). I hesitated for a few days to say anything about it, as the potential for me getting the wrong end of the stick is high! But hey. Pretty challenging, but for me ultimately hopeful, though I didn't get there until near the end! AI may do most of the things that humans can, but it isn't human, we are, and we and our values are actually in charge!
"What we do and must care about now is creating an identity for ourselves within the society of humans we interact with. How relevant are our capacities to create through AI what used to be (at least nominally) created by individual humans? Sure, it is some kind of threat, at least to the global elite used to being at the pinnacle of creativity. The vast majority of humanity, though, had to get used to being less-than-best since we were in first grade. We will still get pleasure out of singing with our friends or winning pub quizzes or local soccer matches, even if we could have done better using web search or robot players. These activities are how we perpetuate our communities and our interests and our species. This is how we create security, as well as comfort and engagement.
"Even if no skills or capacities separate humans from artificial intelligence, there is still a reason and a means to fight the assessment that machines are people. If you attribute the same moral weight to something that can be trivially and easily digitally replicated as you do to an ape that takes decades to grow, you break everything—society, all ethics, all our values. If you could really pull off this machine moral status (and not just, say, inconvenience the proletariat a little), you could cause the collapse, for example, of our capacity to self-govern. Democracy means nothing if you can buy and sell more citizens than there are humans, and if AI programs were citizens, we so easily could.
"So how do we break the mystic hold of seemingly sentient conversations? By exposing how the system works. This is a process both “AI ethicists” and ordinary software “devops” (development and operations) call “transparency.”
"What if we all had the capacity to “lift the lid,” to change the way an AI program responds?"
Worth reading. Interested in what you think, Mathew Mytka and Arnold Schrijver?
Article found here