Trust Is Given, Not Earned


Humans are imperfect, flawed, biased, emotionally unstable beings. Are those the very reasons why we should trust them…trust ourselves…more than we trust technology?

The idea contradicts the thesis underlying the development of AI and its automated applications, which posits that the variable wetware of human intelligence and judgment can’t hold a candle to the silicon certainty of machine hardware and programming software.

Running factories, operating cars and trucks, even delivering medical diagnoses will be safer and more efficient if we let computers do the work instead of people. Our experience of daily economic and social life will be better, too, if we get ourselves out of it. Commercial transactions will be easier and less biased. When personal interactions are required, they’ll be more predictable and therefore safer and more satisfying.

We’ll find in technology the deserved trust that we’ve traditionally misplaced in our institutions, in one another, and in ourselves.

There are at least three problems with this sales pitch:

First, you can’t take people out of the equation. There are just fewer of them, we don’t have visibility into their activities (let alone the ability to understand them), and therefore it’s harder to hold them accountable when their technology fails us.

So it’s not technology that we’re trusting, really, but rather the abilities and intentions of people we don’t know and over whom we have no say. Technology isn’t a substitute for human agency; it’s an extension of it that’s by no means objectively better. It’s just different.

Second, technology is always biased, as well as imperfect. No amount of machine learning will yield smart devices that are omnisciently aware, and even if it someday does, the road will be long and circuitous; in the meantime, technology will operate according to the purposes and rules of its human designers.

Would you rather trust a person on the other end of a sales or customer service call who may or may not have some unannounced ulterior motive, or an automated ordering or chatbot system that has been built to accomplish things you can be sure will never be revealed to you?

Third, giving more time to technology makes us more dependent on it, and renders us less able to trust that we’ll get from one another what we believe we deserve.

There’s research suggesting that our expectations of immediacy and efficiency from online experiences make us less tolerant of in-person experiences in which we must rely on one another. After all, how could a retail clerk not know whether a certain product size lurked in the backroom, or a restaurant waiter not visit a table the moment after patrons took their seats?

It’s this third point that gets at the heart of trust.

We can trust that other humans won’t be perfect. They’ll make the wrong decisions, sometimes for the wrong reasons, and lack knowledge or empathy that should be givens. They’ll…we’ll serve as our own worst enemies sooner or later. The one certainty of human interaction is that it’ll be disappointing as often as it’s rewarding.

In other words, we can trust that we won’t always be trustworthy. We have thousands of years of experience — data — that inform and support this truth. We choose what or whom we trust and, just like loyalty, we give it even when circumstances suggest we shouldn’t.

The idea that technology deserves our trust is not supported by the facts, for the reasons above.

We need to consciously choose to trust it, even though it might not deserve it.

Or not.

This is the conversation we should have, or the question we should ask ourselves. 

There’s every reason to use technology to inform and advise decision-making. If our sources of information are better, we’ll make better decisions, generally. But technology, and artificial intelligence in particular, can’t earn our implicit trust any more than human beings can destroy it.

Trust is given, not earned.
