Will Smart Tech Be Moral?

I’ve been thinking about the nature of right and wrong in, well, nature, and how it relates to our hopes for smart technology.

The Universe isn’t fair or unjust; it just has no morals. The movements of atomic particles or planetary orbits aren’t good or bad. Molecular interactions don’t first consider equally viable options, and any “mistakes” don’t last long. Magnetism has no opinion.

It’s a rules-based system, in which the only outcome is efficiency.

When Rousseau and other Enlightenment philosophers waxed poetic about “noble savages” uncorrupted by civilization, they were praising an efficient way of life unencumbered by definitions of right and wrong. It’s how nature lets wasps lay eggs in living caterpillars and baby penguins freeze to death.

The mathematician Pierre-Simon Laplace wrote in 1814 that if it were possible to know the precise location and momentum of every atom in the Universe, the past and future would be entirely predictable. The Universe, in other words, would operate like a clock, a favorite metaphor of classical determinists.
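Laplace’s thought experiment is easy to caricature in code. Here is a toy sketch in Python (mine, not Laplace’s, with a single falling object standing in for every atom in the Universe): given exact initial conditions and fixed rules, the same future unfolds every time.

```python
# Toy version of Laplace's claim: exact initial conditions plus fixed
# rules of motion yield one, and only one, future.

G = -9.81  # gravitational acceleration in m/s^2

def step(pos, vel, dt=0.01):
    """Advance position and velocity one tick, per Newton's laws."""
    return pos + vel * dt, vel + G * dt

def trajectory(pos, vel, ticks=1000):
    """Unroll the state forward; nothing here is left to chance."""
    states = [(pos, vel)]
    for _ in range(ticks):
        pos, vel = step(pos, vel)
        states.append((pos, vel))
    return states

# Identical starting states produce identical histories; the system
# never gets to "decide" anything.
assert trajectory(100.0, 0.0) == trajectory(100.0, 0.0)
```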

Solving this big data challenge would reveal the mechanics of efficiency, but it wouldn’t tell us anything about morality.

We already know that weather doesn’t decide to be sunny or rainy. Teenagers don’t choose to have growth spurts. A dropped glass doesn’t first contemplate shattering.

Nature is a rules-based system. 

Interestingly, so is every technology system. The paradigm for how a tech device or network functions, its mechanism for making “decisions,” is a set of if/then gates etched in silicon, not decision making as any human being would, or could, practice it.

This is the allure of making devices “smarter”: they’ll more reliably keep refrigerators, airplane engines, and medical devices running at optimum efficiency. Highways will be safer when they’re filled with vehicles driven by computers that make the safest decisions by default and communicate with one another to minimize wasteful expenditures of lives and resources.
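To make that rule-following concrete, here is a minimal sketch of the kind of if/then logic described above, written as a hypothetical refrigerator controller. The setpoint, tolerance, and function name are all invented for illustration.

```python
# A hypothetical refrigerator controller: every "choice" is a fixed rule.
# Nothing here weighs options, regrets, or reconsiders.

TARGET_TEMP_C = 4.0  # assumed setpoint
TOLERANCE_C = 0.5    # assumed dead band around the setpoint

def compressor_should_run(current_temp_c: float, compressor_on: bool) -> bool:
    if current_temp_c > TARGET_TEMP_C + TOLERANCE_C:
        return True           # too warm: switch the compressor on
    if current_temp_c < TARGET_TEMP_C - TOLERANCE_C:
        return False          # too cold: switch it off
    return compressor_on      # within the band: keep doing what we're doing

print(compressor_should_run(6.2, compressor_on=False))  # True
print(compressor_should_run(3.1, compressor_on=True))   # False
```

Every branch was settled in advance by whoever wrote the rules; at runtime there is nothing left to deliberate.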

Machines do efficiency better than people because they’re unencumbered like those noble savages. Like the Universe from which their rules derive, they don’t do morality.

Morality is a human condition. It takes imperfect people to do things that aren’t always in our own best interests, and to do things to others that aren’t in theirs. Morality is doing the right things for the wrong reasons. It’s the result of flawed, often conflicting aspirations to rise above the commands of an uncaring system.

Can the “cost” of saving a single life from starvation or violent crime actually be calculated against some “benefit”? There are probabilities to consider, of course, but what of the qualitative value of the touch of a child, the sound of a sung melody, or a friend for life? The nuanced rules that govern such things might be as unknowable as the position/momentum uncertainty at the heart of physics.

What if morality is the exception to the rules of the Universe?

What if morality is inefficient, and exists in contradiction to the outcomes of an unfeeling system?

This is at the heart of Elon Musk’s and Stephen Hawking’s fears about AI: a decision-making system untethered from the inefficiencies of morality could make horrible decisions in the spirit of efficiency (or simply due to bad programming). Worse, it could develop consciousness, which would render it as flawed as humans (the 1970 movie Colossus: The Forbin Project dramatized this fear).

It’s something to think about as plans for smart devices race along, and we willingly give up more of our selves — our information and habits, and therefore our imperfect decision making — to tech systems that analyze us, and feed back everything from what we should see and know, to where we should go (and, in the case of dating services, with whom).

The hopeful argument is that moral questions are no different from any other: if you include enough variables, the “right” decisions will be made, whether running refrigerators or saving lives. After all, everything we cite as making us “human” is a product of the Universe, so even our imperfections function according to knowable rules. Efficiency and morality are, ultimately, the same thing.
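Taken literally, that argument treats a moral choice as a scoring function with enough terms. A naive sketch (the options, variables, and weights below are invented; whether such numbers could ever be meaningful is exactly what is in question):

```python
# The "hopeful argument" as naive expected-utility arithmetic: score each
# option over weighted variables and pick the maximum. All numbers are
# invented for illustration.

WEIGHTS = {"lives_saved": 1000.0, "cost": -1.0, "comfort": 0.1}

def utility(option):
    """Weighted sum over whatever variables we thought to include."""
    return sum(WEIGHTS[k] * option.get(k, 0.0) for k in WEIGHTS)

options = {
    "swerve":     {"lives_saved": 1.0, "cost": 20000.0, "comfort": -5.0},
    "brake":      {"lives_saved": 1.0, "cost": 500.0,   "comfort": -1.0},
    "do_nothing": {"lives_saved": 0.0, "cost": 0.0,     "comfort": 0.0},
}

best = max(options, key=lambda name: utility(options[name]))
print(best)  # "brake", under these invented weights
```

The catch, of course, is the phrase “enough variables,” and who chooses the weights.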

I just wonder whether reducing our understanding of the Universe and ourselves to software code is an Enlightenment wet dream.