There are many reasons why we shouldn’t regulate artificial intelligence, or “A.I.,” and one reason why we should.
The case against regulation was argued most recently in a New York Times op-ed that gave at least three reasons: 1) nobody can agree on what even constitutes A.I., 2) machines have been doing “smart” things for a long time, and nobody saw fit to regulate them, and 3) the only legislation we need is legislation that ensures humans have visibility into what the silicon brains are doing.
Trusting technology is the popular position and, speaking as an avid user of all things technological, I’m a believer.
But there’s a line dividing the decisions my toaster makes about how long to burn bread from, say, the decision an autonomous car is forced to make between saving the lives of its occupants and saving the pedestrians in front of it.
A.I. that decides what we know by what’s revealed on our smartphones, or what colleges or jobs we get based on a deep understanding of what we say and do, is different from the machine smart enough to direct an elevator to stop on the right floor.
The case for regulation is simple: We regulate the behavior of organic intelligence.
Why wouldn’t we do it for artificial versions, too?
In fact, you could see most of the laws by which we live as originating in a need to regulate human intelligence (or the lack thereof).
Speed and other road laws exist because we can’t trust that every driver will be smart enough to drive responsibly. Criminal law protects people from those who don’t understand that community and mutual responsibility make society possible. Building codes ensure that structures can withstand stiff winds, stopping a builder dumb enough to bet that he could avoid bad weather. The list goes on.
Why would the behaviors of imperfect A.I. be treated any differently?
Again, the argument against regulation is comforting, because it assumes that technology will improve constantly and, therefore, needs to be left alone (we humans will just impede and muck up its inexorable advance toward perfection). Machines, the thinking goes, don’t have the same biases or loose morals as human beings, so that improvement will always be for the betterment of all.
But we know this isn’t actually true.
A specific tech artifact might not possess any intent to do wrong, but its individual and/or social effects might be deleterious. The intentions of its designers and investors could be, too, if they’re at best indifferent to those effects, so distinctions between “right” and “wrong” could become biases literally hardwired into A.I. devices.
So, if at some point your home assistant becomes all-knowing and aware, is it reasonable to assume that it will be incapable of mistakes? How will you know, for sure? Mere visibility into why it locked you out of your home, or refused to feed your dog, won’t substitute for rules that keep it from doing so in the first place.
It’s accepted wisdom that A.I. will disrupt the very basis of how we define work and live our lives, so you’d think we’d want to expend at least some effort trying to understand and anticipate those impacts via legislation.
If you believe it’s reasonable that we regulate the safe operation of toasters and elevators, why wouldn’t we want to regulate the A.I. responsible for driving our cars and operating our cities?
Dr. Forbin, what do you think?