Microsoft will invest $1 billion to build an AI in the cloud that is as aware and independent as a human mind. Hasn’t anyone working at the company seen the movie?
Movies, really. Whether Skynet in the Terminator series, Dr. Forbin’s Colossus, or Matthew Broderick’s game-playing partner WOPR in WarGames, there are ample and compelling examples of how machines built by humans, intended to think like humans, tend to do the same horrible things that humans do… to everyone’s surprise.
Consider all the movie plots in which people walk down dark, narrow hallways in scary places armed only with flashlights, as if they’d never seen how badly that turned out for characters doing it in other movies.
Only this is real life.
The recipient of Microsoft’s largesse and future engine of its cloud sentience, OpenAI, was founded in 2015 as a non-profit by Elon Musk and others to develop AI in “the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
Musk, along with Stephen Hawking and Bill Gates, was wary of the threat AI posed to humanity, not least the danger of exclusive corporate control over its capabilities.
OpenAI is under new leadership now, and its for-profit subsidiary promises that its first financial backers will have their profits capped at only 100 times their initial investment.
A $1 billion spend on a super brain is but a drop in the bucket considering all of the money and effort going into collecting data on everything, so that an artificial general intelligence (or “AGI”) can one day accomplish just about anything.
So OpenAI is only one of many players in the competition to profit from building the first HAL 9000.
Advocates for AI developed unhindered by regulatory or moral concerns claim that the innovation will happen anyway, its performance can’t be any worse than the human decision-making it would replace, and governments are too stupid or timid to provide legitimate oversight, so it’s better to own the results instead of being subject to them.
They could well be right, but that doesn’t mean that we organic beings should only be involved in the process as a raw resource for it to harvest.
Where’s the transparency, follow-up, and accountability for not just the direct commercial effects and occasional good-works spin-off, but for all of the indirect externalities of AI development that are anything but external to our lives, like trust, mental health, economic empowerment (or the lack thereof) and, oh yeah, making sure there’s truly an off button in reach of somebody other than the people who want to keep AI on because they’re getting rich from it?
OpenAI is not a provider of that engagement, nor are any of the academic centers set up to cheerlead for it (since they’re funded mostly by the main players in the competition, too).
Instead, we get technologists making decisions based on their generational or willful ignorance of the lessons of history, whether real or contrived by creative screenwriters.
News of another initiative to build an artificial super mind isn’t news to those of us who’ve already seen the movies.