The declared mission of Stanford’s Institute for Human-Centered Artificial Intelligence is “to advance AI research, education, policy, and practice to improve the human condition.”
It makes me think we humans are doomed.
For starters, the phrase “human-centered artificial intelligence” is an oxymoron, since AI is based on silicon and code, not people. Sure, human beings designed the first chips and wrote some code, but most of the manufacturing and machine learning that has AI speeding up the IQ food chain is accomplished by, well, AI.
Robots are already learning on their own: human workers are often paired with them so the robots can study how to duplicate, and then supplant, their efforts. Factories not only run autonomously but improve their own processes, without human intervention.
What could Stanford possibly mean by it? Its Letter from the Directors reads: “If AI is to serve the collective needs of humanity, it must incorporate an understanding of what moves us — physically, intellectually, and emotionally.”
Uh oh. This sounds like the ending of the Twilight Zone’s “To Serve Man” episode.
It’s a cookbook!
The Letter goes on to say: “We are bringing together leading thinkers across multiple fields so that we can better prepare future leaders to learn, build, invent and scale with purpose, intention and a human-centered approach.”
Forget that the sentence borders on gibberish; what is stunning is that it describes Faculty and Distinguished Fellows who are all fully bought into the inevitability of AI dominating our lives. Many of them come from the big tech firms that are developing the tools, and others teach its implementation to wanna-be overlords.
Maybe they’re true believers, or they truly believe in the money to be made, but the people pictured on its website are a veritable who’s who of the Robot Domination Dream Team: scientists, educators, and business people ready to figure out how to ease our transition to a future that isn’t human, but is at least human-centered.
What’s so demoralizing to me is that there’s no countervailing authority that can challenge their preconceived notions and self-fulfilling prophecies.
You’d think a university entity formed to study such an all-encompassing topic would include scholars, workers’ rights advocates, psychologists, economists, even theologians who could debate the implications of AI, not confab on how to make it more palatable.
Only the conversation is over, at least at Stanford.
It’s a cookbook.