Cast your imagination back to the turn of the last century. You rode a horse, a magnificent creature. Alternatively, you drove a horse and buggy, all the while an American entrepreneur was stalking you and weaving a wild tale about progress and the imminent demise of the horse. He spoke of a new age of technology. Change was just over the horizon. The technology was so promising it would elevate mobility to hyper speeds, allowing humans to soar with the gods. Under this new world order, "horsepower" would become a measure, a metaphor, a symbol, a reminder of the old ways. We wouldn't need real horses. No more horses.
Come on. Three million horses were counted among the combatants in the American Civil War in the 1860s. In fin-de-siècle North America, horses were essential to urban mobility and farm life. What? No more horses?
Who would listen to this nonsense?
Our great-grandparents did. They were kids. They bought in. Why wouldn't they?
The entrepreneur was Henry Ford, and Ford was right. Technologists, once referred to as engineers, had birthed the internal combustion machine. One Nicolaus Otto perfected the system: steam out, gasoline in.
Praising the efficiencies of oil and gas, Ford modified the combustion engine to suit his vision and championed the gas-fueled car. The Ford Motor Company was on its way. Car owners unwittingly became part of the grand experiment.
Ford mass-produced an automobile he dubbed the Model T.
Ford wished for all Americans to relish the benefits of automobile travel, emphasizing affordability, simplicity, and durability. Ford believed that within fifty years anyone in the U.S. who desired a car could own one. The best-selling feature? Car owners became the test drivers, offering necessary feedback to improve travel features and safety systems.
Ah, let's raise a cheer for safety systems, or as recently termed, "guardrails." Or even more grandly called, "ethics." Why were guardrails necessary for an inanimate object like a car? Nothing is perfect, even Ford acknowledged that. There were dangers in car use, and, sure, accidents were inevitable. But should the fear of misuse (theft, shoddy workmanship and car crashes) stop most of us from enjoying the astounding super-mobility the car afforded us? Certainly not.
In any era, dissenters exist—oftentimes labeled hysterics.
Dissenters have vocalized concerns about a myriad of pressing issues, from enslavement to male-only suffrage, from lead in baby-crib paint to Thalidomide, from C8 to the fervor of acolytes in male-dominated religions like Judaism, Islam, and Christianity. Dissenters, by definition, are not a great mass of the population. Until dissenters awaken a critical majority of voters (especially in a liberal democracy), they remain voices crying in the wilderness.
Shoppers tend to ignore dissenters. Shoppers go about their business, uncaring about the open pits they are about to step into, or the asteroids zooming through space. Shoppers are something akin to hapless, earth-bound dinosaurs. (Dissenting dinosaurs flew the coop.)
And anyway, back in 1908, individuals felt they knew about the downside of the automobile. Ford had told everyone. Yes, Ford acknowledged the concerns of the dissenters, answering their sour questions with optimistic assurances about improvements, optimistic assurances about the future of human mobility, and optimistic assurances about the future of accident-prevention. Ford declared that the pursuit of progress would drive car companies to create better-performing and safer cars—ethical cars. All good. A bright mobile future was in store.
Now for the rest of the story. An unexpected byproduct of technology (or of evolution itself) is what one calls the "unforeseen consequence." The unintended accident. Some accidents are positive, e.g., semaglutides (developed for type 2 diabetics) apparently help certain people to lose weight. That was a surprise.
Some accidents, as we're currently learning, are very bad.
Of all the foreseeable future complications with the car, Henry Ford didn't give much thought to the emissions of an internal combustion machine bringing on the human endgame.
Ford's propaganda about our mobile future had shoppers looking in the wrong direction. People looked at the car when they should have looked at the fuel. Dissenters, always the dissenters, tried to warn shoppers, but we had to get to that store. Be the first to buy. Damn, this line is long.
The twenty-first century faces an existential crisis. One hundred and fifteen years after the release of the Model T, the body politic well knows the message bleached coral sends us: The "unforeseen consequence" of the internal combustion machine has nothing much to do with guardrails or car accidents. Our oceans are boiling. CO2 is killing life on this planet.
Skip to a current replay of the old story: Speaking of dangerous COs, how does Sam Altman, CEO of OpenAI, operate? Is he first and foremost a sales maven, a shill for AI? Yes, he is.
The American salesman. Even before Arthur Miller, Canadians knew him well. Maritime-English writer T. C. Haliburton wrote satiric pieces on this person. Nineteenth-century Canadian readers laughed at a certain Sam Slick.
Slick's soft-sawder merchandizing was brilliant and aimed at pre-Confederation Bluenoses (Maritimers). Slick wanted Bluenoses to buy mantel clocks. Quick-as-a-wink, Slick made sales. Slick could sell clocks, or anything, because he understood human nature. What does it mean to be human? We're relentlessly competitive, sometimes lazy, readily embarrassed, easily flattered, often misogynistic, and furiously desirous of prestige and respect.
Sam Altman is Sam Slick. Sam Altman knows us. He knows our weaknesses. Have you ever listened to Altman? Take a listen. What a sales pitch. In admiration of Altman, Sam Slick would have stopped his clocks.
And when you hear Altman make his pitch, you might also remember Ford and the Model T. The two men, actually the three men, two real, one fictional, shared the sound of optimism, promoted as beneficial the transformational nature of their admired technology, created a need where there was none, and finally, admitted the downside: fails will happen. But oh, what joy awaits us.
The modern version of joy: Altman declares AI will advance healthcare, medicine, and driverless cars; AI will solve complex problems; AI will encourage better urban planning, etc.
Also, you, the non-techie, are invited to march in the AI parade. AI needs Chatbot users; we're part of the grand super-intelligence experiment to help the scions of Silicon Valley, those who grapple with the wild west of lawless AI. Chatbot users, it is hoped, will offer programmers suggestions for guardrails and manners.
Users are in the drivers' seats yet again?
Although complimentary to users, the let's-turn-it-loose-on-them OpenAI directive doesn't sound very wise. Or efficient. Or hopeful. Social media uses us as commodities. How's that going? But I digress.
To be fair . . . Altman admits the transition to super AI will be tricky.
Despite dissenters' concerns about the wisdom of Pandora (Altman) freeing the secrets in AI's black box, there's apparently no stopping him. The CEO of OpenAI goes right ahead and flips up the lid even though he and we can't quite see inside the black box. Nonetheless, he sets the game in motion. Are you up for playing the AI game, Chatbot users? Sam Altman cannot wait to find out whether sapiens and AI can reach and/or stay in alignment.
We hear the soft sawder (flattery) and look at the sell. And we wonder. What about Altman on AI's need for massive energy? Altman almost blows off the question. He says algorithmic improvements will reduce the energy it takes to run AI, even if using AI becomes as popular as car ownership. Note: AI is very popular.
We come (at last) to the central point of this piece.
Computer scientists such as Geoffrey Hinton and Ilya Sutskever, as they worry and warble about the "unforeseen consequences of the exponential use of AI," are (occasional) dissenters.
Hinton and Sutskever have challenged Altman. Tell us about the unforeseen, they demand. Christ on a cracker. THEY don't know? We, the users, ask everyone on the board of OpenAI, Sutskever, Hinton, to elucidate their concerns . . . you're the geniuses; YOU tell us where the unforeseen pitfalls lie.
The bad news. The geniuses cannot tell us because the unforeseen is unimaginable. Literally unimaginable.
Henry Ford couldn't tell us about the human endgame he set in motion because, to him, a climate crisis of global proportion was unimaginable.
No one knows where to look for the unforeseen with AI, and that's the BIG problem. AI will have fails we cannot begin to fathom. No one, not the savviest dissenter in 1908, thought engine emissions would create an existential crisis. Pollute the air, for sure. Kill us all, no.
Joseph Conrad's favorite narrator, seaman Charles Marlow, illustrates time and again throughout Lord Jim that sapiens is smart enough to know at least one great truth: the imaginable doesn't change our lives; the unimaginable does.
The theme of the "unexpected consequence" is woven into Marlow's narrative, and fear of the unknown reflects the complex interplay between individual actions and future repercussions. The novel explores the consequences of Jim's moment of moral weakness and abandonment of a sinking ship, the Patna, which leads to a cascade of events shaping the course of his life, and the lives of those around him.
What are we to think about Geoffrey Hinton's motives for abandoning Google? Hinton is too vague. What are we to think about Sam Altman's abandoning the concerns of the OpenAI Board and coaxing Ilya Sutskever to leave his worries on the boardroom doorstep? And once again we hear Sutskever has quit the board to start his own "meaningful" project. We don't have a clue what's meaningful in Sutskever's world, or why this computer genius felt he couldn't do meaningful work at OpenAI.
With little explanation as to why they fear super AI, the geniuses remain alarmed. There's lots of moral abandonment going around. The dissenters leave us shaking in our boots because they're too afraid to tell us what it is they fear, and we know what they fear won't even be the half of it.
Morally, is Sutskever a contemporary Lord Jim?
Will the OpenAI Board's reinstatement of the slick salesman Sam Altman lead to a cascade of unknown events reshaping the course of all our lives?
What will the technologies of alternative man (Altman) look like should sapiens survive the next hundred years? What are the unforeseen consequences of AI? Please remember, the geniuses have a black box, but they don't know what secrets lie in the box, or what mischief they can release. There's trouble coming. That's for sure.
But where the fuck should we be looking for it? We don't know.
What? No more humans?