TALKING TO MACHINES?

What does it mean to be human in the age of AI (Artificial Intelligence)? We’re still figuring that out. Is “artificial intelligence” an oxymoron, an extension, or a rip-off of actual human cognitive abilities, talents, and skills? Could we be increasingly sucked into talking more to machines than to each other? And what will AI do to kids, who’ve known no other reality and are still forming neural connections?

CONNECTING OR NOT?

“Human life and humanity come into being in genuine meetings. There, man learns not merely that he is limited by man, cast upon his own finitude, partialness, need of completion, but his own relation to truth is heightened by the other’s relation to the same truth—different in accordance with his individuation and destined to take seed and grow differently.” (Martin Buber. Distance and Relation. 1950). But the digital age seems to have short-circuited the process. Though it dawned with promises of more connection and more community, in actual practice bonds can “deteriorate with AI dominating various aspects of life. Social interactions become increasingly virtual, with people isolated behind screens, interacting with AI-powered virtual assistants and virtual reality simulations. Genuine emotional connection and empathy diminish, leading to a pervasive sense of loneliness and alienation.” (Nicole Serena Silver. AI Utopia and Dystopia. What Will the Future Have in Store? Artificial Intelligence Series. Forbes. June 20, 2023). Pessimists warn “we’re outsourcing our minds and bodies to algorithms.” And some “very smart people [among them a number of early AI developers] worry that AI may make slaves of us all.” (Ali Minai. Between Golem and God: The Future Of AI. June 7, 2021). Assume exponentially greater impacts on adolescents, still in formation.

STORYBOOK MEANINGS

As a species, we’re described as social and “meaning-making animals.” We use stories and narrative to make sense and learn; share and expand on ideas. This happens at both individual and societal levels. My own fascination with stories began early and very pre-digital, with mass-market “Little Golden Books” my mother picked up cheap at dime stores and supermarkets. Visual memory can still summon up an image of the bindings, a kind of “gold” tape with black filigree design. And the tales inside the covers, watered down but brightly illustrated, became my framing devices for trying to navigate the world. I had my first encounter, around age 10, with a kind of tech. “Modern” furniture—folding chairs in molded plastic—appeared in the gym of my very conservative Catholic school. They felt incongruous, out of place, disorienting. Limited life experience: I’d only seen metal folding chairs before and couldn’t reconcile these being products of human hands. So I imagined them being machine-made. No idea where or how that might have happened, but kids are born magical thinkers and don’t get bogged down in details till later. And why not, if Rumpelstiltskin could spin gold from straw? Years later, studying the history of modern architecture, I recognized similar chairs. Aha! My early sighting proved to be knockoffs of originals, indeed human-made, in semi-utopian workshops—Bauhaus, Ray and Charles Eames’ studio—aimed at changing the world by giving us more aesthetically pleasing places to sit.

TECH TALES

Tech has its own fairytales, also adapted from what’s already known. “When new technologies are born,” most of us tend to think of the new in terms of the familiar, lacking a fully developed language unique to them and still drawing on the language of previous models (Gillian Crampton Smith). Bill Moggridge profiled fellow inventor/innovators from Silicon Valley, who shared “beginner mind” creation/origin narratives of trial-and-error attempts to form human-computer connections, often utilizing borrowed features like pull-down menus (developed at Xerox PARC), iterations of the mouse, etc. (Designing Interactions. MIT Press. 2007). Even the most transformational tech needs stories woven around it to “shape the way we envision our future, forming what we imagine as our prospective possibilities or limitations. And…shared imaginaries of the desirable or undesirable meaning of technology serve to shape its development and acceptance into society.” (Amir Vudka. The Golem in the age of artificial intelligence. NECSUS. July 6, 2020).

GOLEM TROPE TALES

Archetypal AI origin tales typically cite Jewish folklore. The Golem, situated between fairytale, sci fi and horror, was “fashioned [by a Prague rabbi] of clay to do human bidding,” and kept under human control to protect at-risk ghetto communities. But a shadow side reflects “deep rooted anxiety…concerning the prospect [tech could] slip…from human control…and eventually wreak havoc upon its human creators.” (Wikipedia). Mary Shelley’s Frankenstein and Stanley Kubrick’s rogue computer HAL in 2001: A Space Odyssey personify cultural and societal anxieties in times of massive change—the industrial and cyber revolutions. With GenAI on the rise, Golem themes resonate once again, in novels like The Golem of Brooklyn (Adam Mansbach. 2023). Current accounts seem to downplay the scary/horror aspects in favor of a lighter, playful, ironic, sci fi, hipster perspective, with “golems that can…learn and have agency…pass as human, seek romance, learn English by binge-watching Curb Your Enthusiasm, and set out on their own to seek revenge.” (Betsy Gomberg. Cognizant creations: Golems and artificial intelligence: How AI interacts with Jewish texts. Jewish Chicago. February 21, 2024). Does this tone indicate hipster overconfidence?

FRAMING FUTURES

Very smart folks living in earlier, highly challenging times had the foresight to develop models to help navigate futures they already sensed coming. In 1942, as WWII raged and victory was still uncertain, sci fi author Isaac Asimov formulated the Three Laws of Robotics: “1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.” And in post-war 1950, Alan Turing, “father of theoretical computer science,” responded to the raging paranoia of early Cold War days with what’s now known as the Turing Test. He called it “the Imitation Game.” How could humans tell if/when a machine equals human intelligence? Now, over 70 years later, two GenAI programs have finally passed it. One succeeded “by fooling a panel of judges into thinking that it was a human…through a combination of natural language processing, dialogue management, and social skills…” and “[i]n some cases, evaluators were unable to distinguish ChatGPT’s responses from those of a human.” (Damir Yalalov. ChatGPT Passes the Turing Test. MPost. December 8, 2022; updated January 20, 2023).
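The imitation game is, at bottom, a simple statistical protocol: judges see replies without knowing the source and guess “human” or “machine”; the machine passes when the judges do no better than coin-flipping. Here is a toy sketch of that structure, assuming nothing from the cited article—the reply pools, the naive keyword-based judge, and all names are invented for illustration.

```python
import random

def imitation_game(judge, human_replies, machine_replies, rounds, rng):
    """Toy Turing-test protocol: each round the judge sees one reply,
    drawn at random from a human or a machine, and guesses its source.
    Returns the judge's accuracy; ~0.5 would mean the machine passes."""
    correct = 0
    for _ in range(rounds):
        is_human = rng.random() < 0.5
        reply = rng.choice(human_replies if is_human else machine_replies)
        guess = judge(reply)  # True means "I think this is a human"
        correct += (guess == is_human)
    return correct / rounds

# Hypothetical reply pools; "ok" appears in both, so it carries no signal.
human_replies = ["ok", "lol no", "sure, why not"]
machine_replies = ["ok", "As an AI, I cannot say.", "Certainly! Here it is."]

# A naive judge that flags boilerplate phrasing as machine-like.
def judge(reply):
    return not (reply.startswith("As an AI") or reply.startswith("Certainly"))

rng = random.Random(1)
score = imitation_game(judge, human_replies, machine_replies, 10_000, rng)
print(round(score, 2))  # well above 0.5: this toy machine fails the test
```

The point of the sketch is the asymmetry Turing built in: the machine doesn’t have to be intelligent, only indistinguishable—once its giveaway phrasings disappear, the judge’s accuracy collapses toward chance.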

GenAI TALES

Generative AI (GenAI) builds on borrowed/appropriated stories, “learning the patterns and structures of its training data, and then us[ing] that knowledge to create new data. It can learn human language, programming languages, art, chemistry, biology, or any complex subject matter.” (Google AI Overview). Where do such “training materials” come from? The NY Times has filed suit claiming Times articles were used [without permission] “to train chatbots that compete with the newspaper.” (Lucia Moses. Crosby leads lawsuit against OpenAI and Microsoft for AI training data. Business Insider. October 24, 2024). User-generated Wikipedia has been another major source. Could AI eventually replace such human-produced sources? Unlikely so far, since it seems “Generative AI models need to train on human-produced data [including Wikipedia] to function. When trained on model-generated content, new models exhibit irreversible defects…” and in what’s known as “model collapse…synthetic training data breaks AI.” (Irving Wladawsky-Berger. Why Human Input Matters to Generative AI. Medium. October 24, 2023. Originally published in MIT Initiative on the Digital Economy).
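The mechanism behind model collapse can be seen in a toy simulation, assuming nothing beyond the general idea in the cited piece: a trivial “model” that learns word frequencies from a corpus and generates a new corpus by sampling from them. Rare words occasionally fail to be sampled, and once gone they can never come back, so training each generation on the previous generation’s output steadily narrows the vocabulary. All names here are illustrative.

```python
import random

def train_and_generate(corpus, size, rng):
    # "Train" by taking the corpus's empirical word frequencies, then
    # "generate" a synthetic corpus by sampling from that distribution.
    return [rng.choice(corpus) for _ in range(size)]

rng = random.Random(0)
vocab = [f"word{i}" for i in range(50)]
human_corpus = [rng.choice(vocab) for _ in range(100)]

corpus = human_corpus
for generation in range(50):  # each model trains on the last one's output
    corpus = train_and_generate(corpus, 100, rng)

print(len(set(human_corpus)), "distinct words in the human corpus")
print(len(set(corpus)), "distinct words left after 50 synthetic generations")
```

Run it and the second number is sharply smaller than the first: diversity only ever decreases, which is the “irreversible defect” in miniature. Real collapse involves far richer models, but the one-way loss of rare material is the same dynamic.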

MONEY TALES TALK

Tech euphoria tends to brush aside concerns and issues, too caught up in the aura of promise and magic, dollar signs, and showmanship. OpenAI, maker of ChatGPT, has raised $6.6B, increasing its valuation to $157B, and is transitioning from a non-profit research institute into a for-profit corporation. “[M]ore important than any algorithmic scaling law…might be the rhetorical scaling law; bold prediction [pitches to venture capitalists looking for the next “unicorn”] leading to lavish investment that requires a still-more-outlandish prediction, and so on.” (Matteo Wong. The AI Boom Has an Expiration Date. The Atlantic. October 17, 2024). Perhaps this “new and bullish wave of forecasts…reflect[s] a flurry of industry news”: mind-boggling price tags, immense energy requirements far beyond the capacity of existing grids, and the drag of development costs on major players’ stock values. And then there’s the conflict with efforts to reduce emissions and tame/reverse the effects of climate change.

FUTURE UTOPIA OR DYSTOPIA?

As my dad used to say, everything has its plusses and minuses. And “chances of a utopian or dystopian future with AI are uncertain and dependent on various factors, including how we develop, deploy, and regulate AI technologies…” and “like most technology, the outcome is dependent on whether a good or bad actor is utilizing it and how we have chosen to regulate and interact with it.” (Silver). So far, the record is problematic—online trolling, deep fakes, misinformation, massive hacking and data breaches of personal information. And cautionary tales reflect business models that seem to emphasize maximizing profits over human factors and often target vulnerable populations like adolescents and children. Kids seem to be exploited in a new version of unpaid child labor on social media and AI apps. Correlations have been found between online manipulation and bullying and increases in various forms of self-harm and teen suicide. Recent headlines: TikTok executives knew about app’s effect on teens, lawsuit documents allege (Bobby Allyn, Sylvia Goodman, Dara Kerr. NPR. October 11, 2024); The Age of AI Child Abuse Is Here (Caroline Mimbs Nyce. The Atlantic. October 18, 2024); US teen killed himself after being preyed on by a “hypersexualized” chatbot he allegedly said he was in love with (Joshua Thurston. The Times/Sunday Times. October 24, 2024). The boy, aged 14, had been diagnosed with Asperger’s syndrome. His mother is bringing a wrongful death suit. The company issued the usual generic statement of sympathy—but not responsibility: “We care about our users and their safety.” Such statements are often accompanied by appeals to freedom of speech and references to self-regulating forums. Saying users, rather than clients or even customers, suggests a disturbing degree of distancing.

INTERROGATING EXTERNALITIES

Digital tech has often been called a “clean” industry. But not so fast. Consider what economists call “externalities”: “consequence[s] of an activity…not reflected in [its] cost…and not primarily borne by those directly involved.” In fact, consequences are typically borne, early and late, by the surrounding environment and communities. I view this through the lens of my own work: a case study of the Love Canal hazardous waste site; living near the Little Valley nuclear waste site in upstate New York, where exposed workers had children born with genetic defects; running a Brownfields program to assess and clean up properties with environmental issues. And so I recognize three key issues—the “collateral damage” to kids, unsustainable energy requirements, and predictable waste generation and disposal—that can’t just be brushed aside. But a recent interview with a Google spokesperson (NPR 1A. October 22, 2024) raised concerns that they might still be. How would waste be handled if small nuclear reactors(?!) are used to make up energy shortfalls? The response struck me as empty spin: contractors (not the company directly?!) would follow regulations on “appropriate” handling. But what does that mean, assuming disposal will likely happen out of human view? Could AI perpetuate more of the same, become déjà vu all over again, à la Yogi Berra?

BALANCING CONTROL?

So, avenues are opening for greater regulation of the tech industry. Regulation, though often too little, too late, and inadequately enforced, is at least a place to start in requiring greater accountability, transparency, and responsibility. Tech billionaires won’t like it and will fight against it. It’s hard to surrender the role of new Gilded Age robber barons bestriding the age. But even admitting that society benefits from such risk-taking entrepreneurs, times have changed. The information environment they’ve created also means they can’t act with impunity, as their predecessors did. And so we might have hope of establishing greater balance.

DOWNSIZING?

And if we’re weaving an AI story, perhaps it’s time to start treating tech moguls with less reverence. Stop assuming that they know best, or at least better, in all spheres because they’ve been successful in one. And perhaps we could add a bit of humor, pop their gilded bubbles a bit. I take heart from hearing the comedian Fred Armisen talk about playing dictators, probably a not dissimilar self-impressed population: the pathetic childlike playacting, the uniforms, the medals, the thin-skinned unwillingness to take criticism, the murderous rages/tantrums if crossed or having their views contradicted. Laughter seems like a good place to start to reintroduce the human touch, as the Marx Brothers and Charlie Chaplin showed us years ago.
