Baby Girl started last week with the flu. She ended the week walking it out. Life is coming at us fast.
A disclaimer: This post felt like a lot. Too many ideas to work through. So many strands I wanted to pull. Perhaps I should have just used ChatGPT. But, that would have also proved the point I make below. (Or, at least try to make.) Curious what you think.
With that, let’s begin.
When I was born in 1973, technology was accelerating. According to the Computer History Museum, “wireless, packet-switched digital networks, including the kind your mobile phone uses today,” were being developed. Xerox PARC was starting to link “Ethernets with other networks,” giving rise to “internetworking” or “internetting.” A few years later, the internet was born.
Over the last 50 years, these advances, along with so many others, shaped my life in ways I will never fully understand. Even though I’ve never worked in the technology sector, I thrived through a technological revolution without fully realizing I was living through a technological revolution.
I was incredibly fortunate. I worry Anisa’s generation will not be.
Weeks away from her first birthday, Anisa, to paraphrase Neil Gaiman, is as old as her tongue, older than her teeth, and just a month younger than the public release of ChatGPT. Hers is a future shaped by artificial intelligence — a technological revolution that will be impossible to avoid.
Before we go any further, let’s acknowledge large language models and artificial intelligence have been with us for a while. We just didn’t know it. As a senior expert in the field said a couple of months ago, “Once there is a name for the artificial intelligence tool, we stop calling it artificial intelligence.”
AI Gilfoyle makes this point quite well:
It is hard to be wildly optimistic about a more efficient and productive future when I think about how actual human beings will be affected. We don’t really know how AI will increase disinformation and fraud, restructure labor markets or completely realign our relationship with data.
Which is what people are worried about. “The majority of Americans,” Data and Society’s Executive Director, Janet Haven, recently told members of Congress, “are concerned about issues like how employers use AI to hire and manage workers, how healthcare providers’ use of AI may degrade patient outcomes, and how AI’s widespread deployment will affect their privacy and freedoms.”
“Federal lawmakers,” Haven said, “should respond to the urgent concerns of their constituents rather than to unverifiable risks.”
The European Union is closing in on legislation that would require companies that make AI tools to “provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases.”
Which is all fine. Even if all of these policy efforts change (and change again) as the technology evolves.
See, I am actually more interested in (and terrified of) how AI changes society beyond the reach of regulation. How do the generations of the future develop, execute and improve ideas when ideas become the purview of technology - and the very few people who (hopefully) control it?
In 2017, James Davison Hunter referenced the work of John Patrick Diggins to point out that, “Authority represents the intellectual expression of power, and the problem of authority remains the problem of men and women of ideas.” Ideas are born of data, knowledge and understanding. One builds on the other. And each step is put at risk by the ability of AI to short circuit the process. In the future, women and men of ideas may be few and far between. Who has authority?
Put another way, “Science has never been faster than it is today,” Matteo Wong writes in The Atlantic. “But the introduction of AI is also, in some ways, making science less human.”
So, is AI the death of ideas? Or, as a techno-optimist would likely claim, will AI make ideas better?
Wong makes the case that, “For centuries, knowledge of the world has been rooted in observing and explaining it. Many of today’s AI models twist this endeavor, providing answers without justifications and leading scientists to study their own algorithms as much as they study nature. In doing so, AI may be challenging the very nature of discovery.” How do we make decisions if we can’t discover new ideas?
Which brings me to John Steinbeck’s 1962 Nobel Prize in Literature speech. As some of you know, I grew up in Salinas and spent many hours in the John Steinbeck Library. So, every time I run across his speech, I like being reminded of Steinbeck’s take on the “high duties and the responsibilities of the makers of literature.” (This Substack is most certainly not living up to those high duties and responsibilities.)
This time, my attention turned to the closing section where he reflected on the life of Alfred Nobel, who “perfected the release of explosive forces, capable of creative good or of destructive evil, but lacking choice, ungoverned by conscience or judgment.”
I do not believe AI will become a destructive force in the world. But the sheer power of the technologies at our disposal means, as Steinbeck said, “Having taken Godlike power, we must seek in ourselves for the responsibility and the wisdom we once prayed some deity might have.”
The generations to come that will battle on a narrowing field of ideas have “become our greatest hazard and our only hope.”
No pressure, Baby Girl.
No pressure.
Holiday Beverage
I’m not much of a mixologist, but I am quite pleased with a new (AI-free) discovery:
One part gin (preferably on the dry side)
Three-quarters part Cynar (h/t to the artichoke capital of the world, Castroville)
A splash of St. Germain
Garnish with an orange slice (or peel)
As a friend’s review put it, “Nice and bitter with a sweet finish.”
Let’s call it the Cranky Dad.
What I’m Reading
I still can’t find the words to describe how the Israel-Gaza conflict is impacting me. Much less how it is all playing out in communities and on campuses. But, a few pieces I think are worthy of your time:
Yascha Mounk: “The Universities That Don’t Understand Academic Freedom”
Danielle Allen: “We’ve lost our way on campus. Here’s how we can find our way back.”
David French: “What the University Presidents Got Right and Wrong About Antisemitic Speech”