AI Won’t Change As Much As You Think
Smart people, like Tyler Cowen, make dubious claims about AI
Before I get into it, will you please subscribe to my new podcast? On Apple or Spotify?
Okay then.
As a temperamental conservative with Luddite leanings, I tend to be in denial about the effects of technology until it is way too late. I was editing a print weekly from 2004 to 2006 and had no idea that the internet was coming for us, with Craigslist already devouring our classified-ad dollars. Anything I say about artificial intelligence (AI) should be taken with a heaping grain of kosher salt (which, by the way, is the same as regular table salt, no matter what the fancy chefs and the Food Network would like you to believe—see here).
But reading this dialogue in The Free Press about AI, I felt I had to respond. The participants are Tyler Cowen, the economist whose nerdy podcast is one of the very few podcasts I listen to regularly, and Avital Balwit, chief of staff to the CEO of Anthropic. They are clearly very smart people, and I take what they say seriously. But even though they know a gajillion times more about AI than I ever will, I think I can spot the flaws in their arguments. I can sum it up this way: I think AI will change the way we work, somewhat, while they both seem to agree AI will change what it means to be human—will change the way we are.
They believe that human nature is malleable, that human contentment could see great gains (or losses) depending on how we relate to this software. I think this is profoundly mistaken.
I think the best way I can take apart what they are saying is to give some quotations from their dialogue and offer some replies. This used to be known as “fisking,” though I don’t know when I last saw that term. Here I go, fisking away!
In their introduction, they write:
We stand at the threshold of perhaps the most profound identity crisis humanity has ever faced. As AI systems increasingly match or exceed our cognitive abilities, we’re witnessing the twilight of human intellectual supremacy—a position we’ve held unchallenged for our entire existence. This transformation won’t arrive in some distant future; it’s unfolding now, reshaping not just our economy but our very understanding of what it means to be human beings.
More profound identity crisis than when, with the nuclear age, we had a new ability to destroy the earth? More profound than when, with the Copernican revolution, we realized we were not the center of the universe? More profound than what philosopher Charles Taylor would call the disenchantment of the world, the time, after the Middle Ages, when we ceased to see God all around us?
Maybe so. But let’s not be too glib about this, shall we?
Both of us have an intense conviction that this technology can usher in an age of human flourishing the likes of which we have never seen before. But we are equally convinced that progress will usher in a crisis about what it is to be human at all.
Heck, I’ll read on, maybe you can persuade me. But there is absolutely no reason to believe that software will bring in more human flourishing than ever before. Have machines ever made us happier, more fulfilled, more purposive? Sometimes, in little bits. But right now we are seeing the backwash of software and machines—smartphones, social media, etc.—that have set back human flourishing by decades or more. Our children are sadder and more anxious; as adults we are less connected. It could take us a long time to get back to the average level of flourishing (as if these things could be measured) of, say, 1996.
And I’m not sure why we’d have a crisis about what it means to be human. We all know what it is to be human. Having machines around us that can pass the Turing test won’t change that, any more than Deep Blue beating people at chess threw us into a collective identity crisis.
Machines have been outdoing us for a while. Cars are faster than people. Calculators compute faster. Boats move faster through water. People are smart, so we build machines that seem “smarter” but of course rely on humans to exist at all. This is nothing new.
Our children and grandchildren will face a profound challenge: how to live meaningful lives in a world where they are no longer the smartest and most capable entities in it.
Right. We haven’t been the smartest for a while. In some ways, not since the invention of the abacus.
Balwit writes:
It’s not just an issue at work. Claude [the AI] has injected itself into my home life, too. My partner is brilliant—it’s a huge reason we are together. But now, sometimes when I have a tough question, I’ll think, Should I ask my partner, or the model? And sometimes I choose the model. It’s eerie and uncomfortable to see this tool move into a domain that used to be filled by someone I love.
I too have a smart partner, my wife. But it’s hard for me to imagine what part of our household conversation would be replaced by ChatGPT or Claude or Bob or Jim the AI or whatever. What exactly is this dialogue that could be with the AI or with the lover/spouse/“partner”? The kinds of questions I ask my wife include:
• “Do you want to take the kids to soccer today, or shall I?”
• “Shall we watch Hacks or The Studio tonight?”
• “Why don’t you love sprinkles the way I love sprinkles?”
That sort of thing. Which of these might I offload to Claude? Who are the people whose domestic lives include conversations that could be handled by an AI?
We typically give young people advice to “try to find a room where you are not the smartest person in it.” But any room with a laptop or mobile device now satisfies this criterion.
I think this is good advice, but I can’t imagine giving it to any young person I care about. My advice runs more to “Be kind” and “Don’t have social media” and “Find a job you love” and “No unusual piercings.”
Since joining Anthropic, I had been begging my mom to try Claude. She had casually used it a few times, but claimed it didn’t help much with her work. Then she had her first serious interaction with Claude—she asked about avoiding isolation after retiring from teaching. The response made her tear up multiple times. Something about its empathy—the listening words “ah” and “oh”; how it reassured her that her fears were understandable; the way it called her “thoughtful”; and its assertion that her words “resonated” both moved her deeply and unsettled her. It also just gave solid advice.
As she put it, we all want that kind of wise friend who will lean in and really listen, someone infinitely patient and nonjudgmental who can help us process our anxieties while nudging us gently toward greater wisdom. The fact that Claude outperformed her human circle at providing this kind of support left her with a profound sense of grief, even as she found herself immediately attributing personhood to it, catching herself using “he” instead of “it” when describing their conversation.
The AI made her tear up? The AI outperformed her human circle in empathy and really listening? I don’t mean to sound uncompassionate here—to the contrary, I say this with compassion—but this seems to suggest the poverty of her human relations, not the impressiveness of AI. I have (or think I have) a circle of human friends who could talk to me about retirement as empathetically as AI and give me hugs. Plus, I have a dog. Two dogs, actually. And a cat.
I don’t deny that a computer listening bot, stocked with AI instructions, may be better than nothing. But I’m not persuaded it’s better than real people. The right kind of real people, anyway. If we have such profound levels of human loneliness out there, then solve for that problem—do not ask AI to fill the void.
Cowen writes:
I am a professional economist, but there are plenty of things I don’t know. In the world before AI, I would turn to libraries, the internet, and also my colleagues. These days, I go to the top AI models. And they can answer almost all of my questions. I can try to make the questions artificially hard, or “out of distribution,” to use a more technical term, but when it comes to questions I might ask in the natural course of events, the top models have good answers. I cannot really stump them. I also am relieved that I don’t have to query my colleagues nearly as often.
Another way to put the point is that the top models are better economists than I am. Or at least they are along one medium, namely answering questions.
It seems to me that what Cowen is saying here is that AI finds facts, and sorts and solves quantitative problems, faster than he could, faster even than he and his friends could. But that is what technology does, much of the time: what humans do, but faster. That is what the airplane does: get you somewhere faster. The telephone gets your voice to people faster than the mail, which gets it there faster than traveling a great distance by foot to talk to someone.
To a great extent, technology is the overthrow of distance. And has been since we made boats, and then steam ships, and telegraphs, and so forth. Tech saves time and overthrows distance. Nothing new there. That’s a lot, but it’s also not very much. (Hat tip to my old teacher David Gelernter, who made this point in his undergrad comp-sci class that I took in Spring 1996.)
… most humans will be working with AI every day in their jobs. The AI will know most things about the job better than the humans. Every single workday, or maybe even every single hour, you will be reminded that you are doing the directing and the “filler” tasks the AI cannot, but it is doing most of the real thinking.
We don’t doubt that many people will be fine with that—and, in many cases, relieved to have so much of the intellectual burden removed from their shoulders. Still, for a society or civilization as a whole there is a critical and indeed quite large subclass of people who take pride in their brains.
What is to become of them and their sense of purpose?
I mean, I get it. Being a farmer became less meaningful as agricultural equipment replaced human knowledge and skill. Some jobs become outmoded, and the dislocation is sad. And, I’d argue, often not worth it; I’d rather have fewer, more expensive, better-made, hand-made clothes than the profusion of junk that cheap fast fashion gives us. I am very aware of and sensitive to the cost of dislocation.
But again, it’s not new.
The drive to find purpose in our work, and the elusiveness of that purpose for many people, is not new. Civilization does have to adjust. Right now, many people who derived purpose from manufacturing have had to make the mental shift to finding purpose in service industries; we need fewer machinists and more nurses. I get that. I also get why it’s hard. But I am not sure why AI will qualitatively change this fact of human existence that’s as old as our shift away from subsistence farming.
The big story in the short-term is this: Blue-collar workers (carpenters, gardeners, handymen, and others who do physical labor) will become more valuable. And white-collar knowledge jobs, many of which are already near or under the waterline, like legal research and business consulting, will diminish in value. Ultimately, though, the people who will benefit most are those who use AI heavily to generate projects or guide them in their work.
This sounds right, but one thing to keep in mind is that even mind-numbing legal research, done by oppressed recent law school grads, has to be guaranteed by human beings. If an AI messes up some key citations, there is nobody to hold accountable, nobody to fire; one reason you’d have a human being do it is to create an incentive structure and an accountability structure. Otherwise, you are kicking the can up the ladder: do you fire the programmer of the AI? If the AI still needs a programmer with enough legal knowledge to be held accountable … well, doesn’t that sound like a recent law-school grad?

There will also be new jobs. Many of these are hard to predict, but a few things seem likely. One is the energy sector, as strong AI will place significant demands on the U.S. energy infrastructure. Another area will be biomedicine, where the pace of scientific progress will dramatically accelerate. (The advent of more drugs and devices will mean more testing, more regulatory approval, and so on.) It also might be possible that the scientific advancements resulting from AI, such as cheaper water desalination, allow us to terraform more parts of the globe.
Right on.
For these reasons, more jobs are likely to be outside rather than at a screen, compared to the status quo. And that may well be a good thing.
That too seems cool.
There also will be more coaches, mentors, and individuals who supply the services of inspiration.
Er, why? Why will the need for these people go up? And is “mentor” a job? Isn’t mentoring an organic byproduct of a close relationship between someone more experienced and her less experienced friend or colleague?
This is where these two seem to have some conceptual deficits in their AI thinking—they think of “work” differently from how I do.
The AIs may be better therapists than humans, but lots of times we still desire human contact in our interactions, if only to satisfy our biological programming. The number of entertainers and comedians is likely to rise as well. (Maybe the AI can write funnier jokes, but most of us do not want to hear a robot deliver them.) For related reasons, we also can expect more humans to work as greeters, charmers, and flatterers.
Wait, why? Why will the need for greeters go up? Right now, we need greeters at restaurants, hotels, clubs, Wal-Mart. We’ll still need them. Where else will we need them?
So far, I am not persuaded work will change much at all.
So, will leisure time still fulfill us if we are working far less?
To live a happy and fulfilling life without work, one needs meaningful pursuits, social connection, and physical movement. We don’t expect any of those needs to go away in a world of AI ascendance. Indeed, those needs will likely become more intense.
Group hobbies and projects will proliferate, leading to a growth in sports, clubs, volunteering, religious services, walking, biking, and chasing your kids around, too.
But in the past century or so, technological advances have not led to less work; we keep working the same hours, but with more productivity. Indeed, some very low-tech societies have more leisure time. (Remember, the pre-tech Hawaiians invented surfing.)
And am I the only one who suspects that the spread of ChatGPT will not lead to a “growth in sports, clubs, volunteering”? Right now, the dearth in volunteering and club sports—the fact that we are in an era of “bowling alone,” to use Putnam’s phrase—is not because we don’t have enough good software! If anything, the AI-like software we do have has created those problems! Social media algorithms and smartphones make us more isolated; does it stand to reason that the spread of general AI is just the thing we need to reverse that trend?

Ideally, we’ll compete on our virtues, but maybe we’ll compete even harder on our looks, our physical strength, or how much AI systems like us—or some dimension as yet unfathomable. Tech founder Avi Schiffmann suggests we will compete along the quality and interestingness of our AI companions. (Perhaps eventually the AIs may compete along the dimension of how interesting their human companions are.)
This is the idea that with intelligence now the domain of machines, we’ll have to find new sources of status. But this is madness. Status right now is not really linked to intelligence at all. It’s linked to money, for sure, at least for some people. For others, it’s linked to amorphous coolness “it” factors, or charm, or looks. For many people, it’s already linked to virtue. And of course for the truly evolved, status is a mirage; we’re all created in the image of God, all have equal worth, all should be trying not to arrange people in hierarchies.
You can debate the pace at which this will happen, but AI is going to extend our lives, the result of radical biomedical advances. A typical 26-year-old (Avital’s age) is likely to benefit significantly from having a longer life, a healthier life, and one less likely to be plagued by dementia or other maladies of the brain. Ten or 15 extra years seems likely, and for many people, we may do much better yet.
They think this is the most profound change. It strikes me as the least profound. Has human meaning and purpose changed much as our life expectancy has gone from 50 or 60 to close to 80? I mean … not really! We have the same problems, choices, demands, struggles.
I don’t want to suggest that life expectancy plays no role in human meaning. When a small cut could, if infected, mean death, or when the majority of babies did not survive to adulthood, the stakes must have seemed higher all the time (or lower? if you didn’t expect any one baby to survive, your sense of attachment to children must have been different). So, okay, life expectancy matters. But will going from an expectancy of 77 in the United States to one of, say, 90 matter that much?
There are interesting ways to look at this, because we all experience changes in life expectancy every day. At age 20, a male’s life expectancy in the U.S. is 74; at age 50, my life expectancy is 78; if I make it to 60, my L.E. will be 80. The longer we live, the longer we can expect to live. Does that change how we live, what we feel about living? Not really?
I don’t mean to suggest that AI won’t change the world. It will. But the central human projects of meaning-making, finding purpose, finding meaningful ways to spend our time (including work), finding love—there is no reason to think those won’t be our main projects ten years from now, or twenty. No matter what Claude does.