The subject line of an email I received from The Spectator World in July asked the question.
In the tech and financial worlds, AI (artificial intelligence) has been the topic du jour, the “next big thing.” Whether it will cause as big an upheaval in the way we live as the Internet (and personal computers and smartphones, which provide almost universal access to it) remains to be seen, but billions of dollars are being staked on the belief that it will.
Nevertheless, while the venture capitalists of Silicon Valley are getting almost giddy about AI’s prospects, a strong anti-technology current has arisen. This dynamic has old roots—the Luddites of the early 19th century rioting against the power loom—but this time the fears are much greater. They have spread to broader strata of society, including to the technology world itself. Recent advances in AI, such as ChatGPT-4, which can mimic a human interlocutor, have fed fears that sentient artificial beings could successfully contend with humans for control of the world.
Whether such seemingly far-fetched scenarios are plausible enough to worry about is not a question that will be settled any time soon. How a computer or network of computers would go from being a very complex and efficient, but not self-aware, collection of electronic hardware controlled by software to a self-conscious “brain” is not at all clear. Some theorists claim that various new characteristics, including self-consciousness, can “emerge” from the network’s very complexity; others regard this as hand-waving rather than an explanation of anything.
Meanwhile, we have the example of an “advance” in information technology that has arguably created serious problems that have nothing to do with our technological inventions outsmarting us. I refer to the baneful effects (political and psychological) that many believe are due to the prevalence of social media. Clearly, social media does not have the scary capabilities of Skynet, the self-aware defense network in the Terminator movies that almost destroyed humanity. Platforms like Facebook, X, and Reddit make use of algorithms, but those algorithms are not intelligent, let alone self-aware; the platforms are not connected to machines, let alone ballistic missiles, that can do physical damage. Rather—to speak as generally as possible—social media does three things:
- It allows us to communicate with each other in a new way: without physical contact, regardless of distance, and, perhaps most importantly, anonymously.
- It allows us to find and communicate with people who share our interests or opinions but of whose existence we might otherwise never have become aware, regardless of their physical location.
- It “helps” us navigate the ocean of available content by means of algorithms that highlight certain messages for us at the expense of other messages.
None of these phenomena is entirely new. The telephone allows us to communicate regardless of distance. It can also allow the caller (perhaps with the aid of a voice distorter) to be anonymous. Similarly, there have always been clubs that allow people with an interest in, for example, bird-watching to find each other. In the traditional print and broadcast media, “gatekeepers” are even more powerful in directing our attention to certain items while not even mentioning others. However, the combination of these characteristics has certainly changed the way we communicate with each other, and many argue that the effects have been major, perhaps catastrophic.
Social media’s effects are widespread and complex.
Perhaps not coincidentally, as the optimism surrounding globalization has faded, so also has the benign view of social media, which is now held responsible for a host of societal ills. The content of social media’s “charge sheet” varies widely among commentators, but typically includes:
- It facilitates living in a “bubble,” that is, being exposed only to people who share some particular opinion. One can find fellow believers more easily, wherever they may live. Constant contact with them may strengthen one’s belief in the particular opinion and enable one to “block out” contrary views. Thus, one is not led to reconsider one’s opinions.
- Things that one might be reluctant to say to someone’s face are easier to say online. “Flame wars”—encouraged by anonymity and the absence of a physically present interlocutor who could punch you in the nose if he got angry enough—can continue without repercussions.
- “Virtual” communities of online acquaintances may come to replace “real” communities, such as family or neighbors. These connect people, but do so partially (“virtually,” as we say), on the basis of a single interest or opinion. Such virtual communities rarely provide the kind of connection that exists among neighbors that allows them to act on a communal and voluntary basis, in the manner celebrated by Tocqueville’s Democracy in America.
While seeming to connect us with each other, social media may have the opposite effect. Social media gives us a simulacrum of social connection, but without prodding us to learn how to get along with other people. That includes especially those with different opinions, with irritating habits and quirks, and with all the other sources of friction that exist in an actual, as opposed to a virtual, community.
These phenomena are summed up by the notion of a decline in American “social capital.” Even before the advent of social media, observers were noting a decline in the healthy patterns of communal feeling and trust among members of society. Most observers believe that the decline has only continued, and that social media are at least in part to blame.
The Surgeon General’s 2023 report, “Our Epidemic of Loneliness and Isolation,” says the following about the effects of social media:
… the existing evidence illustrates that we have reason to be concerned about the impact of some kinds of technology use on our relationships, our degree of social connection, and our health. …
Technology can also distract us and occupy our mental bandwidth, make us feel worse about ourselves or our relationships, and diminish our ability to connect deeply with others. Some technology fans the flames of marginalization and discrimination, bullying, and other forms of severe social negativity.
None of this is conclusive, of course, and the effect of social media on society at large remains an important question for research and reflection. Not to mention that communications technology includes many things other than social media.
With this background, what can we say about the addition of AI to the mix? In discussions about its possible dangers, AI is often conflated with the notion of “superintelligence,” that is, the acquisition by computers (or networks of computers or similar machines) of an intelligence that surpasses that of human beings. The scenario has been defined by a researcher in the field as follows:
Superintelligence is when you get to human level and then keep going—smarter, faster, better able to invent new science and new technologies, and able to outwit humans. It isn’t just what we think of as intellectual domains, but also things like predicting people, manipulating people, and social skills. Charisma is processed in the brain, not in the kidneys. Just the same way that humans are better than chimpanzees at practically everything.
The prospect that we are creating a set of beings smarter than we are provokes the sort of fear epitomized by the Terminator scenario. However, the harms with which social media have been charged have nothing to do with any kind of “superintelligence.” Rather, they affect the way in which humans interact with each other. Much of the criticism comes down to the fact that social media can not only augment the ability of “real life” friends and family members to communicate with each other, but can also facilitate, and perhaps cause, the substitution of online interactions for “real life” ones.
The latest AI sensation, ChatGPT-4, impresses us less with “superintelligence” (although it does “know” more facts than anyone could hold in memory) than with its ability to simulate a human being. (We have for several years been familiar with systems such as Siri and Alexa, which are much less humanlike.) An extended conversation with ChatGPT-4 is at times eerily similar to a discussion with another human being, but not quite identical. A person who responded exactly as the bot does would seem strange, perhaps even on the autism spectrum.
Nevertheless, this is probably just a case of growing pains. More advanced versions of the bot may seem (and sound) much more normal. Indeed, many of the early business applications of AI may be in places like telephone service centers, where a bot will answer calls, resolve many of the routine problems, and pass the more complicated cases on to humans. Ideally, the customer would not notice the difference.
If this becomes a major use of AI, what might be some of the “unintended consequences” that we can expect? Some of the consequences of social media usage provide some hints. A paradox of social media is that it promised people a way to keep in touch with each other but was coincident with a rise in loneliness. In essence, the complaint is that while social media can offer a simulacrum of social connection, it falls short of the real thing. Nevertheless, it is able to absorb much of the time and effort that might otherwise have gone into the creation of real social connection and social capital.
If this analysis captures at least part of the problem, then it is easy to see how the use of AI may cause great damage by providing lonely people with an “artificial” interlocutor that is even further removed from real social connection than an online “friendship” in a chatroom or on a platform like Facebook.
One way to approach this would be to look at the phenomenon of ELIZA, a program developed by Joseph Weizenbaum in the mid-1960s to mimic human communication (to gain experience in what might be necessary to pass the “Turing test,” that is, to fool its interlocutor into thinking it was a human being rather than a computer program). The available resources then, in terms of hardware and software, were meager in comparison with what is available now. Nevertheless, one of its scripts, which sought to mimic a “Rogerian” (non-directive) psychotherapist, attained a certain notoriety for its eerie ability to imitate a real therapist. Some users developed the strong sense that they were exchanging typed messages with a human being rather than with a computer. Weizenbaum wrote in 1966: “Some subjects [experimental users of the system] have been hard to convince that ELIZA (with its present script) is not human.”
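What made ELIZA remarkable is how little machinery lay behind the illusion: keyword pattern matching plus pronoun “reflection,” with no understanding at all. A minimal sketch in Python conveys the idea; the handful of rules here are illustrative inventions, not Weizenbaum’s actual script.

```python
import re

# Reflection table: swap first- and second-person words so the
# echoed fragment reads naturally ("I am lonely" -> "you ... lonely").
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# Keyword rules, tried in order; "(.*)" is the non-committal fallback
# characteristic of a non-directive therapist.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns and person in an echoed fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence: str) -> str:
    """Return the response of the first rule whose pattern matches."""
    text = sentence.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

Even a few such rules can sustain a surprisingly conversational exchange—for example, `respond("I am lonely")` yields “How long have you been lonely?”—which is precisely the effect that unnerved Weizenbaum about his users’ reactions.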
Presumably, using current hardware and software techniques, a much better version could be produced now: it might be even harder to convince users that it was not human (especially if it were possible to converse with the bot via speech rather than typed messages). This might turn out to be a blessing: such “therapy” would be more readily available to a larger number of people, and certainly a lot cheaper to provide. One could imagine, for example, that a suicide “hot line,” if it were unable to find a sufficient number of volunteers to handle all its calls, could employ a mixed system in which the AI bot answers the call when no volunteers are available, and passes the call on to a human as soon as one is.
Similarly, a psychotherapeutic practice might offer various levels of therapy employing bots. One could imagine a system in which the patient has regular sessions with the bot and a monthly session with a human therapist (who would have access to the bot’s summary of its conversations with the patient). Could there ultimately be a therapy bot that could treat patients autonomously? Would it be authorized to write prescriptions?
Ultimately, it is easy to imagine much more questionable uses of AI bots. Will lonely people turn to bots for companionship and conversation (sexually oriented or otherwise)? Whatever the business model of the companies providing these bots (whether their revenues come from subscriptions or from advertising), the financial incentives will be to attract as many users as possible and to keep them online for as long as possible. (If the business model depends on advertising, an additional incentive would be to extract as much information about the user as possible; advertisers will pay more to reach people the evidence suggests will purchase their product or service.)
What additional kinds of problems could this create? Here we seem to be entering the realm of science fiction; but the example of social media can perhaps point us in the right direction, since the operative financial incentives are similar:
- Conversation bots may emphasize extreme or sensational political or other views; those that resonate with the user may be repeated and exaggerated. (Social media algorithms are often blamed for highlighting sensational information on the grounds that it enhances user engagement.)
- Conversation bots would always be willing to talk about what the user wishes, unlike human interlocutors, who will have things on their minds that they wish to discuss.
- Conversation bots will be careful about disagreeing with the user on any topic, unless they have reason to believe such disagreement would be welcome. (As noted, social media facilitates users’ communication with those who share their views and helps them block out those who do not. Social media algorithms can encourage this phenomenon as part of their general approach of figuring out what users want, and then giving it to them.)
In short, conversation bots could give the user the illusion of a social connection without any of the bother of dealing with an actual human being. Whether users would find this fulfilling, or whether it would only make them lonelier and more depressed in the long run, remains to be seen.
But it seems likely that one of the major societal effects of AI will be to complete the process—discussed in Robert Putnam’s Bowling Alone—of isolating individuals more and more from the common life of their communities and society, and, in doing so, to further degrade the qualities of American life that Tocqueville found so attractive in Democracy in America.
Abram N. Shulsky is a senior fellow at Hudson Institute.