Fifteen Eighty Four

Academic perspectives from Cambridge University Press

18 Jan 2023

Brave New World: Political Philosophy and AI

Mathias Risse

“I know a person when I talk to it.” With these words Google engineer Blake Lemoine made headlines in June 2022, thinking that a Google chatbot had become sentient. Google did not appreciate these headlines, and Lemoine was fired. But what is remarkable about this incident is that, as of 2022, someone in the industry would go on record as saying that, in their view, conscious artificial intelligence had arrived.

Opinions vary enormously on how fast artificial intelligence (AI) will develop, how it will compare to human intelligence once it is further along, and what its impact on human life will be overall. The story up to this point is already rather breathtaking. It was only in the 1930s that breakthroughs in mathematics and in hardware engineering made electronic computation possible. The term “artificial intelligence” debuted in 1956, when a few talented computer science pioneers set out to program machines to imitate natural intelligence. The term “intelligence explosion” first appeared in an article published in 1965.

Things did not evolve as quickly as the pioneers had thought (and for certain intermediate periods the term “AI winter” was used). But now the production of AI models appears to be moving into its own kind of industrial age, far beyond earlier stages when these models were more artisanal and speculative. These advances draw on breakthroughs from around 2010, when computers became powerful enough to run enormously large machine-learning models and the internet started to provide the humongous amounts of training data such algorithms require to go through their learning process. Since then, conceptual breakthroughs in programming have led to the creation of ever more complex and sophisticated software. Meanwhile, the supercomputers required to enable the most advanced AI models to unfold their full power have become so expensive that, short of well-funded governmental AI strategies in the wealthiest countries, the field is likely to end up dominated by the research agendas of private companies with substantial resources.

We do not know where AI will take us, but we do need to prepare ourselves for what might come. We are still operating with roughly the same kind of brain as our ancestors did thousands of years ago, and we coordinate our actions with institutions that often go back centuries. But we are also wielding 21st-century technologies that could dramatically alter human life within a few decades, and that entail non-trivial existential risks of putting an end to it altogether. What we do with these technologies is therefore one of the most central (if not the central) political questions of our time. Some communities have long seen things this way; here one might think of the Amish in the U.S., who famously reject many of the technological comforts of modern life. These choices reflect their determination not to lose control over their lives by leaving the pace and nature of change to private-sector innovation. Some schools of political philosophy, too, have long made technology central, prominently the Marxist tradition and certain strands of phenomenology, especially those indebted to Heidegger.

And yet, for the most part, political philosophy/theory and the philosophy of technology are disconnected fields. Mainstream liberal philosophers (and this is the camp where I see myself), in particular, do not normally discuss technology explicitly, let alone make it central. Instead they tend to assume that themes around technology are somehow covered derivatively. In the philosophy of technology, by contrast, scholars read authors like Jacques Ellul, Don Ihde, Langdon Winner, or Andrew Feenberg, and take note of authors in the Science, Technology, and Society (STS) tradition, such as Bruno Latour, Sheila Jasanoff, or Wiebe Bijker. But it is not uncommon for people to earn PhDs in political thought without ever engaging with such authors.

Philosophically speaking, we are dramatically underprepared to deal with many of the questions that confront us around the possibility of conscious AI. Disagreements abound in many areas that matter for charting this new terrain, from the philosophy of mind to ethics. Seemingly arcane seminar-room topics like the Trolley Problem take on new life once seen in light of the possible arrival of artificial intelligence. This book seeks to help set an agenda in a new domain of inquiry where things have been moving fast, an agenda that brings debates that have long preoccupied political thinkers into the era of AI and Big Data (and possibly the age of the “singularity,” the intelligence explosion). Some topics covered here are genuinely new, but others continue older debates – though often in ways that call for breaking down the boundaries as political thought has traditionally drawn them. The advent of AI requires that the relationship among various traditions of political thought be reassessed, and all such traditions must fully integrate the philosophy of technology. Technological advancement will continue for the time being, one way or another, if only because of geopolitical rivalry. The task for political thought, therefore, is to address the topics that are likely to come our way and to distinguish among the various timeframes in which they might do so.

AI changes how collective decision-making unfolds and what its human participants are like, and so we need to investigate how to design AI to harness the public sphere, political power, and economic power for democratic purposes. New kinds of rights will be needed to protect individuals as knowers and knowns. Deepfake technology is already upon us, and that is only one context in which a new generation of epistemic rights will need to be added to the canon of human rights. Such rights can help to make sure that technology is deployed to unleash human creativity, rather than inflict epistemic wrongs of any sort. Surveillance capitalism threatens the Enlightenment ideal of individuality itself, and a set of rights will not be enough to articulate a promising normative vision for society. Instead, more structural changes driven by considerations of justice are needed to secure a promising digital future for everyone.

As far as ownership of data is concerned, the current default is that data are controlled by whoever gathers them. But the default should be that collectively generated patterns are collectively controlled, in ways that would allow for individual claims, liberties, powers, and protections to be sorted out as a next step. And eventually we might have to come to terms with the fact that artificial intelligence is indeed conscious, and rethink our moral and political lives in response to that development. This possibility might only materialize in decades or centuries, or possibly not at all. But what is clear is that we should not wait to start thinking about these matters – including as a matter of philosophical inquiry – until it is clear that things will go that way.

Political Theory of the Digital Age by Mathias Risse

About the Author

Mathias Risse

Mathias Risse is the Berthold Beitz Professor in Human Rights, Global Affairs, and Philosophy at Harvard University. He is the author of On Global Justice (2012), On Justice (2020)...
