"Common sense is the great difference between human and artificial intelligence"
Daniel Innerarity, professor of Political and Social Philosophy and author of 'Una teoría crítica de la inteligencia artificial'
Invited by the UOC-DIGIT research centre at the Universitat Oberta de Catalunya (UOC), philosopher Daniel Innerarity talks in detail about his latest book, in which he warns of "digital hysteria" and explains why traditional political processes cannot be replaced by a system based on the recommendations of artificial intelligence. Published this year, Una teoría crítica de la inteligencia artificial [A Critical Theory of Artificial Intelligence] challenges the promise of a "democracy of recommendations", which claims it can govern more effectively than the ballot box and public debate by drawing on our digital footprints.
Innerarity is a full professor of Political and Social Philosophy, an Ikerbasque researcher at the University of the Basque Country, director of the Institute of Democratic Governance, and holder of the Chair on Artificial Intelligence and Democracy at the European University Institute in Florence. His latest book, which has received the Eugenio Trías Essay Prize, provides a springboard for urgent reflection on how technology is reshaping the foundations of social coexistence.
This interview is based on a conversation between the philosopher and Ana Sofía Cardenal, full professor at the UOC's Faculty of Law and Political Science and coordinator of the eGovernance: Electronic Administration and Democracy research group.
Your book opens by asking what distinguishes human intelligence from artificial intelligence. If you had to single out just one key difference, what would it be?
The fundamental difference is that humans have common sense. Well, some more than others. Common sense is hard to explain, but easy to understand. It's the ability to grasp the context of a situation and cope effectively with incomplete information, filling in the gaps ourselves somehow. Machines don't have this ability; everything has to be spelled out for them.
In the book you introduce two new ideas: "algorithmic governance" and the "democracy of recommendations". How would you define them in simple terms?
"Algorithmic governance" refers to political decision-making carried out through algorithms, designed to manage levels of complexity that are beyond humans. In the book, I set out a series of conditions and limits for this type of governance. In a "democracy of recommendations", the recommendation systems used by platforms such as Amazon are applied to politics. This involves inferring what people "really" want from our digital footprints, without ideological mediation. I criticize this approach because there's a profound misunderstanding of what preferences are and what kind of individual is present in these systems. In my view, it's an inadequate substitute for democracy.
You say that algorithmic governance has not yet become a fully fledged part of politics. Is there a risk that these concepts could create confusion by anticipating events that haven't yet happened?
It's coming. It's unstoppable. The use of algorithms in decision-making will become ever more intensive, and it will cause problems. There have already been serious cases, such as one in the Netherlands where an algorithm flagged as fraud a mistake made by migrants who did not have a good command of the language, and the social benefits system rejected their application. A human would have understood it immediately, on seeing that they were unable to communicate in Dutch. A democracy of recommendations is more utopian, but it wouldn't surprise me if those in positions of technological power were dreaming of imposing such a model. We already see this happening in other areas where public service is being replaced by a customer-oriented model. We need to prepare conceptual arguments to counter something that's going to have a great impact.
You trace the history of automation in politics, from Hobbes to modern bureaucracy. What is different about today's algorithmization? Why might it make governance more opaque or harder to control?
It's a different kind of opacity. This technology introduces a level of radical opacity. We don't fully understand why a particular decision has been made. With a bureaucrat, we could ask for an explanation, and they might give us one, or they might not. But now we're dealing with machines that would struggle to explain their decisions. There's a general opacity built into the system. My idea is to distinguish between different types of opacity and to demand the appropriate level of transparency for each one.
You criticize the idea that "ethical add-ons" are enough to keep AI in check. If a moratorium is unrealistic and ethics alone are insufficient, what path would you propose to achieve genuinely democratic AI?
It would have to be an artificial intelligence whose entire life cycle involved explicit, and even political, human intervention. The democracy of recommendations seeks to replace people's explicit will with the implicit will of the consumer. What people want depends on what they say they want, not on what we infer. Democracy is not a waste of time. It's a fragile, contestable and improvable guarantee that we'll end up somewhere worth being, somewhere that includes and is accepted by all of us. That's the model. The democratic utopia is related to this.
"Democracy is not a waste of time. It's a fragile, contestable and improvable guarantee that we'll end up somewhere worth being, somewhere that includes and is accepted by all of us"
One of your warnings is that algorithms may erode our personal autonomy. This concern also arose with earlier technologies, such as radio or television. But isn't it true that our opinions have always been open to influence, with or without technology? Is the idea of a fully autonomous, reflective subject really viable, or is it more of a useful fiction?
I think we're facing a situation that could be described as "digital hysteria". We've been turned into hysterical subjects with wildly inflated expectations, but also gripped by panic that's out of proportion to reality. My book, if you'll allow the analogy, is an anti-anxiety drug for digital subjects. It's a sedative. We need to think carefully about intelligence, control, transparency and democracy. Reflecting on these concepts will allow us to calibrate our expectations and our fears.
You also warn about the impact of algorithms on democratic deliberation. Is deliberative democracy threatened more today than it was in the age of mass media? Or is it also a necessary fiction, like that of the autonomous subject?
Yes, it's a fiction, but one that at least helps us to set a limit to the mere aggregation of interests. I don't really know how deliberative democracy is achieved, but we do know that simply aggregating interests often causes harm to those concerned or yields suboptimal outcomes. We need to think about what needs to be added to aggregation, and what strategies, institutions and procedures might improve it.
Your idea of democracy as a system for correcting human fallibility is an intriguing one. But perhaps it's a rather sophisticated view. Do you think this way of understanding democracy is actually shared by a significant proportion of the public?
Essentially, it's an old idea from Spinoza: humans are unlikely to unanimously keep making the same mistakes. Democracy is an invention designed to make that even more unlikely. In a democracy there is government, opposition, freedom of expression and freedom to criticize… I'm not an unconditional supporter of epistemic democracy, but I think we tend to assign democracy a mainly normative value and sometimes fail to pay sufficient attention to its epistemic advantages. The problems of the world require knowledge more than good will; to a great extent, they're problems of ignorance rather than lack of leadership.
You argue that two forms of legitimacy coexist in democracy: procedural legitimacy and results-based legitimacy. Do you think part of today's democratic crisis stems from the fact that many people only recognize legitimacy grounded in results?
This suggests that we still have a great deal of work ahead of us. Right now, there is tremendous pressure to make results the sole criterion. I've been surprised to encounter well-educated, democratic people who are beginning to think that the Chinese model is "not so bad after all". As a society, we're tempted to adopt this view. We need to counter this in two ways: academics must develop an appealing theory of the other aspects of democracy, such as procedures and alternative forms of legitimacy; and the political system must deliver better results. I reject the notion that academics should bear the entire burden of fixing this dysfunction. If the political system is unable to address urgent and long-standing issues such as climate change and inequality, it will find itself increasingly powerless against those willing to throw everything overboard in exchange for results.
The UOC is intrinsically linked to the use of technology. From your perspective, what fundamental principle should guide the application of AI in education, so that it enhances, rather than limits, students' autonomy and critical thinking?
I don't believe there's any contradiction between the UOC's basic medium being a digital environment and the education it provides fostering those values. During the pandemic, we realized that digitalization was very useful for some things and had limitations for others. The UOC has known this for much longer. My answer would be that we shouldn't see these things as incompatible. The cognitive strategies we foster in students can be entirely compatible with either a digital or a face-to-face format.
Transformative, impactful research
At the UOC, we see research as a strategic tool to advance towards a future society that is more critical, responsible and nonconformist. With this vision, we conduct applied research that's interdisciplinary and linked to the most important social, technological and educational challenges.
The UOC's more than 500 researchers and over 50 research groups work in five research units, each focused on one mission: lifelong learning; ethical and human-centred technology; digital transition and sustainability; culture for a critical society; and digital health and planetary well-being.
The university's Hubbik platform fosters knowledge transfer and entrepreneurship in the UOC community.
More information: www.uoc.edu/en/research
Press contact
Núria Bigas Formatjé