We dislike the truth and love to be fooled (Luciano Floridi)

Professor of Philosophy at the University of Oxford and a member of Google's advisory panel on the right to be forgotten, Luciano Floridi stressed the relevance of the philosophy of information for adapting to the infosphere, within which so many lies circulate that disincentives could prove necessary. While Google has made good progress towards the right to be forgotten, only data blocking at the source and complete removal would constitute an effective application. Describing Brexit and Donald Trump's win as two disasters, Floridi asserted that we dislike the truth as much as we love to be fooled. Finally, the growth of Artificial Intelligence (AI) should force us to develop an infosphere-specific ecology, concluded Floridi, who answered Cyceon's questions below:

French version available here.


Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Director of Research and Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford.

The Interview

1) Over the last two decades, at least in the West, a large majority has integrated digital technology into everyday life, from smartphone to GPS to the broadband internet. Beyond the theoretical viewpoint of The Fourth Revolution, how can the philosophy of information help one to better adapt to the new reality of “infosphere”?

L.F.: There are many ways in which a new philosophy of information can help us to adapt to the new reality represented by our infosphere, but I would say that probably two are more pressing today, after the terrible disasters represented by Brexit and the election of Donald Trump as President of the United States: communication and behavior. Take communication first. A good philosophy of information should help us find new solutions for a healthy exchange of information online. Admittedly, platforms like Facebook and Twitter are finally doing something to decrease the disgusting level of lies and falsities circulating on their platforms. But there should be some independent oversight. This leads me to the second point. A good philosophy of information should help us understand that the infosphere is the new environment in which we are spending an increasing amount of time. We should behave ecologically towards its construction and stewardship, both because it is our new home, and because a flourishing digital world could lead to a better analogue world as well. In this case, too, incentives and disincentives should be put in place.

2) Nicknamed the "Google philosopher", you've been serving since mid-July 2014 on Google's advisory panel on the right to be forgotten. Since then, do you think Google has made some progress with a view to enforcing this right? Considering it's quite easy to make copies of any data available on the internet, can it really be possible to be forgotten?

L.F.: I think Google adopted the right approach by appointing an independent, pro bono panel of experts to provide external advice, and then by following the recommendations that we delivered. According to the most recent data, this has led Google to evaluate (since May 2014) a total of 629,647 requests for the removal of 1,782,722 URLs, with ca. 43% being removed. More recently, Google moved a step forward by extending the removal of the URLs to any domain-based version of its search engine used by anyone conducting name-based searches from the same European country as the original approved request. To put it simply, this means that, whereas in the past someone from e.g. France could still access a blocked French name using google.com, this is now impossible. This is a good development. But for some critics this is still insufficient. Yet I fear that, in an infosphere that does not know geographical boundaries, acting on search engines to block access to contents is never going to be the ultimate solution. If some content is harmful, it should be blocked at the source, for any search engine, anywhere, or removed completely, as we do with child pornography. Only this would be an effective implementation of the right to be forgotten.

3) Intelligence officers like former CIA Station Chief Scott Uehlinger repeatedly told Cyceon that the worst issue in their field was twofold: mirror imaging and perception bias/willful blindness. How would you explain the fact that despite the exponential growth of means of information, mis/disinformation seems to propagate/convince more effectively than in the past? Is this just a perception issue, a lack of education or a consequence of information inflation?

L.F.: Mis/disinformation travels faster because it gives us what we want. Lies and falsities are pleasant and non-challenging; they make us feel good, confirm our views, show us that those who disagree with us are wrong, feed our wishful thinking, and support our dreams and fantasies. The question is how we ever managed not to get mis- or disinformed, all the time, when it is so easy and so enjoyable. The answer is that in the past the huge demand for lies and falsities was simply not matched by a sufficient supply. Social media have redressed this mismatch. Today there is enough mis/disinformation on almost any topic to satisfy any appetite. We walk into a shop and with the same banknote buy a packet of cigarettes and a lottery ticket. We dislike the truth and love to be fooled. And this is something that Facebook has been supporting quite irresponsibly for years. Recently, speaking on stage at Techonomy, Mark Zuckerberg, when asked about the strategy guiding Facebook, answered "listening to what people want." This is terrible. Because most of the time we want to be told that the packet of cigarettes will not damage our health, and that we shall win the lottery with our ticket.

4) As Artificial Intelligence (AI) is drawing more and more attention and money from technology giants like Apple and Google, to what extent can one imagine that AI will be able to generate/spread information by itself, true or false, someday in the future? Could it be the ultimate side-effect of the global “infosphere” and how can one prevent it from emerging?

L.F.: AI is still in the hands of a small guild of experts and labs. But it is destined to become more widespread. Once AI is "democratized", i.e. as cheap and as diffused as the smartphones in the pockets of millions of people, then we should expect an exponential amplification of communications coming from, and between, artificial sources. Already today data exchange is mainly a machine-to-machine business, and programs, for example, write news. Algorithms will inevitably spread whatever falsities they are fed, or magnify emerging biases. And it gets worse: even if all the information disseminated by AI systems were of decent quality, its quantity would be damaging, in the same way as ten Nobel laureates all shouting at the same time in a bar represent mere noise. For all these reasons, we need a serious and robust ecology of the infosphere. And we need it now. But I'm not very optimistic. We are quickly and irreversibly destroying the biosphere; I am not so sure we will do a better job with the infosphere. But we must try, because it is the only way we can save ourselves from ourselves.

Credits: Picture by Ian Scott.