The ethics of AI – what are we even talking about?

Wovon man nicht sprechen kann, darüber muss man schweigen – whereof one cannot speak, thereof one must be silent? The discussion on the real potential of AI is often buried under the tech hype.

But AI is becoming a vital international competitive advantage for Finland. As a welfare society, we are naturally placed to study the ethical questions related to AI, which allows us to take a pioneering role in this field as well. But what are we actually talking about, and how?

With the hype surrounding AI, we easily end up in a situation where AI is treated as an absolute value for solving all kinds of problems. The practically unchallenged justification of AI is manifested in phrases such as “AI defeats humans”, “AI promotes well-being”, “a more intelligent society through AI” or “AI revolutionises traffic and promotes sustainability”. Such catchphrases touching on global technological and social challenges are thrown around not only by the media but by many experts too, thus legitimising the social superiority of certain technologies. An off-the-cuff remark in this field can have a major social impact, as Federica Lucivero notes in her book Ethical Assessments of Emerging Technologies (2016). According to Lucivero, flippantly dropped visions of, for example, the potential of AI can influence how decision-makers invest funds in technological research, steer the focus of scientists competing for funding and guide the investments made by businesses. Spoken aloud and with sufficient authority, such statements either become self-fulfilling or lead to inflated expectations.

‘Whom can autonomous cars run over?’ is an absurd question

Artificial intelligence and its ethics are currently among the hottest topics. Yet in-depth discussions of the concept of ‘AI ethics’ are rare. What are we talking about when discussing the ethics of AI? What is the substance of AI ethics? Beyond the familiar discussion on the safety risks of robot cars and a few examples pondering the disappearance of jobs (without seeking to trivialise these important issues), there has been a glaring lack of real-life examples of AI in the discussion. But discussion is needed. The questions have to be asked even if there are no answers yet. To quote Wittgenstein’s Tractatus Logico-Philosophicus, we are speaking about that which cannot be spoken of.

We easily fall into the trap of looking for ethical problems related to AI when we should be asking how AI could be used for good and for improving the quality of life. That is why the question of whom autonomous cars should be allowed to run over is absurd: the answer is, of course, nobody. In addition to looking at individual applications, the ethical discussion should be extended to the essence of well-being and to the aspects of artificial intelligence that benefit people, along with AI’s relationship to society and societal change. When we do that, the key questions become maintaining the security of citizens and their trust in society, the equal availability of services, the possibility to be heard, and the justification of decisions from the perspectives of human dignity, welfare and sustainability.

Even though the discussion on the ethics of AI has hardly begun, the discourse on AI itself has already arrived at the requirement of user-friendliness: we are talking about the explainability of AI. This refers to the concept of ‘explainable AI’, promoted by, among others, the European Commission. The concept is meant to foster understanding of how AI systems function and of the decisions those systems generate.

A highly complex issue that merits discussion and brings us back to the fundamental message of the Tractatus: when it is possible to say something in the first place, it can be said clearly. Promoting public understanding of the workings of AI and of what can be achieved with it is a good thing. But a technical understanding of the principles of AI is not enough. We also need responsible awareness: awareness of how the adoption of AI will influence our daily lives, actions and treatment. Only when this awareness is achieved do we arrive at the fundamental question: what are the short-term and long-term effects of AI on the development of our community, society and world, and how could we steer those effects in a positive direction? In other words, the speed and efficiency of progress are not the only values. We also need to know which direction to take and how to anticipate negative consequences.

The underlying values and conception of humanity

The fundamental question is: who defines ‘explainability’? Who is doing the explaining, and who the listening? From whose perspective is the explanation of the significance of a technology generated? What values and conception of humanity are the explainers’ viewpoints based on? In our technologically advanced world, where children learn the basics of programming at day care, is it enough to teach the ethics of AI only to those studying for a technology degree? Should ethical questions be addressed from primary school onwards? Should pupils – the AI developers of the near future – be required to pass an ethics licence?

Wittgenstein said that ethics cannot be discussed in a language free from misunderstandings. Language is a curious tool in that two speakers can use exactly the same words yet be nowhere near understanding each other. Take autonomy. Engineers and computer scientists use the word about things like cars and ships: to them, autonomy means learned activity defined by outside input, which involves no actual responsibility in the philosophical sense. To a humanist or philosopher, on the other hand, autonomy means responsible (human) activity or agency based on the actor’s own values and goals. For example, when we emphasise autonomy in technology for the elderly, we want to safeguard the right of self-determination and informed consent when deciding on the purchase and use of that technology.

In his book, Rauhankone (Peace Machine, 2017), Timo Honkela expounds on the problems of understanding concepts, stating that, since people will never be able to truly understand what the other party is talking about, we should turn to technology in the negotiations on meaning: by combining data, AI could interpret the message of our discussion partner better than our own minds ever could. For example, artificial intelligence could deduce what the other party really means by concepts such as ‘health’ or ‘equality’.

Stories and narratives build shared understanding

It is important to talk about shared experiences and meanings and to try to explain that which can be explained. Stories and narratives are crucial to understanding concepts and meanings. Our mind, our cognition, consists of the activity of generating meaning through thought and the processing of information (you could tell the engineers that the mind is software whose computational, i.e. mental, processes are carried out by ‘hardware’ called the brain). The mind requires awareness: an experience of the surrounding world that we share with others. But sharing experiences with others is difficult enough at the best of times, and it becomes impossible if the meanings of shared concepts differ between the speakers. Sharing requires the ‘right’ words and examples from real life as we have lived and experienced it. With words, concepts and entire stories, we open our experiences and values to others and create a shared narrative understanding and a reflective conception of our community and the world.

So it is not irrelevant which words we use to discuss the additional effects, or ‘added value’, generated by technology. Let us focus on discussing the ethics of AI with the right words, in a clear and intelligible fashion, taking the life experience of the listener into account. Only then can we transmit the information and the underlying values that are important to us. We should be particularly careful when speaking a foreign language, so that the value base of our message is not transformed into something different in the ear of the listener.

I simply have to conclude with a true story about the meaning of words. Once upon a time at VTT, we were writing a description of the implementation of user tests for a certain technology. We briefly described the implementation of the pilot projects by telling where and when they were carried out, like this: “Pilots were executed…”. Upon reading the text, an amused British colleague asked us: “I say, really, how many pilots did you actually kill?”
