Ethically responsible AI as a moral duty

Some digital trends are not about the use of digital technology itself, but about its consequences. One of the most prominent is artificial intelligence (AI). The question is: how do we handle it responsibly and ethically?

AI has the potential to be used for both good and bad purposes. It can help organizations work more efficiently, produce cleaner products, reduce environmental impact and improve human health. However, when AI is used in unethical ways – for example, for disinformation, deception, human abuse or political oppression – it can have serious harmful consequences for individuals, the environment and society. This is precisely why the ethical use of AI matters.

AI as a technology to replicate, augment or replace human intelligence is not new, but it has developed rapidly over the past decade. It has become an integral part of products and services because it can help us immensely in many areas. The development of Covid-19 vaccines, for example, went as fast as it did partly thanks to AI. Other practical applications include driver-assistance technology in cars, the ‘recommendations’ you see when visiting an online store, and the similar suggestion in Google Search: ‘If you find this article interesting, the following article may also be useful’. But negative consequences also lurk, and they can easily overshadow the positive ones.

This is why organizations are starting to develop AI codes of ethics. There are several reasons to do so:

  • If we do not approach AI from a responsible and ethical point of view from the beginning, its development will slow down in the near future;
  • Humans cannot always understand how AI reaches its conclusions and must be protected from its risks;
  • Creators and users of AI applications have a duty to proactively adopt a responsible and ethical approach; given the potential societal impact of a disruptive technology like AI, this is morally necessary;
  • Organizations can take on a guiding role in the discussions that will be held around AI and position themselves in a positive and responsible way, contributing to a positive, non-threatening image of AI.

Data collection

“The invisible aspect gets to the heart of the ethically sound AI question: is this a good thing or a bad thing?”

Throughout the customer journey, an organization continuously collects customer data, because this creates an increasingly complete picture of each individual customer. Ultimately, there is enough data to address a customer personally and to encourage a specific action during the customer journey. AI can be implemented at many points throughout this process and has increasingly become a fully integrated part of the entire ‘system’. It is automated and often not visible, or not recognizable as such.

This invisible aspect gets to the heart of the ethically sound AI question: is this a good thing or a bad thing? Is it acceptable to use technology that is not always recognized as such and that can respond more smartly than a human? When this is combined with the intention to ‘influence behaviour’, a sensitive element is automatically added.

Responsible AI

We understand responsible AI to mean using AI responsibly, which has several dimensions, such as:

  • Where do you use AI – and how does that relate to human involvement?
  • How do you use AI? How reliable is the result? And can it take over human interpretation?
  • Do all stakeholders know you are using AI? What AI are you using? And on what data do you use it?
  • How can the responsible use of AI help you gain trust? After all, trust is an essential element when it comes to ethics.

Few customers are likely to realize how much data they provide to an organization during a customer journey, how much customer data can be ‘purchased’, and how data points from other customers are used to augment their own data into a complete customer view. This view can then be refined further via AI as more data and transactions come in.
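
To make this largely invisible enrichment tangible, here is a minimal, purely illustrative Python sketch of how data points from different sources might be merged into a single customer view. All names, fields and sources are hypothetical; real systems are far more complex.

    # Purely illustrative: assembling a 'customer view' by merging data
    # points from several hypothetical sources. Real systems are far more
    # complex, and raise exactly the questions this article discusses.
    from dataclasses import dataclass, field

    @dataclass
    class CustomerView:
        customer_id: str
        attributes: dict = field(default_factory=dict)

        def enrich(self, source: str, data_points: dict) -> None:
            # Merge new data points into the profile, tagging their origin.
            for key, value in data_points.items():
                self.attributes[key] = {"value": value, "source": source}

    # Data points trickle in from different touchpoints in the customer journey.
    profile = CustomerView(customer_id="C-1024")
    profile.enrich("web_shop", {"last_viewed": "running shoes", "visits": 12})
    profile.enrich("newsletter", {"opens_per_month": 4})
    profile.enrich("purchased_data", {"estimated_income_bracket": "middle"})

    # This is the view that personalization algorithms act on – often without
    # the customer ever seeing how complete it has become.
    print(profile.attributes)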

Responsibility

“It seems more relevant than ever to put the ethical use of artificial intelligence on the agenda as an organization”

What responsibility do you now have as an organization towards your customers? How open and transparent should you be? What obligations do you have? And is grouping people on the basis of AI (algorithms) morally justifiable? Even if the algorithm itself were justifiable, the question remains how its results will be used. Because, as so often, the ‘evil genius’ is not the technology – the algorithm – but the one who uses and controls it.

It seems more relevant than ever to put the ethical use of AI on the agenda as an organization. By making a promise to handle AI responsibly, you show (social) responsibility, but you also indicate, both explicitly and implicitly, that AI is of value to your organization in achieving your goals.

(Author Jack Klaassen is director of innovation and technology at Macaw.)
