‘Reference Man’: a call for responsible use of technology

The documentary series ‘Reference Man’ has once again highlighted how much the standard white man still determines how things are designed in our society. The growing use of artificial intelligence makes the need for scrutiny even more urgent.

“Crazy, human – it’s me, technology. I just wanted to say I’m proud of you. I mean, look at you: always reinventing yourself, getting the impossible out of yourself and the best out of me. You might think you cannot live without me. But without you I am nothing. With you I can take over the world, and who knows what else we can achieve together. You and me.”

This is the opening of the recent Vodafone announcement about the launch of its GigaNet. It contains many promises and keywords such as innovation, ‘wanting to achieve the best’ and creating a feeling of ‘we, together’ – all through technology. But according to media researcher Mirko Schäfer, new technologies are never neutral; they are always colored.

In addition, the discourse surrounding these technologies is often utopian or dystopian: in the media and in contemporary society, new technologies are discussed in either glowing or gloomy terms. For example, Mo Gawdat – former Chief Business Officer of Google X and author of the book Scary Smart (2021) – warns that smart machines already control a large part of our lives. According to him, engineers and business people are insufficiently aware that, with all these algorithms and smart machines, we are no longer merely building tools. In Gawdat’s view, constructing artificial intelligence amounts to giving birth to a thinking being.

The Scientific Council for Government Policy recently made an appeal in a new report on the use of artificial intelligence (as reported by Het Parool, among others). With the growing impact of artificial intelligence (AI) on public life, business and science, we will have to think carefully about the extent to which we allow this lightning-fast technology that thinks for itself into society.

Ethical dilemmas for content creators in the media world

On Marketingfacts (2020), Maurits Kreijveld wondered when Amazon and Netflix would start organizing rehab weeks. He points to the addictive danger of using smart technologies for marketing purposes: “A one-time purchase or a one-time transaction is not enough: the customer has to keep coming back, so companies work on ‘stickiness’. They try to make the purchase decision ever more predictable and to provoke it with the right incentives. Instead of generic ads with general influencing techniques from psychology (nudging), we are moving to personal messages with a personal incentive.”

The question is whether consumers are still able to respond consciously to all these temptations. Think of the binge watching provoked by Netflix. According to Kreijveld, content specialists must therefore understand how to manage marketing campaigns and targeting, and how to use personalization responsibly. Unfortunately, technological development is increasingly accompanied by irresponsible behavior. Companies that trade in big data are therefore often no longer trusted.

Hidden imbalances in technology

In the four-part documentary series Reference Man, journalist Sophie Frankenmolen shows how the whole world still designs for the standard man, with his height of 1.75 meters and weight of about 80 kilos. In the episode on technology (available on NPO Start), she talks with experts about the prejudices that have also found their way into technology, such as sexist and racist algorithms.

Biased sample data

Sennay Ghebreab, an expert in artificial intelligence at the University of Amsterdam, emphasizes that sample data has often been collected with reference man, the standard white man, as the norm. Everyone deals with algorithms on a daily basis: in malls, supermarkets and banks, or even at the police station, where they are used to track down suspects. Algorithms are not impartial or neutral. They can discriminate against certain sections of the population, because they are unable to represent all groups well.
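The mechanism Ghebreab describes can be made concrete with a small simulation. The sketch below is purely illustrative (the one-dimensional “feature”, the group sizes and all numbers are invented; this is not code from the documentary): a trivial threshold classifier is fitted on a sample that is 95% majority group, and its accuracy is then measured per group.

```python
# Invented illustration: a classifier trained on skewed sample data
# performs noticeably worse on the underrepresented group.
import random

random.seed(42)

def make_person(group):
    # Hypothetical 1-D "feature" whose distribution differs per group.
    center = 0.0 if group == "majority" else 1.0
    label = random.random() < 0.5            # true class, independent of group
    value = center + (2.0 if label else 0.0) + random.gauss(0, 0.5)
    return value, label, group

# The training sample is collected almost entirely from the majority group.
train = [make_person("majority") for _ in range(950)] + \
        [make_person("minority") for _ in range(50)]

# "Model": a single decision threshold, the midpoint of the two class
# means as seen in the (skewed) training data.
pos = [v for v, y, g in train if y]
neg = [v for v, y, g in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(people):
    hits = sum((v > threshold) == y for v, y, g in people)
    return hits / len(people)

test_majority = [make_person("majority") for _ in range(1000)]
test_minority = [make_person("minority") for _ in range(1000)]
print(f"majority accuracy: {accuracy(test_majority):.2f}")
print(f"minority accuracy: {accuracy(test_minority):.2f}")
```

Because the threshold is fitted almost entirely to the majority group’s distribution, the minority group ends up with a markedly higher error rate, even though the true labels are distributed identically in both groups.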

Biased speech recognition systems

Nathalie Smuha, a legal and ethical AI expert at KU Leuven, explains that an algorithm mainly looks for patterns and context in large amounts of data in order to make predictions and decisions. Algorithms are increasingly used in translation and speech recognition systems. The translation of a word depends heavily on its context, and with the advent of artificial intelligence, translation services such as Google Translate have benefited greatly from this.

With machine learning in particular, patterns are fairly easy to recognize. Today, Amazon’s Alexa voice assistant can be found in many households. The more artificial intelligence is integrated into all areas of our lives, the better the systems can learn, but the more risks can also emerge. AI systems are generally developed by white men, who are not always aware that the applications they build actually contain prejudices. For example, Alexa understands a person with an accent less well than someone who fits the norm. Another example is the Dutch benefits scandal, in which technology disadvantaged less privileged sections of the population. Or think of application software that automatically rejects certain candidate profiles because the ideal candidate has been pre-programmed. Technology is thus increasingly taking over human judgment.

Sexist algorithms in the financial world

Joris Krijger, a specialist in ethics and artificial intelligence, tries to explain how sexism can end up in the financial world. It turned out, for example, that men with an Apple payment card were given higher spending limits than women (AD, 2019). How does such a bias arise? Building an algorithm is human work. If you look at how the group of programmers is composed, it is striking that the distribution is generally 22% women and 78% men, which allows everyone’s biases to creep in. In addition, data and sources themselves contain bias if they are not fully representative. Read elsewhere on Emerce, for example, the interview with Krijger about AI and ethics in banking.
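How a model can discriminate without ever seeing gender as an input can be sketched in a few lines. The example below is an invented illustration, not the actual Apple Card model: a “proxy” feature that merely correlates with gender lets an ordinary regression reproduce the skew baked into historical credit limits.

```python
# Invented illustration of proxy bias: gender is NOT a model input, yet the
# historical skew in the labels survives via a correlated proxy feature.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
gender = rng.choice([0, 1], size=n)           # 1 = man, 0 = woman (hypothetical)
income = rng.normal(50_000, 10_000, size=n)   # identical distribution for all
proxy = rng.normal(gender * 1.0, 0.3)         # correlates with gender, not merit
# Historical limits were set with a biased rule: same income, lower limit
# for women. These are the labels the model learns from.
limit = income * np.where(gender == 1, 0.40, 0.25)

# "Model": ordinary least squares on (income, proxy) only.
X = np.column_stack([income, proxy, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, limit, rcond=None)
pred = X @ coef

print("mean predicted limit, men:  ", round(pred[gender == 1].mean()))
print("mean predicted limit, women:", round(pred[gender == 0].mean()))
```

Dropping the sensitive attribute is not enough: as long as the training labels encode past discrimination and some feature correlates with group membership, the model rediscovers the pattern. This is why representative data and explicit bias checks, of the kind Krijger advocates, are needed.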

Biased face recognition

Face recognition is increasingly being used, for example to unlock mobile phones or at the gates of Schiphol. Luuk Spreeuwers, an expert in face recognition at the University of Twente, confirms that more and more surveillance images are being collected in our society. The banking industry is also likely to introduce face recognition in the future as a way to log in to your banking environment.

Modern algorithms work with deep learning networks: complex systems trained on large data sets. Their use has grown enormously over the last five to ten years. People with dark skin are the most vulnerable, because the systems are mainly fed with data and images of white people. Globally, the white population also has more access to the internet.

In general, image recognition software mainly looks at what data is available. The Dutch police also use it, as John Riemen, an expert in biometrics at the police, explains. The technology mainly offers help with investigation questions and acts as support: it is purely a search system, and a trained, educated expert behind the computer makes the decision. The city of Minneapolis in the United States has since distanced itself from this software (The Guardian). The Dutch police currently use a total of two face recognition sets; next year that number is set to rise to eighty.

A roadmap for the future

Today, companies in the media world are increasingly using artificial intelligence, for example to create, distribute and monitor content, or in technological systems such as face recognition software. As the examples above show, improper use of technology can lead to ethical dilemmas. It is therefore important to look for blind spots internally and to reflect critically on technological applications. In other words, it is essential that companies step up their efforts to use AI in a responsible, safe and sustainable way, as Genevieve Bell, a professor at the Australian National University, emphasized at #TNW2020.

In the best case, it would look like this: algorithms are fed complete datasets, without bias. Within media companies, it is important to hold this debate in order to form as complete a picture as possible, with the aim of using technology in a responsible way.

About the Author: Alexander Lamprecht is a lecturer in digital media at the Amsterdam University of Applied Sciences.

