"Machines must never make man superfluous"

May 25, 2022
Community

Artificial intelligence offers opportunities for a more sustainable and inclusive society, but it also entails ethical risks, such as bias or social exclusion. This calls for a people-oriented, value-driven approach with a long-term perspective. Only then can we leave the best possible future to the generations that follow, as became clear during the second lustrum theme day of Nyenrode Business Universiteit.

Targeted fraud detection, complex medical diagnosis, or smart preventive maintenance that anticipates a failing escalator or elevator: these are all examples where artificial intelligence (AI) beats the human brain in speed, scope, and success rate. The use of AI, however, brings ethical dilemmas, and exploring them is the central theme of the second lustrum theme day of Nyenrode Business Universiteit. "AI offers great opportunities, but the leaders of today and tomorrow must also be aware of the threats and their impact on people and society," says chair Jessica Peters-Hondelink, Director Executive Education at Nyenrode.

Strong and weak AI

What is artificial intelligence, and what is it not? Jan Veldsink, lead Artificial Intelligence at Rabobank and core lecturer in AI and cybersecurity at Nyenrode, takes the physical and virtual audience on a tour d'horizon of wonder and truth, busting a few myths along the way. "AI is not magic or anything divine," says Veldsink. "It's just software, built by humans, using existing data." Public perception is often dominated by 'strong' AI: super-intelligent systems that will take over the world. Veldsink: "As humanity, we have a duty to prevent that future scenario." Fortunately, current applications still fall under 'weak' AI: computers that perform tasks better, more efficiently, and faster than humans. The basis is machine learning: you feed in data, the computer very quickly recognizes patterns – it finds correlations, not causality, Veldsink qualifies – and from that you develop a predictive model that can help optimize decision-making.
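
Veldsink's loop can be made concrete in a few lines of code. The sketch below is purely illustrative and not from the lecture; the sensor features, data, and failure labels are invented. The pattern, though, is the one he describes: historical data goes in, the computer finds correlations, and out comes a predictive model that supports, rather than replaces, a human decision.

    # Minimal sketch of the machine-learning loop: data in, patterns out.
    # All data here is invented; in practice you would use historical records.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical elevator readings: [vibration, temperature, years since service]
    X = [[0.2, 31, 1], [0.8, 55, 6], [0.1, 28, 0], [0.9, 61, 7],
         [0.3, 35, 2], [0.7, 58, 5], [0.2, 30, 1], [0.8, 60, 6]]
    y = [0, 1, 0, 1, 0, 1, 0, 1]  # 1 = the elevator failed shortly afterwards

    # The model learns correlations between readings and failures...
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # ...and predicts for a new reading; a human interprets the prediction
    # and decides whether to schedule preventive maintenance.
    print(model.predict([[0.85, 59, 6]]))  # likely [1]: inspect this elevator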

Fighting fraud with algorithms

What takes people years, computers do in weeks, hours, minutes, or seconds. The human brain can handle only five to seven data points at a time; a computer handles thousands. The applications are endless and deployable in every sector and for every discipline in the organization, Veldsink outlines. Rabobank, for example, uses AI to combat fraud and money laundering. AI can also be used to identify risks in the supply chain, to improve maintenance planning, or for marketing: which product will sell best next year? "Don't start with the technology, but with the business, with the business questions or problems," Veldsink advises.

Kasparov's lesson

The most important insight in AI: humans must stay on top, remain in control, and make the final decision. "Machines must never make humans superfluous." Models are never perfect; they make mistakes. So people are always needed to interpret the results and correct errors. But people make mistakes too, Veldsink emphasizes. "When categorizing documents for Rabobank, people err 20% of the time and AI only 5%." The best results are therefore obtained through interplay between humans and computers, so that systems can develop organically. Veldsink gives the example of chess legend Garry Kasparov, who lost to the IBM computer Deep Blue in 1997. In the years that followed, Kasparov discovered that a mediocre chess player collaborating with an AI can beat a top player. Kasparov is now convinced that man and machine must work together on augmented intelligence. If you can't beat them, join them.

Caring for tomorrow

Connection is also the common thread in the story of Jan van de Venis, human rights lawyer and ombudsman at the Lab Future Generations. First of all, intergenerational connection. Indigenous peoples look seven generations back and seven generations ahead, Van de Venis explains. "They want to be a good ancestor for their children and their children's children. In everything they do, they look at the impact on future generations. As a modern Western society, we have lost that care for tomorrow." The Lab Future Generations wants to change this and strives for an inclusive and sustainable society by taking the well-being of the next seven generations into account in important decisions. One of the issues the lab deals with is the question: how can we design future technology that contributes to an inclusive job market and society?

Ethical anchor

Three anchors are necessary for the development of this technology, Van de Venis outlines. First, use an ethical anchor when dealing with the growing risks of AI. "Look at the ethnic profiling in the allowances affair: are we sufficiently aware of prejudices in the systems? How do you prevent systems from being used for state control, as in China, where someone who runs a red light too often, for example, can no longer enroll in university? Do the advantages of AI only benefit the richest or brightest people, or can everyone in society benefit and keep up with the developments?" The second anchor: put people and the best possible future at the heart of the technological design process. Use AI to serve humanity. And the third anchor: seize the room for improvement. "As employers, do not wait for Dutch and European regulations to work on that inclusive job market of the future," Van de Venis urges. "Let's start developing new technology and the associated cultural and behavioral change right now."

Building modern cathedrals

Companies can follow the roadmap that the Lab Future Generations has drawn up, in seven steps: define the purpose of the new technology; involve a diverse group of people and together determine the values the technology should support; think about how those values will be achieved; investigate the impact of the new technology (in an ecological or social sense, for example); choose co-creation in the development process; keep testing and adapting; and finally, monitor and evaluate. A people-oriented, value-driven approach with a long-term perspective, as a legacy for the generations to come. Van de Venis: "Our ancestors used stones to build cathedrals. Likewise, we can use technology and AI to create the best possible future for our future generations."

Garbage in, garbage out

What does the development of artificial intelligence look like, and what dilemmas do we encounter? Ronald Jeurissen, professor of Business Ethics at Nyenrode, shows a few striking examples. AI, he says, is nothing more than reducing the world to units of account (something the Babylonians already did) and recognizing patterns in them: "Dumb but fast technology, with basically simple algorithms. And it is precisely in this simplicity that the threat lies." AI systems can be used, for example, to produce fake faces and 'recognize' them as real. And if you feed computers data that is based on prejudice, the output is prejudiced too. Garbage in, garbage out. Jeurissen shows an image of a woman behind a PC who was classified as a man by the algorithm. The system mistakenly used the computer, rather than the person behind it, as evidence: apparently the algorithm had been trained only on pictures of men behind computers, so the person in the picture had to be a man.
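
That failure mode is easy to reproduce. Below is a minimal, hypothetical sketch (the features and data are invented for illustration): train a classifier on photos in which only men sit behind computers, and the spurious correlation does the rest.

    # Garbage in, garbage out: a toy classifier trained on prejudiced data.
    # Invented features per photo: [behind_computer, has_long_hair]
    from sklearn.tree import DecisionTreeClassifier

    # In the training data, everyone behind a computer happens to be a man.
    X_train = [[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 0]]
    y_train = ["man", "man", "man", "woman", "woman", "woman"]

    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # A woman behind a computer: the learned 'evidence' is the computer,
    # not the person, so the model answers 'man'.
    print(model.predict([[1, 1]]))  # -> ['man']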

Built-in social norms

Technology is never value-free, Jeurissen emphasizes. He quotes the French sociologist and philosopher Bruno Latour, who states that people (agents) and things (artefacts) are both actors that jointly realize values in networks. Abstract? No, just think of a shopping cart: its coin lock embodies the built-in social norm that the cart must be neatly returned. Or take the ultrasound during pregnancy: if the fetus does not meet the social standard, the parents face tough ethical considerations. Usually AI is assessed only along the technical dimension, according to Jeurissen, but the human dimension must also be taken into account. "For example, you need a diverse team to train an algorithm without prejudice." And then there is the institutional dimension, such as legislation and regulation. "You can build a car in such a way that it is technically impossible to drive into a crowd. But the government can also legally prohibit that behavior, which is much cheaper." The same goes for the application of AI: it requires the right mix of ethical responsibility, legal institutions, and norm-driven technology.

Holiday pictures on Facebook as a red flag

Both the European Commission and UNESCO have formulated guidelines for dealing with the ethical aspects of AI. The EU names four values for the use of AI: respect for the human right to self-determination, the prevention of harm to people and society, justice, and explainability. In addition, a number of core requirements must be met: AI applications must respect privacy, they must not discriminate, and there must be diversity, transparency, and accountability. Fine words, but how unruly is everyday practice? To experience this, the participants of the lustrum theme day get to work on the KeenBee case (the name is fictitious): an algorithm that municipalities can use to detect welfare fraud. KeenBee 1.0 was an expert model, in which a team of experts defined large numbers of indicators of fraud risk, ranging from dog-tax payments to holiday pictures on Facebook and water consumption. Labor-intensive, burdensome for innocent citizens, and with a 'success rate' of only 14%. These disadvantages were overcome by KeenBee 2.0, an AI model whose algorithm was trained only on the fraud history in the existing welfare data. The success rate shot up to a whopping 84%, while less data and fewer inspection interviews were needed.
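
The contrast between the two versions can be sketched roughly as follows. This is a hypothetical reconstruction, since KeenBee itself is fictitious; the indicator names, functions, and data layout are invented. Version 1.0 scores hand-picked expert indicators; version 2.0 learns its own indicators from labeled fraud history, and with it any bias that history contains.

    # Hypothetical sketch of the two KeenBee approaches (all names invented).
    from sklearn.linear_model import LogisticRegression

    # KeenBee 1.0: an expert model with hand-written risk indicators.
    def expert_risk_score(record):
        score = 0
        if not record["dog_tax_paid"]:          # expert indicator 1
            score += 1
        if record["holiday_pics_on_facebook"]:  # expert indicator 2
            score += 1
        if record["water_use_m3"] < 10:         # expert indicator 3: vacancy?
            score += 1
        return score  # inspect when the score exceeds some threshold

    # KeenBee 2.0: a model trained only on fraud history in welfare data.
    def train_fraud_model(X, y):
        # X: historical welfare records as feature vectors
        # y: 1 where fraud was established, 0 otherwise
        # Note: the model inherits any bias present in the historical labels.
        return LogisticRegression(max_iter=1000).fit(X, y)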

Select by zip code or not?

Both in the room and online, groups work on the case. KeenBee may be successful in tackling welfare fraud, but how does the case score ethically when the four EU values are applied to it? There is plenty of discussion. Select by zip code or not? Can the government ask innocent citizens whether they pay their dog tax, or whether they have a cannabis plantation in the attic? And what role do bias and social pressure play? One participant: "When Hungarians cheated with the rent and care allowances, everyone said: this has to stop. In doing so, we gave politicians a perverse incentive, which ultimately led to the allowances affair at the Tax and Customs Administration. As a society, we are partly to blame for that debacle." Another group struggles with the lack of transparency. "You don't know on what basis the system marks someone as a fraudster," says one participant. "But that also allows an inspector to check a fraud report with an open mind and without prejudice," responds another.

Is the system racist?

No participant judged the case ethically responsible, according to the plenary feedback: most find KeenBee morally flawed. The focus is one-sidedly on detecting as many fraudsters as possible, while the ethics have been neglected. The use of AI is not transparent, there is a risk of bias ("You don't know whether the system is racist," it sounds from the audience), and it is unclear whether the data is used in the right context and whether the results are interpreted correctly. "What if the system only classifies women as fraudsters?" Jeurissen once again emphasizes the importance of a diverse team in the development of AI and the interpretation of the outcomes. "AI is too important to leave to specialists alone; you have to build a democratic process around it."

"Put a philosopher or poet in the AI team"

Veldsink adds to this during the final round of questions: "In addition to coders, you need social scientists, behavioral scientists, philosophers, and poets in your team, who help ask the right questions and make moral considerations possible." Systems thinking is important, Veldsink emphasizes, not only in AI teams but already at the front of the process, in the organization's decision-making. Are we allowed to use this data? Who actually owns that data? What is the impact of our actions? How do you project that onto planet earth and its inhabitants? These are questions from the audience.

Van de Venis points to Article 1 of the Universal Declaration of Human Rights: all human beings are born free and equal in dignity and rights; they are endowed with reason and conscience and should act toward one another in a spirit of brotherhood. Often we reason only from the rational mind, but AI requires a broader approach, according to Van de Venis. "An ancient Israeli tribe distinguished between three types of bodies: the thinking, the feeling, and the acting body. We can learn a lot from that."

A balance between head, heart, and hands, paraphrases afternoon chair Peters-Hondelink in her closing speech, which happens to have been Nyenrode's educational philosophy for 75 years. The lustrum theme day ends with a robot dance led by professional dancer Sam den Hollander: approximately 100 participants move jerkily through the hall, or behind their screens. For a moment, man and machine are one.
