Moral design is necessary for human-centred AI

Fontys Information and Communication Technology

To combat crime in its inner city, a Limburg municipality experimented with an algorithmic risk model that analysed camera images. By feeding the model with all kinds of data on motorists, criminals were supposed to be filtered out of the public through predictive policing. This clever, innovative solution earned the municipality an Amnesty International report on human rights violations: the algorithm had in fact perfected and automated ethnic profiling. It is an example of an increasingly common type of techno-ethical issue, in which technology is deployed without a full picture of its possible impact, according to Bart Wernaart, professor of Moral Design Strategy at Fontys University of Applied Sciences.

Putting humanity back in the machine

Artificial intelligence is going to change our world forever, but whether that change is positive or negative depends on how it is used, according to Wernaart: "Many techno-ethical issues are tackled top-down and at the back end of the innovation process. When a techno-ethical problem arises, we tend to solve it in the form of a commandment or a ban. Look at the interventions of social media platforms in Covid-19 discussions, or Apple's proposed photo-scanning plan to detect child abuse." That is where things go wrong, according to Wernaart. 'Putting humanity back in the machine' is an expression he likes to use: "If we want to use technology in a more human-friendly way, then there is a need to make morality part of the design process."

Start with Moral Design

Technology is never without moral impact, according to Wernaart, and its (sometimes unintended) side effects are often difficult to predict: "The car is a good example: it was invented with the aim of increasing our mobility, but one consequence is that our streets are no longer safe to play on. That is not necessarily a bad thing, but you have to be aware of it." A solution to this kind of dilemma, as in the examples above, is moral design strategy, which is what the professorship is concerned with: "What we actually do with moral design is try to capture the morality of the individual and use that as a design principle. You investigate which values and moral considerations play a role within the group of users that an application or service concerns, and how you can deploy these at the front end of your development process. You can also extend the scope by comparing this with, for example, a random group of Dutch people or a specific group of programmers, and learn from the similarities and differences. The strategy component is that this can also serve as a basis for policy." In short, by involving users in the process, their wishes and moral choices become part of the design. Social media algorithms are an example: had they been developed according to moral design principles, they would probably not push users into closed circles but facilitate open communication, instead of having the polarising effect they have today.
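As a purely illustrative sketch, the comparison of value profiles between groups that Wernaart describes could look something like the small Python example below; the value names, respondent groups, and ratings are invented for this illustration and do not come from the professorship's research.

```python
# Illustrative sketch only: comparing how strongly different respondent groups
# weigh a set of moral values. All names and numbers below are invented.
from statistics import mean

# Hypothetical survey answers: each respondent rates a value from 1 (unimportant) to 5 (essential).
responses = {
    "app_users": [
        {"privacy": 5, "safety": 4, "sustainability": 3},
        {"privacy": 4, "safety": 5, "sustainability": 2},
    ],
    "programmers": [
        {"privacy": 3, "safety": 4, "sustainability": 4},
        {"privacy": 2, "safety": 5, "sustainability": 5},
    ],
}

def value_profile(group):
    """Average rating per moral value for one group of respondents."""
    return {value: mean(r[value] for r in group) for value in group[0]}

profiles = {name: value_profile(group) for name, group in responses.items()}
for name, profile in profiles.items():
    print(name, profile)

# Differences between the profiles point to values that need explicit design
# choices, rather than implicit assumptions by the development team.
```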

Participation and transparency  

So how do you go about it? Wernaart is currently working on a project called the Mobile Moral Lab, with which he visits neighbourhoods in Eindhoven. Residents give their input on techno-ethical issues, such as the moral programming of a traffic app that can influence traffic in the city centre: "People can turn the knobs themselves, and we translate that into core values for drivers. Should clean cars be given more priority so that cleaner driving is rewarded, or should polluting cars be given priority so that they leave the city centre faster and cause less environmental damage? In this way, they can obtain advice that underpins their choices. It also makes the process transparent. And that is important, because techno-ethics is rarely an election theme; technology is often deployed without participation or even awareness of it." The Limburg municipality is an example: "You see an undesirable bias developing in the algorithm used for predictive policing. The cause lies in the design process, where there was not enough participation, in the form of diverse input, to develop a smart and just AI. That is where it often goes wrong; think also of Microsoft's self-learning chatbot Tay, which quickly started denying the Holocaust."
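To make the idea of "turning the knobs" concrete, here is a minimal, purely illustrative sketch of how an aggregated resident preference could be translated into a priority score in such a traffic app; the categories, weights, and scoring rule are invented for this example and are not the actual Mobile Moral Lab model.

```python
# Illustrative sketch only: turning residents' aggregated "knob" setting into
# a priority score for vehicles in the city centre. Categories and the scoring
# rule are hypothetical, not the Mobile Moral Lab's actual model.

def priority_score(emission_class: str, reward_clean_driving: float) -> float:
    """
    emission_class: "clean" or "polluting" (hypothetical categories).
    reward_clean_driving: 0.0 .. 1.0, the residents' aggregated knob setting.
      1.0 = reward clean cars with priority,
      0.0 = let polluting cars leave the centre first to limit local emissions.
    Returns a score that could order vehicles at, say, a smart traffic light.
    """
    if emission_class == "clean":
        return reward_clean_driving
    return 1.0 - reward_clean_driving

# Example: a neighbourhood panel that leans towards rewarding clean driving.
knob = 0.7
print(priority_score("clean", knob))      # 0.7 -> clean cars go first
print(priority_score("polluting", knob))  # 0.3
```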


AI and Moral Design  

Moral design is particularly relevant when there is a moral dilemma, Wernaart explains: "That is when there are multiple ethically defensible choices whose options are mutually exclusive." So if we want AI to become a human-centred technology, we need to do things differently. In the examples where social media platforms censor and a tech giant scans personal data, important values are protected at the expense of others. According to Wernaart, this is where the pain lies: "In the case of Apple, we accept it because we think child abuse is terrible. But what is the next step? We are talking about fundamental rights such as privacy and self-determination. You can ask yourself to what extent a company is allowed to decide on this, and whether such a company should not involve its users much earlier, at the drawing board, when deploying technology to achieve these kinds of goals. That is what moral design is all about."

On 14 October 2021, the Fontys AI Community is organising a debate on AI & Ethics. Students and employees are welcome to attend. Would you like to join us or would you like to know more about the AI Community? Please contact Yvonne van Puijenbroek.

 

 
