In our workshop at WeTechTogether, Tanya Golubev and Anastasia Stamatouli showed that this does not always have to be the case.
Unfortunately, the market still focuses too much on accuracy, and accuracy is wrongly used as an indicator of a robust AI system.
The workshop showed that whether you do nothing with the data (e.g. leave it full of gender and race bias) or apply three different bias-mitigation approaches, the accuracy remains the same.
Of course, this is a toy example, an AI model trained on publicly available data, but it shows how misleading accuracy is as a measure of quality. Clearly, the data to which the mitigation measures have been applied is of much higher quality and leads to less discrimination, yet the accuracy figure barely changes.
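To make the point concrete, here is a minimal sketch of that kind of experiment, not the workshop's actual code. It assumes the Fairlearn library and the public Adult census income dataset, uses "sex" as the sensitive attribute, and applies one mitigation approach (a demographic parity constraint) as a stand-in for the three approaches shown in the workshop; typically the fairness gap shrinks while accuracy stays roughly the same.

```python
# Illustrative sketch (assumed setup, not the workshop's code):
# compare accuracy and demographic parity with and without bias mitigation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.datasets import fetch_adult
from fairlearn.metrics import demographic_parity_difference
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Publicly available census income data; "sex" serves as the sensitive attribute.
data = fetch_adult(as_frame=True)
X = pd.get_dummies(data.data.drop(columns=["sex"]))
y = (data.target == ">50K").astype(int)
sex = data.data["sex"]

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, sex, test_size=0.3, random_state=0, stratify=y
)

# Baseline: no bias mitigation at all.
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
y_base = baseline.predict(X_te)

# Mitigated: the same estimator wrapped in a demographic-parity constraint.
mitigated = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
mitigated.fit(X_tr, y_tr, sensitive_features=s_tr)
y_fair = mitigated.predict(X_te)

for name, pred in [("baseline", y_base), ("mitigated", y_fair)]:
    print(
        name,
        "accuracy:", round(accuracy_score(y_te, pred), 3),
        "demographic parity difference:", round(
            demographic_parity_difference(y_te, pred, sensitive_features=s_te), 3
        ),
    )
```

The comparison is the point: a single accuracy number hides the difference between the two models, while the fairness metric makes it visible.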
But is technical bias mitigation sufficient to ensure fairness? Obviously not.
The recent AI tax scandal in the Netherlands shows that it is essential for a company or institution to be aware not only of the capabilities and limitations of the technical solution being used, but also of the capabilities and limitations of the people using it. Clear procedures and proper training are crucial, because a human in the loop is not a mitigation measure per se if that human does not check or question the results of the AI system.
There is a need to go a step further.
Therefore, companies and institutions need to have many discussions about different ethical aspects. Realising trustworthy AI means, among other things, making the decision-making process transparent, establishing accountability, and providing the ability to contest and effectively challenge decisions made by AI systems and the humans who operate them.
Companies and institutions that are not prepared to address both dimensions of fairness (the technological and the procedural) run the risk of causing significant damage that would also destroy any trust in these amazing technologies.
Prevention is better than cure. That is nothing new.