Decisions and outcomes generated by algorithms can lead to discrimination against individuals and demographic groups when the algorithms are not built and deployed properly. However, monitoring and assessing the fairness of models is a daunting task, requiring multidisciplinary collaboration between stakeholders such as data scientists, domain experts, and end users.
To provide a practical set of instruments for assessing the fairness of a model and minimizing algorithmic harms that affect citizens, Amsterdam Intelligence recently released the Fairness Handbook. The handbook offers an introduction to algorithmic fairness and bias for everyone whose work involves data and/or algorithms. It explains how biases and other problems in the model development cycle can cause several forms of harm that, in turn, affect individuals or disadvantaged groups in society. With the Fairness Pipeline, we then offer a step-by-step plan for evaluating a model for biases and mitigating these problems.
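The handbook itself lays out the pipeline steps in detail; purely as a rough illustration of what a first bias check on a deployed classifier might look like, the sketch below computes per-group selection rates and a demographic parity gap. The toy data, function names, and choice of metric are assumptions for this example and not the handbook's prescribed method.

```python
import pandas as pd


def selection_rates(y_pred: pd.Series, group: pd.Series) -> pd.Series:
    """Share of positive predictions per demographic group."""
    return y_pred.groupby(group).mean()


def demographic_parity_difference(y_pred: pd.Series, group: pd.Series) -> float:
    """Largest gap in selection rates between any two groups."""
    rates = selection_rates(y_pred, group)
    return float(rates.max() - rates.min())


# Hypothetical toy data: binary model outputs and a sensitive attribute.
predictions = pd.Series([1, 0, 1, 1, 0, 0, 1, 0])
groups = pd.Series(["a", "a", "a", "a", "b", "b", "b", "b"])

print(selection_rates(predictions, groups))                 # group a: 0.75, group b: 0.25
print(demographic_parity_difference(predictions, groups))   # 0.5
```

A large gap such as the one above would be a signal to investigate further, for example by inspecting the training data and error rates per group, before turning to mitigation techniques.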
https://openresearch.amsterdam/nl/page/87589/the-fairness-handbook