Algorithmic Discrimination in Public Policy: A Case Study of the Dutch Childcare Benefits Scandal
| Thesis title in Czech: | Algoritmická diskriminace ve veřejné politice: případová studie nizozemského skandálu s dávkami v péči o děti |
|---|---|
| Title in English: | Algorithmic Discrimination in Public Policy: A Case Study of the Dutch Childcare Benefits Scandal |
| Keywords in Czech: | Algoritmické Rozhodování, ADM, Algoritmická Odpovědnost, Nizozemský Případ Dětských Přídavků, Spravedlnost, Transparentnost, Srozumitelnost, Institucionální Dohled, Algoritmické Vládnutí |
| Keywords in English: | Algorithmic Decision-Making, ADM, Algorithmic Accountability, Dutch Childcare Benefits Scandal, Fairness, Transparency, Explainability, Institutional Oversight, Algorithmic Governance |
| Academic year of announcement: | 2023/2024 |
| Thesis type: | master's thesis |
| Language of the thesis: | English |
| Department: | Department of Public and Social Policy (23-KVSP) |
| Supervisor: | Mirna Jusić, M.A., Ph.D. |
| Author: | hidden - assigned by supervisor |
| Date of registration: | 30.06.2024 |
| Date of assignment: | 30.06.2024 |
| Date and time of the defence: | 15.09.2025 09:00 |
| Venue of the defence: | Areál Jinonice, C221, ISS seminar room |
| Date of electronic submission: | 29.07.2025 |
| Date of the defence: | 15.09.2025 |
| Opponents: | PhDr. Petr Witz, Ph.D. |
| List of academic literature |
- AlgorithmWatch. (2020). Automating Society Report 2020.
- Amnesty International. (2021). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal. Amnesty International Netherlands.
- Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There's software used across the country to predict future criminals. And it's biased against Blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
- Autoriteit Persoonsgegevens. (2020). Onderzoek Belastingdienst/Toeslagen: De verwerking van de nationaliteit van aanvragers van kinderopvangtoeslag.
- Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning: Limitations and opportunities. https://fairmlbook.org/
- Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity Press.
- Bovens, M. (2007). Analysing and assessing accountability: A conceptual framework. European Law Journal, 13(4), 447–468.
- Bovens, M., & Schillemans, T. (2014). Meaningful accountability. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford Handbook of Public Accountability (pp. 1–20). Oxford University Press.
- Denzin, N. K., & Lincoln, Y. S. (2011). The SAGE handbook of qualitative research (4th ed.). Sage Publications.
- Diakopoulos, N. (2015). Algorithmic accountability reporting: On the investigation of black boxes. Tow Center for Digital Journalism.
- Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56–62. https://doi.org/10.1145/2844110
- Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press.
- Diakopoulos, N., & Friedler, S. A. (2021). How to hold algorithms accountable. MIT Sloan Management Review, 62(2), 1–5.
- Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
- Dubnick, M. J. (2005). Accountability and the promise of performance: In search of the mechanisms. Public Performance & Management Review, 28(3), 376–417.
- ECNL. (2022). Netherlands sets precedent for human rights safeguards in use of AI. European Center for Not-for-Profit Law. https://ecnl.org/news/netherlands-sets-precedent-human-rights-safeguards-use-ai
- Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
- European Commission. (2019). Standard Eurobarometer 91: Public opinion in the European Union – Spring 2019. https://europa.eu/eurobarometer/surveys/detail/2255
- European Union. (2016). General Data Protection Regulation, Article 22: Automated individual decision-making, including profiling.
- Fenster, M., Ananny, M., & Diakopoulos, N. (2020). Transparency. In The Oxford Handbook of Ethics of AI. Oxford University Press.
- Fuster, A., Plosser, M., Schnabl, P., & Vickery, J. (2022). Predictably unequal? The effects of machine learning on credit markets. Journal of Finance, 77(1), 5–47.
- Hood, C. (2010). Accountability and transparency: Siamese twins, matching parts, awkward couple? West European Politics, 33(5), 989–1009.
- Konaté, S., & Pali, B. (2023). "You have to talk with us, not about us": Exploring the harms of wrongful accusation and possibilities for transformative justice in the Dutch childcare-benefits scandal. Revista de Victimología / Journal of Victimology, 16, 139–164. https://pure.uva.nl/ws/files/136071877/276_767_1_PB.pdf
- Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165(3), 633–705.
- Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
- Lipsky, M. (1980). Street-level bureaucracy: Dilemmas of the individual in public services. Russell Sage Foundation.
- Lipton, Z. C. (2018). The mythos of model interpretability. Communications of the ACM, 61(10), 36–43.
- Margetts, H., & Dunleavy, P. (2013). The second wave of digital-era governance: A quasi-paradigm for government on the web. Philosophical Transactions of the Royal Society A, 371(1987).
- OECD. (2019). OECD Digital Government Review of Sweden. OECD Publishing.
- OECD. (2021). The OECD AI Policy Observatory.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.
- Oswald, M., Grace, J., Urquhart, L., & Hutton, W. (2018). Algorithmic risk assessment policing models: Lessons from the Durham HART model and 'experimental' proportionality. Information & Communications Technology Law, 27(2), 223–250.
- Oxford University Press. (n.d.). AI. In Oxford Learner's Dictionaries. Retrieved March 9, 2025, from https://www.oxfordlearnersdictionaries.com/definition/english/ai
- Oxford University Press. (n.d.). Machine learning. In Oxford Learner's Dictionaries. Retrieved March 9, 2025, from https://www.oxfordlearnersdictionaries.com/definition/english/machine-learning
- Parliamentary Interrogation Committee on Childcare Allowance. (2020). Ongekend onrecht: Verslag parlementaire ondervragingscommissie Kinderopvangtoeslag. Tweede Kamer der Staten-Generaal.
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.
- Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 ACM Conference on Fairness, Accountability, and Transparency, 469–481.
- Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. NYU Law Review, 94, 192–233.
- Romzek, B. S., & Dubnick, M. J. (1987). Accountability in the public sector: Lessons from the Challenger tragedy. Public Administration Review, 47(3), 227–238.
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
- Schillemans, T. (2011). Does horizontal accountability work? Evaluating potential remedies for the accountability deficit of agencies. Administration & Society, 43(4), 387–416.
- Snellen, I. (2005). ICTs, bureaucracies, and the future of democratic governance. In V. Bekkers, H. van Duivenboden, & M. Lips (Eds.), Governance and the democratic deficit. Routledge.
- Starke, C., et al. (2022). Fairness perceptions of algorithmic decision-making. Big Data & Society.
- Veale, M., & Brass, I. (2019). Administration by algorithm? Public management meets public sector machine learning. Public Administration Review, 79(6), 811–825.
- Weller, A. (2019). Transparency: Motivations and challenges. In Explainable AI: Interpreting, explaining and visualizing deep learning.
- Yanow, D., & Schwartz-Shea, P. (2014). Interpretive research design: Concepts and processes. Routledge.
- Yeung, K. (2018). Algorithmic regulation: A critical interrogation. Regulation & Governance, 12(4), 505–523.
- Zarsky, T. Z. (2016). The trouble with algorithmic decisions: An analytic roadmap to examine efficiency and fairness in automated and opaque decision-making. Science, Technology, & Human Values, 41(1), 118–132.
- Ziewitz, M. (2016). Governing algorithms: Myth, mess, and methods. Science, Technology, & Human Values, 41(1), 3–16.
| Preliminary scope of the thesis in English |
A. Research Problem Definition
The increasing integration of algorithmic decision-making (ADM) systems in public policy aims to enhance administrative efficiency, consistency, and objectivity (OECD, 2021; AlgorithmWatch, 2020). However, real-world implementations often fall short of these ideals, raising serious concerns about fairness, transparency, and accountability (Diakopoulos, 2016; Pasquale, 2015; Eubanks, 2018). The Dutch childcare benefits scandal is a stark example of how ADM systems can result in severe injustices when deployed without sufficient oversight (Amnesty International, 2021; Parliamentary Interrogation Committee on Childcare Allowance, 2020). In this case, a risk classification algorithm used by the Dutch Tax Authority wrongly accused thousands of families of fraud, leading to devastating personal, social, and economic consequences (Autoriteit Persoonsgegevens, 2020; Konaté & Pali, 2023).

This scandal reveals critical limitations in the transparency and explainability of ADM systems, especially when they operate as black boxes within bureaucratic institutions (Diakopoulos, 2015; Pasquale, 2015). It also illustrates how algorithmic tools can perpetuate historical biases and institutionalize discriminatory practices when fairness is not actively considered in design and deployment (Barocas, Hardt, & Narayanan, 2019; Eubanks, 2018; Benjamin, 2019). Existing literature demonstrates that algorithmic bias can originate from data, design choices, institutional incentives, and broader socio-political contexts (Barocas, Hardt, & Narayanan, 2019; Eubanks, 2018; O'Neil, 2016; Ananny & Crawford, 2018).

As ADM becomes increasingly embedded in public sector decision-making, it is important to understand how such systems fail, why accountability breaks down, and what governance mechanisms are needed to safeguard democratic values (Yeung, 2018; Ziewitz, 2016; Diakopoulos & Friedler, 2021).
This thesis investigates the Dutch childcare benefits scandal through the lens of algorithmic accountability, drawing on Diakopoulos's framework of fairness, transparency, and explainability (Diakopoulos, 2015, 2019). It also considers the role of algorithmic governance and the institutional factors that contributed to the failure (Ziewitz, 2016; Yeung, 2018). The study seeks to contribute to ongoing debates on how ADM can be aligned with principles of just governance (Bovens & Schillemans, 2014; Eubanks, 2018).

Research Problem
This research investigates how the Dutch childcare benefits algorithm failed to uphold key principles of algorithmic accountability (fairness, transparency, and explainability), resulting in widespread discrimination and harm. It seeks to understand the institutional, technical, and governance factors that contributed to these failures and to explore how similar risks can be mitigated in future public-sector ADM systems.

B. Objectives
- To evaluate why and to what extent the Dutch childcare benefits algorithm failed to meet the principles of fairness, transparency, and explainability outlined in Diakopoulos's Algorithmic Accountability Framework.
- To identify key lessons from the Dutch childcare benefits scandal that can inform the development of more accountable and ethically grounded algorithmic decision-making practices in public governance.

C. Research Questions
- Why and to what extent did the Dutch childcare benefits algorithm fail to meet Diakopoulos's principles?
- What lessons can be learned from this case to enhance algorithmic accountability in public decision-making?

D. Theoretical Concept
This thesis builds its analysis on two interrelated theoretical frameworks: Nicholas Diakopoulos's Algorithmic Accountability Framework and the broader concept of algorithmic governance. Diakopoulos outlines key normative principles (fairness, transparency, and explainability) that serve as benchmarks for evaluating ADM systems.
These principles guide the empirical case study analysis and the interpretation of interview and secondary data. Fairness is understood both procedurally (just decision-making processes) and distributively (equitable outcomes), recognizing that algorithms often replicate social biases embedded in data. Transparency involves making both the model and its governance processes visible and comprehensible. Explainability refers to the capacity of a system to provide accessible and meaningful reasons for its decisions. Complementing this, the concept of algorithmic governance helps explain the institutional and political conditions under which ADM systems are adopted and deployed. It highlights how such systems shift responsibility, reduce public negotiability, and embed existing power asymmetries into automated decision processes (Yeung, 2018; Ziewitz, 2016). Together, these theories enable a holistic investigation into both the technical failures and the institutional dynamics that contributed to the Dutch scandal, and they inform recommendations for more accountable algorithmic governance.

E. Research Plan
This research employs a qualitative case study methodology focused on the Dutch childcare benefits scandal. Primary data were collected through semi-structured interviews with victims, journalists, legal experts, and algorithmic governance experts. Secondary sources include government documents, official reports, and academic studies. The analysis applies Diakopoulos's framework to evaluate how the ADM system performed with regard to fairness, transparency, and explainability. It also incorporates perspectives from the algorithmic governance literature to contextualize the institutional and political conditions that shaped system deployment and oversight. Data were thematically coded and triangulated across sources. The findings aim to identify accountability gaps, highlight systemic risks, and propose practical strategies for more ethical ADM practices in the public sector.
- assigned by supervisor