Algorithmic impact assessments: A view beyond ethics
Abstract
Systems that use artificial intelligence (AI) will be the norm in the future. However, human intervention remains crucial when addressing the ethical and legal issues that arise from the judgments these systems make. The question is how to use, develop, and research AI with a social focus through algorithmic impact assessment (AIA), a methodology that helps evaluate the potential risks associated with AI. Any organization can take advantage of this methodology to increase confidence in its AI applications. Yet despite efforts to integrate such self-regulatory strategies, in some industries ethical principles fall short in practice. This article analyzes the AIA methodology and its social benefit, drawing on an ethical-legal toolkit that was tested in order to call on regulators to specify rules and standards addressing the implications of this technology.
Copyright (c) 2023 María Lorena Flórez Rojas (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Authors who publish with this journal accept the following terms:
a. Authors will retain their copyright and grant the journal the right of first publication of their work, which will be simultaneously licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0), which allows third parties to share the work provided that its author and its first publication in this journal are credited.
Under this open access license, readers (users) can:
- Share — copy and redistribute the material in any medium or format
- Adapt — remix, transform, and build upon the material
Under the following terms:
- Attribution — Users must give appropriate credit, provide a link to the license, and indicate if changes were made. They may do so in any reasonable manner, but not in any way that suggests the licensor endorses them or their use.
- NonCommercial — Users may not use the material for commercial purposes.
- ShareAlike — If users remix, transform, or build upon the material, they must distribute their contributions under the same license as the original.
- No additional restrictions — Users may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
b. Authors may adopt other non-exclusive license agreements for the distribution of the published version of the work (e.g., depositing it in an institutional repository or publishing it in a monographic volume), provided that its initial publication in this journal is indicated.
c. Authors are allowed and encouraged to disseminate their work online (e.g., in institutional repositories or on their own website) before and during the submission process, as this can lead to productive exchanges and increase citations of the published work (see The Effects of Open Access).