Artificial intelligence has the power to shape the future of humanity, yet in the wrong hands it could become a modern echo of medieval injustice. Just as kings once ruled by arbitrary decree, today’s biased algorithms can deepen inequalities and weaken the foundations of social justice. For example, facial recognition technology has been shown to have higher error rates for people with darker skin, which can lead to false identifications in law enforcement (MIT Media Lab, 2018). Unfair artificial intelligence can entrench discrimination in many areas, from hiring to the justice system, and erode society’s sense of justice. Applied correctly, however, the same technology has the potential to redefine justice in the modern world. This article discusses the technical, ethical, and social steps needed to prevent bias in artificial intelligence and to establish fairness in machine learning.
The main cause of unfairness in artificial intelligence systems is bias in the data sets. Such biases lead algorithms to learn and repeat historical discrimination: if past hiring data shows a preference for male candidates, for example, a model trained on that data can disadvantage female candidates and reinforce existing inequalities. Shark (2024) emphasizes this problem, noting that “artificial intelligence systems fed with wrong or incomplete data reproduce inequalities” (p. 1). The most effective way to reduce bias is to use balanced and diverse data sets. In addition, audits of algorithms by independent experts play a key role in identifying and correcting biases, and the ISO/IEC 23053:2022 standard offers a framework for transparency and accountability in machine learning systems. Eliminating data bias and ensuring algorithmic transparency not only make social justice possible but also lay the foundation for equality in the modern world.
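The kind of audit described above can begin with a simple check on the data itself. The sketch below, a minimal illustration with hypothetical hiring data (the group labels, counts, and the `selection_rates` and `disparate_impact` helpers are all assumptions, not from any cited standard), measures whether one group’s positive-outcome rate falls far below another’s — a common first signal that a data set encodes historical bias:

```python
from collections import Counter

def selection_rates(records):
    """Compute the positive-outcome (e.g. hiring) rate per group.

    records: list of (group, hired) tuples, where hired is 0 or 1.
    """
    totals, hires = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact(rates, privileged, protected):
    """Ratio of the protected group's rate to the privileged group's.

    Values well below 1.0 (e.g. under the widely used 0.8
    "four-fifths" threshold) suggest the data encodes a bias.
    """
    return rates[protected] / rates[privileged]

# Hypothetical historical hiring records favoring group "M".
data = [("M", 1)] * 60 + [("M", 0)] * 40 + [("F", 1)] * 30 + [("F", 0)] * 70

rates = selection_rates(data)
ratio = disparate_impact(rates, privileged="M", protected="F")
print(rates)  # {'M': 0.6, 'F': 0.3}
print(ratio)  # 0.5 -- well below 0.8, a red flag for the audit
```

A check like this only flags a disparity; deciding whether it reflects unfair bias, and how to rebalance the data, still requires the human and institutional review the paragraph above calls for.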
Ethical guidelines and training programs are essential if artificial intelligence technologies are to support social justice. For instance, some COVID-19 diagnosis algorithms were trained only on data from Western countries, and their accuracy dropped significantly for people with darker skin tones (Wynants et al., 2021). Such findings show why developers must follow ethical principles. Ethical guidelines not only protect individual rights but also help artificial intelligence serve as an instrument of justice. Approaches supported by training put technology to work for society and help prevent discrimination.
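The disparity described above only becomes visible when accuracy is reported per subgroup rather than as a single overall number. A minimal sketch of that evaluation practice (the group labels, counts, and the `accuracy_by_group` helper are illustrative assumptions, not taken from the cited study):

```python
def accuracy_by_group(examples):
    """Per-group accuracy for labelled predictions.

    examples: list of (group, true_label, predicted_label) tuples.
    Breaking accuracy down by subgroup is what surfaces the kind of
    disparity that a single aggregate score hides.
    """
    correct, total = {}, {}
    for group, y_true, y_pred in examples:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (y_true == y_pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation set: the model performs much worse on group "B",
# yet overall accuracy (0.825) would mask the gap.
evals = (
    [("A", 1, 1)] * 95 + [("A", 1, 0)] * 5 +
    [("B", 1, 1)] * 70 + [("B", 1, 0)] * 30
)
print(accuracy_by_group(evals))  # {'A': 0.95, 'B': 0.7}
```

Making such subgroup reporting a standard step in development pipelines is one concrete way the ethical guidelines discussed above translate into engineering practice.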
International cooperation and legal regulation are essential for applying artificial intelligence technologies fairly and without bias. The European Union’s Artificial Intelligence Act (2021) provides a roadmap for developing artificial intelligence, with a focus on transparency and ethical principles. UNESCO’s (2022) Recommendation on the Ethics of Artificial Intelligence encourages fair technology sharing between countries and provides a framework for adopting ethical standards. Together, these efforts help ensure that artificial intelligence does not become a source of inequality between societies.
The future of machine learning and artificial intelligence should be shaped not only by technical achievement but also by justice and ethical values. From addressing data bias to ensuring algorithmic transparency, every step is critical to making these technologies tools that support human rights and social equality. Guided properly, artificial intelligence can spread the light of justice around the world; otherwise, the dark shadows of the medieval era may find new life in modern technology.
References
European Commission. (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. EUR-Lex. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
International Organization for Standardization. (2022). ISO/IEC 23053:2022: Framework for artificial intelligence (AI) systems using machine learning (ML). https://www.iso.org/standard/
MIT Media Lab. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. https://www.media.mit.edu/
Shark, A. R. (2024). Making AI better than us: What could possibly go wrong? National Academy of Public Administration.
UNESCO. (2022). Recommendation on the ethics of artificial intelligence. United Nations Educational, Scientific and Cultural Organization. https://www.unesco.org/open-access/terms-use-ccbyncsa-en
Wynants, L., et al. (2021). Prediction models for diagnosis and prognosis of COVID-19: Systematic review and critical appraisal. The BMJ. https://doi.org/10.1136/bmj.m1328

