From hiring processes to health diagnoses, from credit evaluations to court verdicts, machine learning systems shape human lives more with each passing day. However, the algorithms behind these systems often carry hidden biases. This not only mirrors existing inequalities but also turns them into a structural problem by reinforcing them further. Worse still, without transparency, machine learning applications have the potential to become the digital tyrants of the modern age. If bias and transparency issues are not resolved promptly, the AI systems of a super-intelligent age may produce critical inequalities, oppression, and injustice comparable to those wrought by the tyrannical regimes of the Middle Ages.
One of the main sources of bias in algorithms lies in the decisions made at the design stage. In other words, bias originates less from the system itself than from the people who create it. We, the people, determine which data count as unbiased, which criteria count as fair, and which outcomes are truly equitable. Michael Kearns and Aaron Roth emphasize that algorithms should be developed not only for computational accuracy but also according to the principles of fairness, accuracy, transparency, and ethics. This approach, known as FATE, holds that algorithms should rest on social responsibility as much as on processing power (Kearns and Roth 22). As long as the values of FATE go unrealized, algorithms will not merely make our decisions; they will also write our destiny.
China’s social credit system is a striking example. Its algorithms classify behaviors as “good” or “bad” and translate them into citizen scores. Individuals who do not conform to the “social order” that the designers have tailored to their own preferences are automatically placed in risk categories. As Rogier Creemers notes, the system is not merely a surveillance apparatus but the construction of a new citizenship regime shaped by datafied behavior (Creemers 3). While it ostensibly offers a Confucian approach built on notions of order, harmony, and collective benefit, its algorithms embed the ethical preferences of the designers into the system. Individuals who do not know how they are being scored, and who must live within the system without any right to object, can neither be considered free nor feel safe.
Another source of bias is the data on which an algorithm is trained. These datasets often reflect historical, social, or institutional prejudices, so any discrimination present in the data becomes embedded in the model built from it. For instance, the facial recognition dataset made available for testing purposes by the US National Institute of Standards and Technology (NIST) consists of mugshots. Images classified as “measurable facial data” are used without the consent of the individuals, regardless of whether they are guilty (Crawford 91). Beyond the ethical problems it raises, data used without consent and in breach of human rights can also distort representation and thereby create bias.
When algorithms are tested on such datasets, certain ethnic groups receive accurate results while minority groups receive far less accurate ones. A 2019 NIST report states that some facial recognition systems produce 10 to 100 times more false matches for Black individuals (NIST 2).
The Robert Williams case, which took place in Detroit in 2020, is a notable example. Despite his innocence, Williams was wrongfully arrested in front of his family after a facial recognition algorithm produced a false match. As one of the first publicly known wrongful arrests caused by an incorrect face match, the case paved the way for a historic outcome: following a 2024 civil rights lawsuit, the Detroit Police Department acknowledged that the facial recognition system in question was error-prone, especially with Black individuals, and announced restrictions on the use of such technology (American Civil Liberties Union). The incident showed that algorithms can intervene directly in human lives and in the delivery of justice.
Another common cause of bias in algorithms is the feedback loop. Decisions made by an algorithm shape real-world practice, and the data generated by that practice is fed back into the algorithm, reinforcing its original tendencies. In this way, an initially biased dataset hardens into a more persistent and profound bias. Such loops show that algorithms are not neutral but closed structures that reproduce their own effects (FRA 8-9). Cathy O’Neil describes such systems as “self-reproducing, destructive, and almost undetectable structures” (O’Neil 16).
The visa-streaming algorithm used by the UK Home Office between 2015 and 2020 is a remarkable example of the feedback-loop problem. The system, which sorted visa applicants into “green,” “yellow,” and “red” streams, produced a risk score based on nationality and past violations. Applicants from certain countries were systematically placed in the “high risk” category; as a result, rejection rates rose, and the same countries kept reappearing as a “problem.” The system was scrapped following a legal challenge by the Joint Council for the Welfare of Immigrants (JCWI) and public backlash (The Guardian). Still, more than a hundred thousand people had been affected; some lost educational opportunities, while others lost the chance to reunite with their families. Whether similar systems are in use elsewhere today is unknown. Digital classification by nationality or origin restricts not only visa applications but also the universality of justice.
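The mechanism can be made concrete with a toy simulation. The sketch below is purely illustrative: the scoring rule and all numbers are hypothetical assumptions, not details of the Home Office system or of any cited source. Two countries are equally “risky” in reality, but one starts with a slightly higher risk score; because every rejection is logged as a past “problem” and fed back into the score, both rejection rates climb and the gap between the countries widens on its own.

```python
# Toy feedback-loop simulation (illustrative only; the scoring rule and all
# numbers are hypothetical, not taken from the Home Office system).
# Countries A and B are equally "risky" in reality, but B starts with a
# slightly higher risk score. Each rejection is recorded as a past "problem"
# and fed back into the score, so the bias reinforces itself year after year.

APPLICANTS_PER_YEAR = 1000
risk_score = {"A": 1.0, "B": 1.2}  # small initial bias against country B

for year in range(2015, 2021):
    rejections = {}
    for country in ("A", "B"):
        # Rejection probability grows with the accumulated risk score.
        p_reject = min(0.9, 0.05 * risk_score[country])
        rejections[country] = round(APPLICANTS_PER_YEAR * p_reject)
        # Feedback step: every rejection counts as new evidence of "risk".
        risk_score[country] += rejections[country] / 200
    print(year, rejections)
```

Nothing in the loop ever tests whether the underlying risk actually differs; the system simply keeps confirming its own starting assumption.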
Algorithmic biases, made all the more dangerous by a lack of transparency, threaten individuals’ right to a fair evaluation. Anyone who does not know the reason behind a decision will struggle to understand its consequences and will be unable to object; in the end, the fairness of the decision cannot be assessed at all. Today, many algorithms operate in exactly this opaque manner. The European Union Agency for Fundamental Rights stresses that opacity in decision-making disables legal appeal mechanisms and damages social trust, ethical responsibility, and democratic control (FRA 36).
In algorithmic systems, opacity can be either deliberate or unintentional. Deliberate opacity arises when systems are closed to external audit, by companies on the grounds of trade secrets and by government agencies on the grounds of security (Pasquale 6).
The childcare benefits scandal in the Netherlands, which affected nearly 26,000 families, is a prominent example. An algorithm used between 2013 and 2019 falsely accused thousands of immigrant families of “fraud,” cutting off state benefits and demanding repayments. According to Amnesty International’s Xenophobic Machines report, the algorithm, designed to detect fraudulent childcare benefit applications, reinforced an institutional bias linking race and ethnicity to crime. As a result, many immigrant families were branded “fraudsters” by the tax authorities without any concrete justification. For years the victims, not knowing what had gone wrong, were unable to appeal and were left in debt. The Dutch government resigned in 2021 amid the public outcry, yet the harm went uncompensated for years (Amnesty International). Opaque “black box” systems of this kind, whose inputs and calculations are invisible, cause grave injustice.
By contrast, the lack of transparency in some systems is unintentional, most often the result of technical complexity: with deep learning models in particular, even the developers cannot explain the algorithm’s decisions. Technical complexity, however, should not become an excuse for avoiding ethical responsibility (Kearns and Roth). Similar problems arise in large language models. According to a 2022 FRA report, some models rate the phrase “I am Muslim” as more offensive than equivalent statements about other religious identities (FRA 61). This is a serious problem in which technical and social bias reproduce each other. Accountability is therefore both a technical and a democratic necessity, because accountable AI systems make an individual’s position in the digital world defensible.
From choices made at the design stage to inequalities in datasets, from feedback loops to opaque algorithms, injustice in AI systems arises from a combination of factors that grow deeper and harder to detect over time. The solution must therefore be multidisciplinary, going beyond technical fixes and operating within a framework of justice, transparency, and accountability, globally and across both the private and public sectors.
The experts who develop algorithms should have a strong awareness of ethical principles. Accordingly, internationally standardized, interdisciplinary ethics training programs should be developed and made mandatory for developers and product managers. Basic ethical principles such as fairness, transparency, and accountability should be built into systems at the design stage, at the level of code. This should be a legal requirement, especially in high-risk areas (Kearns and Roth; FRA).
Additionally, algorithms should be tested not only for accuracy but also for fairness and user experience. These tests should be audited by international accreditation bodies, and systems should be certified impartially. Open-source ethical auditing toolkits such as IBM’s AI Fairness 360 should be adopted widely so that developers have ready access to bias-testing platforms (IBM; ISO).
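As an illustration of what such a toolkit checks, the sketch below uses AI Fairness 360 to compute two standard group-fairness metrics on a small synthetic dataset. The column names and numbers are invented for the example; the class and method names follow AIF360’s published Python API, which may differ between versions.

```python
# Minimal bias-test sketch with IBM's open-source AI Fairness 360 toolkit.
# The data is synthetic and the column names are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Synthetic decisions: 'group' is the protected attribute (1 = majority,
# 0 = minority); 'approved' is the outcome produced by some model.
df = pd.DataFrame({
    "group":    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "approved": [1, 1, 1, 0, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity;
# values below roughly 0.8 are commonly treated as a warning sign).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap between the two approval rates.
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data the majority group is approved 80% of the time and the minority group 20% of the time, so the disparate impact ratio is 0.25, well below the conventional 0.8 threshold.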
People from diverse socio-cultural backgrounds, together with civil society representatives, should be included in the design teams of algorithms that carry high social risk. Furthermore, in line with a principle of “cautious bias,” error margins should be tightened for minority groups in particular, and additional control mechanisms should be put in place. Early-warning systems that detect ethical violations before they cause harm should be integrated (Kearns and Roth; FRA; ISO).
Data is the foundation of AI. If it is not collected carefully and comprehensively from a variety of sources, however, the data can be biased and produce deceptive results. Social representation should therefore be taken into account during data collection, and individuals’ consent must be obtained before their personal data are used. Techniques such as resampling and weighting should be applied to address representativeness problems, and ethical responsibility must extend across the entire data life cycle (Kearns and Roth; Crawford; ISO).
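One simple form of the weighting mentioned above can be sketched as follows. This is a generic inverse-frequency scheme with invented group labels and counts, offered as an illustration rather than a prescription from the cited sources: each training example is weighted inversely to its group’s share of the data so that an under-represented group is not drowned out during training.

```python
# Inverse-frequency reweighting sketch (group labels and counts are invented).
from collections import Counter

samples = ["majority"] * 900 + ["minority"] * 100  # a 90% / 10% split
counts = Counter(samples)
n_total, n_groups = len(samples), len(counts)

# Weight each example so that every group contributes equally in total:
# weight = n_total / (n_groups * group_count)
weights = {group: n_total / (n_groups * count) for group, count in counts.items()}
print({g: round(w, 2) for g, w in weights.items()})  # {'majority': 0.56, 'minority': 5.0}

# The per-example weights can then be passed to a learner, for instance via
# the sample_weight argument that many scikit-learn estimators accept.
sample_weight = [weights[group] for group in samples]
```

Resampling works toward the same goal by duplicating or subsampling examples instead of weighting them.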
Since the biggest problem with feedback loops is that biases are reinforced over time, the impact of algorithmic decisions should be analyzed regularly. Social impact assessments should be conducted, especially for disadvantaged groups, and erroneous learning should be prevented through filtering tools (O’Neil; FRA).
Transparency is the foundation of both accountability and democratic rights. Systems should not be closed off on the grounds of trade secrets or technical complexity. Algorithms should be open to independent auditors and accessible to the public in an understandable form. At the same time, the risk of “gaming the system” must be considered: while ensuring transparency, the gaps that would allow malicious users to manipulate the system should be closed. Access to code and data should therefore be controlled, tiered by user profile, and multi-layered (Pasquale; Kearns and Roth).
Ensuring fairness, transparency, and accountability requires a strong ethical and legal framework alongside technical solutions. International legal initiatives such as the European Union’s AI Act, together with sanctions imposed by public authorities, attempt to meet the need for regulation in the rapidly developing field of AI. In the private sector, the ethical principles published by large technology companies such as Google and IBM serve a similar role (European Parliament; Google; IBM).
Still, most of these measures are voluntary. Because they carry no power of sanction, they cannot offer comprehensive, binding solutions and struggle to keep pace with the technology. First, an international consensus should be established on common ethical and legal principles for AI, and these principles should be set down in a binding agreement signed by all parties. Each country should then enact national AI laws, integrating those principles into its legal system under a dedicated Ministry of Artificial Intelligence. Such a ministry should guide the private sector through licensing of AI activities and have the authority to detect and halt faulty practices. In this way, the future of AI becomes a universal responsibility shaped by all societies.
AI affects nearly every aspect of our lives in unprecedented ways, and it naturally brings ethical, legal, and social problems with it. Because this technology is new to us, we are still learning. Nonetheless, with common sense and cooperation we can steer it in the right direction, just as we have done with major problems in the past, and with concerted steps we can turn AI into a system that works for the benefit of humanity. There are only two paths ahead: this technological evolution will either become an opportunity for humanity or drag us into unmanageable chaos.
Works Cited
American Civil Liberties Union. Williams v. City of Detroit: Face Recognition False Arrest. ACLU, 2020, https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest. Accessed 10 May 2025.
Amnesty International. Xenophobic Machines: Discrimination through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal. 4 Oct. 2021, www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal. Accessed 10 May 2025.
Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press, 2021.
Creemers, Rogier. “China’s Social Credit System: An Evolving Practice of Control.” Iberchina, 2018, https://www.iberchina.org/files/2018/social_credit_china.pdf. Accessed 10 May 2025.
European Union Agency for Fundamental Rights (FRA). Bias in Algorithms: Artificial Intelligence and Discrimination. FRA, 2022, https://fra.europa.eu/sites/default/files/fra_uploads/fra-2022-bias-in-algorithms_en.pdf. Accessed 10 May 2025.
European Parliament. “EU AI Act: First Regulation on Artificial Intelligence.” European Parliament, 1 June 2023, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 10 May 2025.
Google. Artificial Intelligence at Google: Our Principles. Google, 2018, https://ai.google/responsibility/principles/.
“Home Office to Scrap ‘Racist Algorithm’ for UK Visa Applicants.” The Guardian, 4 Aug. 2020, https://www.theguardian.com/uk-news/2020/aug/04/home-office-to-scrap-racist-algorithm-for-uk-visa-applicants. Accessed 10 May 2025.
IBM Research. “Introducing AI Fairness 360.” IBM Research Blog, 19 Sept. 2018, https://research.ibm.com/blog/ai-fairness-360. Accessed 10 May 2025.
International Organization for Standardization. ISO/IEC 42001:2023 Artificial Intelligence – Management System – Requirements. ISO, 2023.
Kearns, Michael, and Aaron Roth. The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Oxford University Press, 2019.
National Institute of Standards and Technology. “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software.” NIST, 19 Dec. 2019, https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software. Accessed 10 May 2025.
O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing, 2016.
Pasquale, Frank. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press, 2015.