Spiegeloog 434: Transformation

Every Datapoint is a Family (Too)

By Zhen Cong, September 30, 2024

With recent artificial intelligence (AI) developments dominating headlines and discussions, people and organizations are rushing to integrate AI into various aspects of life. AI systems have made the leap from performing routine tasks (e.g., customer service chatbots, forecasting) to supervising human employees (Technica, 2024; Lanz et al., 2024). For example, gig economy companies such as Uber commonly rely on AI to instruct their workers and even punish them for alleged misbehavior (e.g., automated bans for low driver ratings or low passenger acceptance rates) (Möhlmann & Zalmanson, 2017; Möhlmann et al., 2021; Lanz et al., 2024).


Photo by Chris Liverani

Automation Bias

A possible psychological mechanism that can explain the corporate hype around automated technologies and AI is automation bias. Automation bias occurs when users place too much trust in the reliability and accuracy of AI performance, often preferring algorithmic recommendations over human advice (i.e., algorithmic appreciation), leading to excessive dependence on the technology (Skitka, 2011; Jones-Jang & Park, 2023). However, while AI promises efficiency and innovation with its advanced capabilities, these systems remain probabilistic models that work with estimates and sometimes make mistakes (Pascual, 2024). Rushing the implementation of AI in high-responsibility roles can lead to significant consequences that are not merely hypothetical future concerns but problems already present today.

Cases of Misuse of AI systems

Several media reports have already revealed that prominent organizations adopted AI applications that adversely affected marginalized groups. For example, Amazon was forced to scrap its AI recruiting tool after discovering it was biased against women, favoring applicants who used “masculine language” in their resumes (Dastin, 2018; Lanz et al., 2024).

In public administration and policing, AI is advertised as providing efficient solutions and decreasing workload. However, unchecked AI implementation has led to high-profile errors, such as the “Toeslagenaffaire”, the child care benefits scandal in the Netherlands (Amnesty International, 2021; Heikkilä, 2022). In 2013, Dutch tax authorities used a self-learning algorithm to create risk profiles for identifying child care benefits fraud. Indicators such as dual nationality or low income were incorrectly classified as risk factors by the algorithm. This resulted in harsh penalties: tens of thousands of often lower-income, ethnic-minority families were falsely accused of tax fraud and driven into poverty, leading to unemployment, debt, and forced evictions. Tragically, some victims committed suicide, and more than a thousand children were placed into foster care (Amnesty International, 2021; Heikkilä, 2022). Similarly, Amsterdam’s Top 400 and Top 600 programs, automated lists of people deemed at risk of criminal behavior, have been criticized for ethnic profiling and for reinforcing institutional racism. A senior policy advisor at the European Digital Rights network (EDRi) says these automated risk modeling and profiling systems “treat everyone as suspects to some extent”, yet the approach “disproportionately targets racialized individuals, those perceived as migrants or terrorists, and people from poorer, working-class communities” (Reuters, 2021).

“AI’s potential for greater efficiency is marred by the danger of accelerating the speed of violence when humans are reduced to data points.”

Perhaps most concerning is AI’s growing use in the military. Wopke Hoekstra, then Minister of Foreign Affairs of the Netherlands, stated: “The rise of AI is one of the greatest future challenges in international security and arms control.” (Informatie Rijksoverheid, 2023). During the Israeli military’s campaign in Gaza, its reliance on the Gospel and Lavender AI systems, designed to track and bomb individuals marked as targets when they are at home, has been criticized for making killing appear more objective, resulting in, as of December 2023, more Palestinians killed than in all previous Palestinian-Israeli clashes since the start of the First Intifada combined (Abraham, 2024; Byman et al., 2024; Elidrissi, 2024; Pascual, 2024; Pratt, 2024). AI’s potential for greater efficiency is marred by the danger of accelerating the speed of violence when humans are reduced to data points.

There is a fear that, since AI supervisors are designed to enhance performance, they might push employees towards unethical behavior if it appears beneficial for meeting the supervisor’s predefined objectives (Köbis et al., 2021). Although previous research indicates that current algorithms do not intentionally discriminate but instead reproduce biases present in the data they were trained on (Obermeyer et al., 2019; Lanz et al., 2024), the use of AI in place of a human supervisor may enable unethical actions by diffusing responsibility. In line with automation bias, decision-makers might feel less responsible for outcomes dictated by an algorithm perceived as more objective or efficient, thereby reducing their moral engagement (Köbis et al., 2021). Returning to the example of Amazon’s recruiting tool discriminating against women in hiring, retention, and promotion decisions, human HR employees often complied with these directives (Dastin, 2018; Lanz et al., 2024). In the Toeslagenaffaire, Dutch tax authorities were quick to penalize families over a period of six years on the mere suspicion of fraud based on the system’s risk indicators (Amnesty International, 2021; Heikkilä, 2022).

Human in the Loop: An inadequate solution

A popular solution against suboptimal AI implementation is the “human-in-the-loop” approach. The idea is that human oversight can correct or mitigate errors in automated decision-making. However, this solution falls short when AI systems are designed precisely to address manpower shortages or reduce efficiency bottlenecks. In such cases, the very constraints that necessitate AI implementation—such as limited time, personnel, and resources—make it impractical to rely on extensive human review for every decision (Franks, 2022; Streitfeld, 2024). For example, the Sensing Project, which aimed to counter crimes like shoplifting in the southeastern Dutch city of Roermond, was forced to end because the police did not have enough capacity to follow up on the data the project generated (Reuters, 2021). An investigation by Jerusalem-based journalists in +972 Magazine and Local Call revealed that, after the October 7th, 2023 terrorist attacks, the Israeli military faced immense pressure to generate and act upon a high volume of targets in Gaza. The Lavender system was given a large margin of error, and human personnel were instructed to authorize bombings with minimal review, sometimes spending only around 20 seconds to verify that a target was male, despite knowing that Lavender made errors in about 10 percent of cases. This was reported as one of the reasons why so many individuals, including Palestinian civilians who were not involved in militant activities, were targeted (Abraham, 2024; Elidrissi, 2024; Pascual, 2024; Pratt, 2024). While a human in the loop can offer an additional layer of oversight, it alone cannot resolve the deeper institutional and systemic biases that may affect the human reviewers themselves.

“In entrusting social and organizational databases to AI systems, we need to remember that every datapoint is a family too.”

Algorithmic aversion: A respite due to novelty?

Thankfully, people are often less forgiving of AI failures. Previous studies show that, due to a psychological mechanism called algorithmic aversion, participants react more strongly to AI inconsistencies, as these failures violate the perceived perfection created by automation bias. When AI systems demonstrate decreasing reliability, trust in them diminishes significantly faster than trust in human advisers (Glikson & Woolley, 2020; Jones-Jang & Park, 2023). Studies suggest these trends are driven by the novelty of current AI technologies (Ulfert-Blank et al., 2023). American cities such as Santa Cruz and New Orleans are already banning AI policing systems amid accusations that they reinforce racist policing patterns, and the European Union is positioned to take the lead in implementing a risk-based model for regulating AI (Reuters, 2021; 2024). As future generations grow up with AI, however, they may become less cautious of its limitations. The lack of diversity in current AI regulatory board compositions (e.g., Meta’s AI advisory council is composed entirely of White men) reflects existing societal biases that are set to be perpetuated in AI governance, further entrenching systemic bias in the future (Vlasceanu & Amodio, 2022; Duffy, 2024).

As countries race to increase investment in AI as a competitive advantage in the civil and military sectors (Fiesler, 2023; Informatie Rijksoverheid, 2023), governments should also invest in academics and in watchdog and research organizations such as Bits of Freedom, an independent Dutch digital rights foundation focusing on privacy and communications freedom; the Distributed AI Research Institute (DAIR), founded by the former co-lead of Google’s Ethical AI team; and the Intimacies of Remote Warfare (IRW) programme at Utrecht University (Bits of Freedom, 2016; Walsh, 2022; Intimacies of Remote Warfare | Universiteit Utrecht, 2024). These organizations should be supported in their crucial role of educating the public and government sectors about overzealous AI development.

While AI promises to transform various sectors, the rush to deploy it in high-stakes roles can lead to dehumanizing outcomes. Responsible AI development involves not only checking for biases and accuracy, but also understanding which applications are appropriate and beneficial for the humans interacting with these systems (Fiesler, 2024). In entrusting social and organizational databases to AI systems, we need to remember that every datapoint is a family too. Casey Fiesler, an Associate Professor of Information Science at the University of Colorado Boulder, highlights a quote from The Conversation: “Automation bias cedes moral authority to the dispassionate interface of statistical processing.” (Schwarz, 2024). We must remain vigilant to prevent automation bias from eroding our moral responsibility and authority as we enter a future increasingly dominated by AI.


References


Author Zhen Cong

Zhen Cong (2000) is a third-year Clinical Developmental Psychology student. He is passionate about developmental, military, and gaming psychology. A lover of sci-fi stories, he enjoys creative scientific communication through storytelling.
